1
Ren J, Zhang Z, Zhao S, Fan J, Zhao Z, Zhao Y, Hong R, Wang M. When low-light meets flares: Towards Synchronous Flare Removal and Brightness Enhancement. Neural Netw 2025; 185:107149. [PMID: 39855004] [DOI: 10.1016/j.neunet.2025.107149] [Received: 06/13/2024] [Revised: 12/26/2024] [Accepted: 01/10/2025] [Indexed: 01/27/2025]
Abstract
Low-light image enhancement (LLIE) aims to improve the visibility and illumination of low-light images. However, real-world low-light images are usually accompanied by flares caused by light sources, which make it difficult to discern the content of dark images. Current LLIE and nighttime flare removal methods face challenges in handling these flared low-light images effectively: (1) flares in dark images disturb the content of images and cause uneven lighting, potentially resulting in overexposure or chromatic aberration; (2) the slight noise in low-light images may be amplified during enhancement, leading to speckle noise and blur in the enhanced images; (3) nighttime flare removal methods usually ignore the detailed information in dark regions, which may cause inaccurate representation. To tackle these challenging yet meaningful problems, we propose a novel image enhancement task called Flared Low-Light Image Enhancement (FLLIE). We first synthesize several flared low-light datasets as training/inference data, based on which we develop a novel Fourier transform-based deep FLLIE network termed Synchronous Flare Removal and Brightness Enhancement (SFRBE). Specifically, a Residual Directional Fourier Block (RDFB) is introduced that learns in the frequency domain to extract accurate global information and capture detailed features from multiple directions. Extensive experiments on three flared low-light datasets and real flared low-light images demonstrate the effectiveness of SFRBE for FLLIE.
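The abstract does not give the RDFB's internals, but the core intuition behind frequency-domain learning can be sketched in a toy example (the function name and fixed gain are hypothetical, not from the paper): the Fourier amplitude largely encodes global lightness while the phase encodes structure, so scaling the amplitude adjusts illumination globally.

```python
import numpy as np

def amplitude_boost(img, gain=2.0):
    """Toy frequency-domain brightening: scale the Fourier amplitude
    while keeping the phase (which mostly encodes structure) fixed."""
    f = np.fft.fft2(img)
    amp, phase = np.abs(f), np.angle(f)
    boosted = amp * gain  # global lightness lives largely in the amplitude
    out = np.real(np.fft.ifft2(boosted * np.exp(1j * phase)))
    return np.clip(out, 0.0, 1.0)

dark = np.full((8, 8), 0.2)   # uniform dark "image"
bright = amplitude_boost(dark, gain=2.0)
```

A learned network would predict per-frequency adjustments rather than one scalar gain, but the decomposition into amplitude and phase is the same.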
Affiliation(s)
- Jiahuan Ren
- Huaiyin Institute of Technology, Huaian, 223003, China; Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China.
- Zhao Zhang
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
- Suiyi Zhao
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
- Jicong Fan
- Chinese University of Hong Kong, Shenzhen, Shenzhen, 518172, China
- Zhongqiu Zhao
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
- Yang Zhao
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
- Richang Hong
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
- Meng Wang
- Hefei University of Technology, Hefei, 230601, China; The Key Laboratory of Knowledge Engineering with Big Data, Ministry of Education, Hefei, 230601, China
2
Wu Z, Zou Y, Liu B, Li Z, Ji D, Zhang H. Transferring enhanced material knowledge via image quality enhancement and feature distillation for pavement condition identification. Sci Rep 2025; 15:13668. [PMID: 40258955] [PMCID: PMC12012040] [DOI: 10.1038/s41598-025-98484-0] [Received: 11/19/2024] [Accepted: 04/11/2025] [Indexed: 04/23/2025]
Abstract
In the context of rapid advancements in autonomous driving technology, ensuring passengers' safety and comfort has become a priority. Obstacle and road detection systems, and especially accurate pavement condition identification in unfavorable weather or at night, play a crucial role in the safe operation and comfortable riding experience of autonomous vehicles. To this end, we propose a novel framework based on image quality enhancement and feature distillation (IQEFD) for classifying diverse pavement conditions during the day and night. The IQEFD model first leverages ConvNeXt as its backbone to extract high-quality basic features. Then, a bidirectional fusion module embedded with a hybrid attention mechanism (HAM) is devised to effectively extract multi-scale refined features, mitigating information loss during repeated upsampling and downsampling. Subsequently, the refined features are fused with the enhanced features extracted by the image enhancement network Zero-DCE to generate fused attention features. Lastly, the enhanced features serve as online guidance for the fused attention features through feature distillation, transferring enhanced material knowledge and aligning the feature representations. Extensive experimental results on two publicly available datasets validate that IQEFD can accurately classify a variety of pavement conditions, including dry, wet, and snowy, and shows particularly satisfactory and robust performance on noisy nighttime images. In detail, the IQEFD model achieves accuracies of 98.04% and 98.68% on the YouTube-w-ALI and YouTube-w/o-ALI datasets, respectively, outperforming state-of-the-art baselines. IQEFD also shows a degree of generalization on the classical material image dataset MattrSet, with an average accuracy of 75.86%. This study provides a novel insight into pavement condition identification.
The source code of IQEFD will be made available at https://github.com/rainzyx/IQEFD.
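The feature-distillation step described above amounts to pulling the student's fused attention features toward the teacher's enhanced features. A minimal sketch with a plain MSE objective (the actual IQEFD loss form and feature shapes are not specified in the abstract, so the names and shapes here are illustrative):

```python
import numpy as np

def distill_loss(student_feat, teacher_feat):
    """Feature-distillation objective: mean-squared error that pulls the
    fused attention features toward the (fixed) enhanced features."""
    return float(np.mean((student_feat - teacher_feat) ** 2))

teacher = np.ones((4, 4))   # stand-in enhanced (guidance) features
aligned = np.ones((4, 4))   # student features already aligned
off = np.zeros((4, 4))      # student features far from the teacher
```

In training, this term would be minimized alongside the classification loss, so the student gradually inherits the "enhanced material knowledge".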
Affiliation(s)
- Zejiu Wu
- School of Science, East China Jiaotong University, Nanchang, 330013, China.
- Yuxing Zou
- School of Information and Software Engineering, East China Jiaotong University, Nanchang, 330013, China
- Boyang Liu
- School of Information and Software Engineering, East China Jiaotong University, Nanchang, 330013, China
- Zhijie Li
- School of Information and Software Engineering, East China Jiaotong University, Nanchang, 330013, China
- Donghong Ji
- Cyber Science and Engineering School, Wuhan University, Wuhan, 430072, China
- Hongbin Zhang
- School of Information and Software Engineering, East China Jiaotong University, Nanchang, 330013, China
3
He M, Wang R, Zhang M, Lv F, Wang Y, Zhou F, Bian X. SwinLightGAN a study of low-light image enhancement algorithms using depth residuals and transformer techniques. Sci Rep 2025; 15:12151. [PMID: 40204793] [PMCID: PMC11982214] [DOI: 10.1038/s41598-025-95329-8] [Received: 12/06/2024] [Accepted: 03/20/2025] [Indexed: 04/11/2025]
Abstract
Contemporary algorithms for enhancing images in low-light conditions prioritize improving brightness and contrast but often neglect image details. This study introduces the Swin Transformer-based Light-enhancing Generative Adversarial Network (SwinLightGAN), a novel generative adversarial network (GAN) that effectively enhances image details under low-light conditions. The network integrates a generator based on a Residual Jumping U-shaped Network (U-Net) architecture for precise local detail extraction with an illumination network built on the Shifted Window Transformer (Swin Transformer), which captures multi-scale spatial features and global context. This combination produces high-quality images that resemble those taken under normal lighting while retaining intricate details. Through adversarial training that employs discriminators operating at multiple scales and a blend of loss functions, SwinLightGAN cleanly separates generated from authentic images, ensuring superior enhancement quality. Extensive experiments on multiple unpaired datasets demonstrate SwinLightGAN's outstanding performance: it achieves Naturalness Image Quality Evaluator (NIQE) scores from 5.193 to 5.397, Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE) scores from 28.879 to 32.040, and Patch-based Image Quality Evaluator (PIQE) scores from 38.280 to 44.479, highlighting its efficacy across diverse metrics.
Affiliation(s)
- Min He
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Rugang Wang
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Mingyang Zhang
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Feiyang Lv
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Yuanyuan Wang
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Feng Zhou
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
- Xuesheng Bian
- School of Information Engineering, Yancheng Institute of Technology, Yancheng, 224051, China
4
Wu W, Weng J, Zhang P, Wang X, Yang W, Jiang J. Interpretable Optimization-Inspired Unfolding Network for Low-Light Image Enhancement. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2025; 47:2545-2562. [PMID: 40030787] [DOI: 10.1109/tpami.2024.3524538] [Indexed: 03/05/2025]
Abstract
Retinex model-based methods have proven effective for layer-wise manipulation with well-designed priors in low-light image enhancement (LLIE). However, the hand-crafted priors and the conventional optimization algorithms adopted to solve the layer decomposition problem lack adaptivity and efficiency. To this end, this paper proposes a Retinex-based deep unfolding network (URetinex-Net++), which unfolds an optimization problem into a learnable network that decomposes a low-light image into reflectance and illumination layers. By formulating the decomposition problem as an implicit-priors-regularized model, three learning-based modules are carefully designed, responsible for data-dependent initialization, highly efficient unfolding optimization, and flexible component adjustment, respectively. In particular, the proposed unfolding optimization module, which introduces two networks to adaptively fit implicit priors in a data-driven manner, realizes noise suppression and detail preservation for the decomposed components. URetinex-Net++ is an augmented version of URetinex-Net that introduces a cross-stage fusion block to alleviate the color defect of URetinex-Net, boosting LLIE performance in both visual quality and quantitative metrics while introducing only a few extra parameters and little extra runtime. Extensive experiments on real-world low-light images qualitatively and quantitatively demonstrate the effectiveness and superiority of URetinex-Net++ over state-of-the-art methods.
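As a reminder of the Retinex model that such networks unfold: an observed image is the pixel-wise product of reflectance (scene content) and illumination (lighting), and brightening the illumination layer enhances the image without touching the content. A minimal sketch (the values and the simple gamma adjustment are illustrative stand-ins for the paper's learned decomposition and adjustment):

```python
import numpy as np

# Retinex model: observed image I = reflectance R (content) * illumination L.
R = np.array([[0.9, 0.5], [0.3, 0.8]])   # hypothetical reflectance layer
L = np.full((2, 2), 0.2)                 # dim, spatially uniform illumination
I = R * L                                # the observed low-light image

gamma = 0.4                              # gamma < 1 brightens the illumination layer
enhanced = R * (L ** gamma)              # content untouched, lighting lifted
```

The decomposition itself is ill-posed, which is why priors (hand-crafted or, here, learned) are needed to recover R and L from I alone.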
5
Zhang Z, Zhao S, Jin X, Xu M, Yang Y, Yan S, Wang M. Noise Self-Regression: A New Learning Paradigm to Enhance Low-Light Images Without Task-Related Data. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2025; 47:1073-1088. [PMID: 39466857] [DOI: 10.1109/tpami.2024.3487361] [Indexed: 10/30/2024]
Abstract
Deep learning-based low-light image enhancement (LLIE) leverages deep neural networks to enhance image illumination while keeping the image content unchanged. From the perspective of training data, existing methods complete the LLIE task driven by one of three data types: paired data, unpaired data, and zero-reference data. Each type has its own advantages; e.g., zero-reference methods place very low requirements on training data and can meet human needs in many scenarios. In this paper, we leverage pure Gaussian noise to complete the LLIE task, which further reduces the requirements for training data and offers another practical alternative. Specifically, we propose Noise SElf-Regression (NoiSER), which, without access to any task-related data, simply trains a convolutional neural network equipped with an instance-normalization layer by taking a random noise image, drawn independently for each pixel, as both input and output of each training pair; the low-light image is then fed to the trained network to predict the normal-light image. Technically, an intuitive explanation of its effectiveness is as follows: (1) the self-regression reconstructs the contrast between adjacent pixels of the input image; (2) the instance-normalization layer naturally remediates the overall magnitude/lighting of the input image; and (3) the per-pixel independence assumption enforces the output image to follow the well-known gray-world hypothesis (Buchsbaum, 1980) when the image size is big enough. Compared with current state-of-the-art LLIE methods that access various task-related data, NoiSER is highly competitive in enhancement quality, yet with a much smaller model size and much lower training and inference cost. The experiments also demonstrate that NoiSER has great potential in overexposure suppression and joint processing with other restoration tasks.
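The gray-world hypothesis invoked in point (3) is easy to check numerically. A sketch with uniform random pixels standing in for a "big enough" image (the distribution and size here are illustrative choices, not the paper's setup):

```python
import numpy as np

# Gray-world hypothesis (Buchsbaum, 1980): over a sufficiently large image,
# the mean of each color channel tends toward a common gray value.
rng = np.random.default_rng(0)
img = rng.uniform(0.0, 1.0, size=(256, 256, 3))  # stand-in large image
channel_means = img.mean(axis=(0, 1))            # per-channel averages

# With i.i.d. per-pixel values, all three channel means concentrate
# around the same expectation, so their spread shrinks with image size.
spread = float(channel_means.max() - channel_means.min())
```

NoiSER's i.i.d. noise targets exploit exactly this concentration: every channel of the network's output is pushed toward the same expected value.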
6
Han F, Chang K, Li G, Ling M, Huang M, Gao Z. Illumination-aware divide-and-conquer network for improperly-exposed image enhancement. Neural Netw 2024; 180:106733. [PMID: 39293177] [DOI: 10.1016/j.neunet.2024.106733] [Received: 05/07/2024] [Revised: 08/09/2024] [Accepted: 09/10/2024] [Indexed: 09/20/2024]
Abstract
Improperly-exposed images often have unsatisfactory visual characteristics like inadequate illumination, low contrast, and the loss of small structures and details. The mapping relationship from an improperly-exposed condition to a well-exposed one may vary significantly due to the presence of multiple exposure conditions. Consequently, the enhancement methods that do not pay specific attention to this issue tend to yield inconsistent results when applied to the same scene under different exposure conditions. In order to obtain consistent enhancement results for various exposures while restoring rich details, we propose an illumination-aware divide-and-conquer network (IDNet). Specifically, to address the challenge of directly learning a sophisticated nonlinear mapping from an improperly-exposed condition to a well-exposed one, we utilize the discrete wavelet transform (DWT) to decompose the image into the low-frequency (LF) component, which primarily captures brightness and contrast, and the high-frequency (HF) components that depict fine-scale structures. To mitigate the inconsistency in correction across various exposures, we extract a conditional feature from the input that represents illumination-related global information. This feature is then utilized to modulate the dynamic convolution weights, enabling precise correction of the LF component. Furthermore, as the co-located positions of LF and HF components are highly correlated, we create a mask to distill useful knowledge from the corrected LF component, and integrate it into the HF component to support the restoration of fine-scale details. Extensive experimental results demonstrate that the proposed IDNet is superior to several state-of-the-art enhancement methods on two datasets with multiple exposures.
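The DWT split that IDNet relies on can be sketched with a one-level Haar transform (a minimal hand-rolled version; the abstract does not state which wavelet the paper uses, so Haar is an assumption for illustration):

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: returns the low-frequency approximation
    (brightness/contrast) and three high-frequency detail bands."""
    a = (x[0::2, :] + x[1::2, :]) / 2.0   # vertical average
    d = (x[0::2, :] - x[1::2, :]) / 2.0   # vertical difference
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0  # LF: captures illumination
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0  # HF bands: fine-scale structure
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, (lh, hl, hh)

img = np.arange(16.0).reshape(4, 4) / 15.0  # toy linear-ramp "image"
ll, (lh, hl, hh) = haar_dwt2(img)
```

Correcting exposure on `ll` while restoring details in the HF bands is the divide-and-conquer idea: the LL band preserves the image's mean brightness, so illumination correction can operate there alone.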
Affiliation(s)
- Fenggang Han
- School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.
- Kan Chang
- School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.
- Guiqing Li
- School of Computer Science and Engineering, South China University of Technology, Guangzhou 510006, China.
- Mingyang Ling
- School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.
- Mengyuan Huang
- School of Computer and Electronic Information, Guangxi University, Nanning 530004, China.
- Zan Gao
- School of Computer Science and Engineering, Tianjin University of Technology, Tianjin 300384, China.
7
Wu T, Wu W, Yang Y, Fan FL, Zeng T. Retinex Image Enhancement Based on Sequential Decomposition With a Plug-and-Play Framework. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2024; 35:14559-14572. [PMID: 37279121] [DOI: 10.1109/tnnls.2023.3280037] [Indexed: 06/08/2023]
Abstract
The Retinex model is one of the most representative and effective methods for low-light image enhancement. However, it does not explicitly tackle the noise problem and yields unsatisfactory enhancement results. In recent years, owing to their excellent performance, deep learning models have been widely used in low-light image enhancement. These methods have two limitations. First, deep learning achieves desirable performance only when a large amount of labeled data is available, and it is not easy to curate massive low-/normal-light paired data. Second, deep learning is notoriously a black-box model; it is difficult to explain its inner working mechanism and understand its behavior. In this article, using a sequential Retinex decomposition strategy, we design a plug-and-play framework based on Retinex theory for simultaneous image enhancement and noise removal. Meanwhile, we develop a convolutional neural network-based (CNN-based) denoiser within the proposed plug-and-play framework to generate the reflectance component. The final image is enhanced by integrating the illumination and reflectance with gamma correction. The proposed plug-and-play framework facilitates both post hoc and ad hoc interpretability. Extensive experiments on different datasets demonstrate that our framework outperforms state-of-the-art methods in both image enhancement and denoising.
8
Mou E, Wang H, Chen X, Li Z, Cao E, Chen Y, Huang Z, Pang Y. Retinex theory-based nonlinear luminance enhancement and denoising for low-light endoscopic images. BMC Med Imaging 2024; 24:207. [PMID: 39123136] [PMCID: PMC11316405] [DOI: 10.1186/s12880-024-01386-2] [Received: 02/25/2024] [Accepted: 08/01/2024] [Indexed: 08/12/2024]
Abstract
BACKGROUND The quality of low-light endoscopic images affects the identification and judgement of tissue structures in medical disciplines such as physiology and anatomy. Due to the use of point light sources and the constraints of narrow physiological structures, medical endoscopic images display uneven brightness, low contrast, and a lack of texture information, presenting diagnostic challenges for physicians. METHODS In this paper, a nonlinear brightness enhancement and denoising network based on Retinex theory is designed to improve the brightness and details of low-light endoscopic images. The nonlinear luminance enhancement module uses higher-order curvilinear functions to improve overall brightness; the dual-attention denoising module captures detailed features of anatomical structures; and the color loss function mitigates color distortion. RESULTS Experimental results on the Endo4IE dataset demonstrate that the proposed method outperforms existing state-of-the-art methods in terms of Peak Signal-to-Noise Ratio (PSNR, 27.2202), Structural Similarity (SSIM, 0.8342), and Learned Perceptual Image Patch Similarity (LPIPS, 0.1492). CONCLUSIONS The method efficiently enhances images captured by endoscopes and offers valuable insights into intricate human physiological structures, which can effectively assist clinical diagnosis and treatment.
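Higher-order curvilinear enhancement of this kind is typically built by iterating a quadratic light-enhancement curve, as popularized by Zero-DCE. A minimal sketch (in practice the coefficients are learned per pixel; a constant alpha is used here purely for illustration):

```python
import numpy as np

def curve_enhance(x, alphas):
    """Iterated quadratic enhancement curve LE(x) = x + a*x*(1-x).
    Each pass keeps values in [0, 1] and is monotonic for |a| <= 1,
    so stacking passes yields a higher-order brightening curve."""
    for a in alphas:
        x = x + a * x * (1.0 - x)
    return x

dark = np.array([0.05, 0.2, 0.5, 0.9])
bright = curve_enhance(dark, alphas=[0.8, 0.8, 0.8])
```

The curve lifts dark values strongly while barely moving values near 1, which is why it brightens shadows without blowing out highlights.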
Affiliation(s)
- En Mou
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, 646000, China
- Huiqian Wang
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Xiaodong Chen
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, 646000, China
- Zhangyong Li
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Enling Cao
- School of Software Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- Yuanyuan Chen
- School of Communications and Information Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
- College of Physics and Telecommunication Engineering, Zhoukou Normal University, Zhoukou, 466001, China
- Zhiwei Huang
- School of Medical Information and Engineering, Southwest Medical University, Luzhou, 646000, China
- Yu Pang
- School of Optoelectronic Engineering, Chongqing University of Posts and Telecommunications, Chongqing, 400065, China
9
Wang X, Huang L, Li M, Han C, Liu X, Nie T. Fast, Zero-Reference Low-Light Image Enhancement with Camera Response Model. SENSORS (BASEL, SWITZERLAND) 2024; 24:5019. [PMID: 39124066] [PMCID: PMC11314879] [DOI: 10.3390/s24155019] [Received: 07/05/2024] [Revised: 07/26/2024] [Accepted: 08/01/2024] [Indexed: 08/12/2024]
Abstract
Low-light images are prevalent in intelligent monitoring and many other applications, and their low brightness hinders further processing. Although low-light image enhancement can reduce the influence of such problems, current methods often involve complex network structures or many iterations, which limits their efficiency. This paper proposes a Zero-Reference Camera Response Network that uses a camera response model to enhance arbitrary low-light images efficiently. A double-layer parameter-generating network with a streamlined structure extracts the exposure ratio K from the radiation map, which is obtained by inverting the input through a camera response function. K is then used as the parameter of a brightness transformation function applied once to the low-light image to realize enhancement. In addition, a contrast-preserving brightness loss and an edge-preserving smoothness loss are designed that require no references from the dataset; both retain key information from the inputs to improve precision. The simplified enhancement runs at more than twice the speed of similar methods. Extensive experiments on several LLIE datasets and the DARK FACE face detection dataset fully demonstrate our method's advantages, both subjectively and objectively.
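The abstract does not spell out the brightness transformation function. One widely used camera-response-model BTF has the closed form g(P, k) = e^{b(1-k^a)} P^{k^a} with parameters fitted by Ying et al.; whether this exact form and these constants are what the paper uses is an assumption, but it illustrates how a single known exposure ratio K yields a one-shot enhancement:

```python
import numpy as np

def btf(p, k, a=-0.3293, b=1.1258):
    """Brightness transformation function g(P, k) = e^{b(1-k^a)} * P^{k^a}
    from a commonly used camera response model; k is the exposure ratio.
    k = 1 is the identity; k > 1 simulates a longer exposure."""
    g = np.power(k, a)
    return np.exp(b * (1.0 - g)) * np.power(p, g)

low = np.array([0.05, 0.1, 0.3])           # normalized low-light pixel values
enhanced = np.clip(btf(low, k=5.0), 0.0, 1.0)
```

Because the whole enhancement is one closed-form transform, the network only has to predict K, which is what makes this family of methods fast.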
Affiliation(s)
- Xiaofeng Wang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Liang Huang
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Mingxuan Li
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Chengshan Han
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Xin Liu
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
- Ting Nie
- Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
10
Xie Z, Jia Z, Zhou G, Shi B. Research on the perception method of tiny objects in low-light and wide-field video. Sci Rep 2024; 14:17249. [PMID: 39060459] [PMCID: PMC11282217] [DOI: 10.1038/s41598-024-68129-9] [Received: 06/06/2023] [Accepted: 07/19/2024] [Indexed: 07/28/2024]
Abstract
At present, many trackers perform commendably in well-illuminated scenarios but overlook target tracking in low-light environments; as night falls, their accuracy drops dramatically. Challenges such as high image resolution, intricate backgrounds, uneven illumination, and the resemblance between targets and backgrounds make tracking small objects in low-light, wide-field Hawk-Eye surveillance videos exceedingly difficult for previous trackers. To address these challenges, this paper integrates an inter-frame difference constraint into a correlation filter (CF) tracker, generating a change-aware mask from inter-frame difference information. In addition, a dual regression model and an inter-frame difference constraint term are introduced to constrain each other during dual-filter learning. We construct a new benchmark comprising 41 night surveillance sequences captured by Hawk-Eye cameras and conduct exhaustive experiments on it. The results show that the proposed method maintains superior accuracy, surpasses state-of-the-art trackers on this dataset, and achieves real-time performance of 27 fps on a single CPU, substantially advancing tiny-object tracking in Hawk-Eye surveillance videos in low light and at night.
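The change-aware mask built from inter-frame differences can be sketched in a few lines (the threshold and mask form are illustrative assumptions; the paper integrates such a mask into correlation-filter learning rather than using it directly):

```python
import numpy as np

def change_mask(prev, curr, thresh=0.1):
    """Inter-frame difference: pixels whose absolute change exceeds a
    threshold are flagged as 'changed', giving a change-aware mask that
    can down-weight static background during filter learning."""
    return (np.abs(curr - prev) > thresh).astype(np.float32)

prev = np.zeros((5, 5))
curr = np.zeros((5, 5))
curr[2, 2] = 0.8          # a tiny moving target brightens one pixel
mask = change_mask(prev, curr)
```

For tiny targets against near-static backgrounds, such a mask concentrates the filter's attention on the few pixels that actually moved.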
Affiliation(s)
- Zhaodong Xie
- The Key Laboratory of Signal Detection and Processing, College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Maoming Polytechnic, Maoming, 525000, China
- Zhenhong Jia
- The Key Laboratory of Signal Detection and Processing, College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Gang Zhou
- The Key Laboratory of Signal Detection and Processing, College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
- Baoqiang Shi
- The Key Laboratory of Signal Detection and Processing, College of Information Science and Engineering, Xinjiang University, Urumqi, 830046, China
11
Jia Y, Yu W, Chen G, Zhao L. Nighttime road scene image enhancement based on cycle-consistent generative adversarial network. Sci Rep 2024; 14:14375. [PMID: 38909068] [PMCID: PMC11193765] [DOI: 10.1038/s41598-024-65270-3] [Received: 01/02/2024] [Accepted: 06/18/2024] [Indexed: 06/24/2024]
Abstract
During nighttime road scenes, images are often affected by contrast distortion, loss of detailed information, and a significant amount of noise. These factors can negatively impact the accuracy of segmentation and object detection in nighttime road scenes. A cycle-consistent generative adversarial network is proposed to address this issue and improve the quality of nighttime road scene images. The network includes two generative networks with identical structures and two adversarial networks with identical structures. Each generative network comprises an encoder and a corresponding decoder. A context feature extraction module is designed as the foundational element of the encoder-decoder network to capture more contextual semantic information with different receptive fields, and a receptive-field residual module is designed to enlarge the receptive field in the encoder. An illumination attention module inserted between the encoder and decoder transfers critical features extracted by the encoder to the decoder. The network also includes a multiscale discriminative network to better discriminate whether an image is a real high-quality image or a generated one. Additionally, an improved loss function is proposed to enhance the efficacy of image enhancement. Compared with state-of-the-art methods, the proposed approach achieves the highest performance in enhancing nighttime images, making them clearer and more natural.
Affiliation(s)
- Yanfei Jia
- College of Electrical and Information Engineering, Beihua University, Jilin, 132013, China
- Wenshuo Yu
- College of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China
- Guangda Chen
- College of Electrical and Information Engineering, Beihua University, Jilin, 132013, China
- Liquan Zhao
- College of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China
12
Li T. Restoration of UAV-Based Backlit Images for Geological Mapping of a High-Steep Slope. SENSORS (BASEL, SWITZERLAND) 2024; 24:1586. [PMID: 38475123] [DOI: 10.3390/s24051586] [Received: 01/23/2024] [Revised: 02/18/2024] [Accepted: 02/28/2024] [Indexed: 03/14/2024]
Abstract
Unmanned aerial vehicle (UAV)-based geological mapping is significant for understanding the geological structure of high-steep slopes, but the images obtained in these areas are inevitably influenced by the backlit effect because of the undulating terrain and the changing viewpoint of the camera mounted on the UAV. To address this concern, a novel backlit image restoration method is proposed that takes real-world application into account and addresses the color distortion in backlit images captured in high-steep slope scenes. The proposed method has two main steps: backlit removal, and color and detail enhancement. Backlit removal first eliminates the backlit effect using a Retinex strategy; the color and detail enhancement step then improves image color and sharpness. Extensive comparison experiments from multiple angles are designed, and the method is applied to different engineering applications. The experimental results show that the proposed method compares favorably with other mainstream methods in both qualitative visual effects and universal quantitative evaluation metrics. Backlit images processed by the proposed method are significantly improved for feature key-point matching, which is very conducive to the fine construction of 3D geological models of high-steep slopes.
Affiliation(s)
- Tengyue Li
- Key Laboratory of Geophysical Exploration Equipment Ministry of Education of China, Jilin University, 938 West Democracy Street, Changchun 130026, China
- College of Construction Engineering, Jilin University, 938 West Democracy Street, Changchun 130026, China
- Badong National Observation and Research Station of Geohazards, China University of Geosciences, Wuhan 430074, China
13
Tian Z, Qu P, Li J, Sun Y, Li G, Liang Z, Zhang W. A Survey of Deep Learning-Based Low-Light Image Enhancement. Sensors (Basel) 2023; 23:7763. [PMID: 37765817] [PMCID: PMC10535564] [DOI: 10.3390/s23187763] [Received: 08/10/2023] [Revised: 08/29/2023] [Accepted: 09/02/2023] [Indexed: 09/29/2023]
Abstract
Images captured under poor lighting conditions often suffer from low brightness, low contrast, color distortion, and noise. The purpose of low-light image enhancement is to improve the visual quality of such images for subsequent processing. With the development of artificial intelligence, deep learning has become increasingly widespread in image processing, and we provide a comprehensive review of the field of low-light image enhancement in terms of network structure, training data, and evaluation metrics. In this paper, we systematically introduce deep learning-based low-light image enhancement in four aspects: we first introduce the related deep learning-based enhancement methods, then describe low-light image quality evaluation methods, organize the available low-light image datasets, and finally compare and analyze the advantages and disadvantages of the related methods and give an outlook on future development directions.
Affiliation(s)
- Zhen Tian
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
- Peixin Qu
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
- Jielin Li
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
- Yukun Sun
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
- Guohou Li
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
- Zheng Liang
- School of Internet, Anhui University, Hefei 230039, China
- Weidong Zhang
- School of Information Engineering, Henan Institute of Science and Technology, Xinxiang 453003, China
- Institute of Computer Applications, Henan Institute of Science and Technology, Xinxiang 453003, China
14
Zheng S, Huang X, Chen J, Lyu Z, Zheng J, Huang J, Gao H, Liu S, Sun L. UR-Net: An Integrated ResUNet and Attention Based Image Enhancement and Classification Network for Stain-Free White Blood Cells. Sensors (Basel) 2023; 23:7605. [PMID: 37688058] [PMCID: PMC10490639] [DOI: 10.3390/s23177605] [Received: 07/17/2023] [Revised: 08/08/2023] [Accepted: 08/29/2023] [Indexed: 09/10/2023]
Abstract
The differential count of white blood cells (WBCs) can effectively provide disease information for patients. Existing stained microscopic WBC classification usually requires complex sample-preparation steps and is easily affected by external conditions such as illumination. Meanwhile, the inconspicuous nuclei of stain-free WBCs pose great challenges for classification. As such, image enhancement, as a preprocessing step for image classification, is essential for improving the image quality of stain-free WBCs. However, traditional and existing convolutional neural network (CNN)-based image enhancement techniques are typically designed as standalone modules aimed at improving perceptual quality for humans, without considering their impact on downstream computer vision tasks such as classification. Therefore, this work proposes a novel model, UR-Net, which consists of an image enhancement network framed by ResUNet with an attention mechanism and a ResNet classification network. The enhancement model is integrated into the classification model for joint training to improve classification performance on stain-free WBCs. The experimental results demonstrate that, compared to models without image enhancement and to previous enhancement-and-classification models, our proposed model achieved the best classification performance, 83.34%, on our stain-free WBC dataset.
Affiliation(s)
- Sikai Zheng
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Xiwei Huang
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Jin Chen
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Zefei Lyu
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Jingwen Zheng
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Jiye Huang
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Haijun Gao
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
- Shan Liu
- Sichuan Provincial Key Laboratory for Human Disease Gene Study, Sichuan Academy of Medical Sciences & Sichuan Provincial People’s Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China
- Lingling Sun
- Ministry of Education Key Laboratory of RF Circuits and Systems, Hangzhou Dianzi University, Hangzhou 310018, China
15
Yu W, Zhao L, Zhong T. Unsupervised Low-Light Image Enhancement Based on Generative Adversarial Network. Entropy (Basel) 2023; 25:932. [PMID: 37372276] [DOI: 10.3390/e25060932] [Received: 04/27/2023] [Revised: 06/07/2023] [Accepted: 06/12/2023] [Indexed: 06/29/2023]
Abstract
Low-light image enhancement aims to improve the perceptual quality of images captured under low-light conditions. This paper proposes a novel generative adversarial network to enhance low-light image quality. Firstly, it designs a generator consisting of residual modules with hybrid attention modules and parallel dilated convolution modules. The residual module is designed to prevent gradient explosion during training and to avoid feature information loss. The hybrid attention module is designed to make the network pay more attention to useful features. A parallel dilated convolution module is designed to increase the receptive field and capture multi-scale information. Additionally, a skip connection is utilized to fuse shallow features with deep features to extract more effective features. Secondly, a discriminator is designed to improve the discrimination ability. Finally, an improved loss function is proposed by incorporating pixel loss to effectively recover detailed information. The proposed method demonstrates superior performance in enhancing low-light images compared to seven other methods.
Affiliation(s)
- Wenshuo Yu
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin 132012, China
- Liquan Zhao
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin 132012, China
- Tie Zhong
- Key Laboratory of Modern Power System Simulation and Control & Renewable Energy Technology, Ministry of Education, Northeast Electric Power University, Jilin 132012, China
16
Lang YZ, Wang YL, Qian YS, Kong XY, Cao Y. Effective method for low-light image enhancement based on the JND and OCTM models. Opt Express 2023; 31:14008-14026. [PMID: 37157274] [DOI: 10.1364/oe.485672] [Indexed: 05/10/2023]
Abstract
Low-light images typically suffer from dim overall brightness, low contrast, and low dynamic range, resulting in degraded image quality. In this paper, we propose an effective method for low-light image enhancement based on the just-noticeable-difference (JND) and optimal contrast-tone mapping (OCTM) models. First, a guided filter decomposes the original image into base and detail images. After this filtering, the detail images are processed with a visual masking model to enhance details effectively, while the brightness of the base image is adjusted using the JND and OCTM models. Finally, we propose a new method that generates a sequence of artificial images to adjust the output brightness, which preserves image detail better than other single-input algorithms. Experiments demonstrate that the proposed method not only achieves low-light image enhancement but also outperforms state-of-the-art methods qualitatively and quantitatively.
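The base/detail split this abstract starts from is a standard guided-filter decomposition. A minimal NumPy-only sketch follows, using a self-guided filter; the gamma and detail-gain values are illustrative placeholders, not the paper's JND/OCTM machinery:

```python
import numpy as np

def box_filter(img, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image (edge-padded)."""
    pad = np.pad(img, r, mode="edge")
    c = np.cumsum(np.cumsum(pad, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))            # zero row/col simplifies differences
    h, w = img.shape
    win = 2 * r + 1
    return (c[win:win + h, win:win + w] - c[win:win + h, :w]
            - c[:h, win:win + w] + c[:h, :w]) / win ** 2

def guided_filter(img, r=8, eps=1e-3):
    """Edge-preserving smoothing with the image as its own guide."""
    mean = box_filter(img, r)
    var = box_filter(img * img, r) - mean ** 2
    a = var / (var + eps)                      # ~1 at edges, ~0 in flat areas
    b = (1.0 - a) * mean
    return box_filter(a, r) * img + box_filter(b, r)

def enhance(v, gamma=0.6, gain=1.5):
    """Base/detail decomposition, then brighten the base and boost the detail.
    gamma and gain stand in for the paper's JND/OCTM brightness adjustment."""
    base = guided_filter(v)
    detail = v - base
    return np.clip(np.clip(base, 0.0, 1.0) ** gamma + gain * detail, 0.0, 1.0)
```

Because the detail layer is added back after the nonlinear brightness mapping, edges and texture survive the tone adjustment, which is the point of splitting first.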
17
Leng H, Fang B, Zhou M, Wu B, Mao Q. Low-Light Image Enhancement with Contrast Increase and Illumination Smooth. Int J Pattern Recognit Artif Intell 2023; 37. [DOI: 10.1142/s0218001423540034] [Indexed: 09/01/2023]
Abstract
In image enhancement, maintaining texture while attenuating noise remains a challenge. To address these problems, we propose a low-light image enhancement method with contrast increase and illumination smoothing. First, we compute the per-pixel maximum and minimum maps of the RGB channels, set the maximum map as the initial illumination estimate, and introduce the minimum map to smooth the illumination. Second, we use the histogram-equalized version of the input image to construct a weight for the illumination map. Third, we formulate an optimization problem to obtain the smoothed illumination and refined reflectance. Experimental results show that our method achieves better performance than state-of-the-art methods.
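The first step above — max/min channel maps and the Retinex-style reflectance they imply — can be sketched as follows; the smoothing itself is the paper's optimization step and is not reproduced here:

```python
import numpy as np

def channel_maps(rgb):
    """Per-pixel maximum and minimum over the R, G, B channels.
    The max map is the usual initial illumination estimate; the paper
    additionally uses the min map to smooth it."""
    return rgb.max(axis=2), rgb.min(axis=2)

def initial_reflectance(rgb, eps=1e-6):
    """Retinex-style reflectance under the max-channel illumination estimate."""
    illum = rgb.max(axis=2)
    return rgb / (illum[..., None] + eps)      # bounded by ~1 by construction
```

Dividing by the max channel guarantees the reflectance stays in [0, 1], which is why the max map is the conventional starting point for Retinex-style illumination estimation.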
Affiliation(s)
- Hongyue Leng
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Fang
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Mingliang Zhou
- College of Computer Science, Chongqing University, Chongqing 400044, P. R. China
- Bin Wu
- Aerospace Science and Technology Industry, Microelectronics System Institute Co., Ltd., No. 269, North Section of Hupan Road, Chengdu, Sichuan 610213, P. R. China
- Qin Mao
- School of Computer and Information, Qiannan Normal College for Nationalities, Doupengshan Road, Duyun, Guizhou 558000, P. R. China
- Key Laboratory of Complex Systems and Intelligent Optimization of Guizhou Province, Duyun, Guizhou 558000, P. R. China
18
Guo J, Ma J, García-Fernández ÁF, Zhang Y, Liang H. A survey on image enhancement for Low-light images. Heliyon 2023; 9:e14558. [PMID: 37025779] [PMCID: PMC10070385] [DOI: 10.1016/j.heliyon.2023.e14558] [Received: 11/20/2022] [Revised: 01/22/2023] [Accepted: 03/09/2023] [Indexed: 03/17/2023]
Abstract
In real scenes, due to low light and unsuitable viewpoints, images often exhibit various degradations such as low contrast, color distortion, and noise. These degradations affect not only visual appearance but also computer vision tasks. This paper focuses on the combination of traditional algorithms and machine learning algorithms in the field of image enhancement. The traditional methods, including their principles and improvements, are introduced in three categories: gray-level transformation, histogram equalization, and Retinex methods. Machine learning-based algorithms are divided into end-to-end learning and unpaired learning, and are further grouped into decomposition-based and fusion-based learning according to the image processing strategies applied. Finally, the surveyed methods are comprehensively compared using multiple image quality assessment measures, including mean square error, the natural image quality evaluator, structural similarity, and peak signal-to-noise ratio.
Affiliation(s)
- Jiawei Guo
- Department of Computer Science, University of Liverpool, Liverpool, UK
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Jieming Ma
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Corresponding author.
- Ángel F. García-Fernández
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK
- ARIES research center, Universidad Antonio de Nebrija, Madrid, Spain
- Yungang Zhang
- School of Information Science, Yunnan Normal University, Kunming, China
- Haining Liang
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
19
Chen L, Liu Y, Li G, Hong J, Li J, Peng J. Double-function enhancement algorithm for low-illumination images based on retinex theory. J Opt Soc Am A 2023; 40:316-325. [PMID: 36821201] [DOI: 10.1364/josaa.472785] [Received: 08/09/2022] [Accepted: 12/09/2022] [Indexed: 06/18/2023]
Abstract
To address the noise amplification and over-enhancement caused by low contrast and uneven illumination during low-illumination image enhancement, a high-quality image enhancement algorithm is proposed in this paper. First, the total-variation model is used to obtain smoothed V- and S-channel images, and an adaptive gamma transform enhances the details of the smoothed V-channel image. Then, improved multi-scale Retinex algorithms based on the logarithmic function and on the hyperbolic tangent function, respectively, are used to obtain two different V-channel enhanced images, which are fused according to the local intensity amplitude of the images. Finally, a three-dimensional gamma function corrects the fused image, after which the image saturation is adjusted. The lightness-order-error (LOE) metric is used to measure the naturalness of the enhanced image. Experimental results show that, compared with classical algorithms, the LOE value of the proposed algorithm is reduced by up to 79.95%; compared with other state-of-the-art algorithms, by up to 53.43%; and compared with some deep learning-based algorithms, by up to 52.13%. The proposed algorithm effectively reduces image noise, retains image details, avoids over-enhancement, and achieves a better visual effect while ensuring the enhancement effect.
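The log-based multi-scale Retinex this abstract builds on averages log-ratios of the image against Gaussian-blurred copies of itself at several scales. A NumPy-only sketch, where the scales are conventional choices (not the paper's settings) and the tanh variant is omitted:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur with edge normalization (no SciPy needed).
    The kernel radius is capped so the kernel never exceeds the image."""
    r = max(1, min(int(3 * sigma), (min(img.shape) - 1) // 2))
    x = np.arange(-r, r + 1)
    k = np.exp(-(x / sigma) ** 2 / 2)
    k /= k.sum()
    conv = lambda a, axis: np.apply_along_axis(
        lambda m: np.convolve(m, k, mode="same"), axis, a)
    num = conv(conv(img, 0), 1)
    den = conv(conv(np.ones_like(img), 0), 1)  # compensates zero padding at edges
    return num / den

def multi_scale_retinex(v, sigmas=(2, 8, 32), eps=1e-6):
    """Average of log(I) - log(G_sigma * I) over several scales."""
    v = v.astype(float) + eps
    return np.mean([np.log(v) - np.log(gaussian_blur(v, s) + eps)
                    for s in sigmas], axis=0)
```

On a perfectly flat image the log-ratio is zero at every scale, which is a quick sanity check that the blur normalization is right.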
20
Han R, Tang C, Xu M, Lei Z. A Retinex-based variational model for noise suppression and nonuniform illumination correction in corneal confocal microscopy images. Phys Med Biol 2023; 68. [PMID: 36577141] [DOI: 10.1088/1361-6560/acaeef] [Received: 09/12/2022] [Accepted: 12/28/2022] [Indexed: 12/29/2022]
Abstract
Objective. Corneal confocal microscopy (CCM) is a non-invasive in vivo clinical imaging technique whose image analysis can quantify corneal nerve fiber damage. However, the acquired CCM images are often accompanied by speckle noise and nonuniform illumination, which seriously affects disease analysis and diagnosis. Approach. In this paper, we first propose a variational Retinex model for inhomogeneity correction and noise removal in CCM images. In this model, the Beppo Levi space is introduced for the first time to constrain the smoothness of the illumination layer, and a fractional-order differential is adopted as the regularization term constraining the reflectance layer. A denoising regularization term is then constructed with Block-Matching 3D (BM3D) to suppress noise. Finally, by adjusting the uneven illumination layer, we obtain the final result. Second, an image quality evaluation metric is proposed to objectively evaluate the illumination uniformity of images. Main results. To demonstrate its effectiveness, the proposed method is tested on 628 low-quality CCM images from the CORN-2 dataset. Extensive experiments show that it outperforms four related methods in terms of noise removal and uneven-illumination suppression. Significance. This suggests that the proposed method may be helpful for the diagnosis and analysis of eye diseases.
Affiliation(s)
- Rui Han
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Chen Tang
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Min Xu
- School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Zhenkun Lei
- State Key Laboratory of Structural Analysis for Industrial Equipment, Dalian University of Technology, Dalian 116024, People's Republic of China
21
Han R, Tang C, Xu M, Liang B, Wu T, Lei Z. Enhancement method with naturalness preservation and artifact suppression based on an improved Retinex variational model for color retinal images. J Opt Soc Am A 2023; 40:155-164. [PMID: 36607085] [DOI: 10.1364/josaa.474020] [Received: 08/25/2022] [Accepted: 11/29/2022] [Indexed: 06/17/2023]
Abstract
Retinal images are widely used for the diagnosis of various diseases. However, low-quality retinal images with uneven illumination, low contrast, or blurring may seriously interfere with diagnosis by ophthalmologists. This study proposes an enhancement method for low-quality color retinal images. An improved variational Retinex model for color retinal images is first proposed and applied to each channel of the RGB color space to obtain the illuminance and reflectance layers. Subsequently, the Naka-Rushton equation is introduced to correct the illuminance layer, and an enhancement operator is constructed to improve the clarity of the reflectance layer. Finally, the corrected illuminance and enhanced reflectance are recombined, and contrast-limited adaptive histogram equalization is introduced to further improve clarity and contrast. To demonstrate its effectiveness, the method is tested on 527 images from four publicly available datasets and 40 local clinical images from Tianjin Eye Hospital (China). Experimental results show that the proposed method outperforms four other enhancement methods and has clear advantages in naturalness preservation and artifact suppression.
22
Lang YZ, Qian YS, Kong XY, Zhang JZ, Wang YL, Cao Y. Effective enhancement method of low-light-level images based on the guided filter and multi-scale fusion. J Opt Soc Am A 2023; 40:1-9. [PMID: 36607069] [DOI: 10.1364/josaa.468876] [Received: 06/27/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
To address the dim overall brightness, uneven gray-level distribution, and low contrast of low-light-level (LLL) images, we propose an effective LLL image enhancement method based on the guided filter and multi-scale fusion for contrast enhancement and detail preservation. First, a base image and detail image(s) are obtained using the guided filter. The base image is then processed with a maximum entropy-based gamma correction to stretch the gray-level distribution. Unlike existing methods, we enhance the detail image(s) based on the guided filter kernel, which reflects local image-area information. Finally, a new method is proposed to generate a sequence of artificial images to adjust the output brightness, which preserves image detail better than other single-input algorithms. Experiments show that the proposed method enhances contrast, preserves details, and maintains the natural appearance of the image more effectively than the state of the art.
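"Maximum entropy-based gamma correction" can be read as searching a grid of gamma values for the one whose output histogram has the highest Shannon entropy. A sketch under that reading; the search grid is an assumption, not the paper's:

```python
import numpy as np

def hist_entropy(img, bins=256):
    """Shannon entropy (bits) of the intensity histogram; img in [0, 1]."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def max_entropy_gamma(base, gammas=np.linspace(0.3, 1.0, 15)):
    """Apply base ** g for each candidate gamma and keep the one whose
    output histogram entropy is maximal. Returns (image, chosen gamma)."""
    best = max(gammas, key=lambda g: hist_entropy(base ** g))
    return base ** best, float(best)
```

Because the grid includes gamma = 1 (the identity), the selected output's histogram entropy can never fall below that of the input.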
23
Li C, Guo C, Han L, Jiang J, Cheng MM, Gu J, Loy CC. Low-Light Image and Video Enhancement Using Deep Learning: A Survey. IEEE Trans Pattern Anal Mach Intell 2022; 44:9396-9416. [PMID: 34752382] [DOI: 10.1109/tpami.2021.3126387] [Indexed: 06/13/2023]
Abstract
Low-light image enhancement (LLIE) aims at improving the perception or interpretability of an image captured in an environment with poor illumination. Recent advances in this area are dominated by deep learning-based solutions, where many learning strategies, network structures, loss functions, training data, etc. have been employed. In this paper, we provide a comprehensive survey to cover various aspects ranging from algorithm taxonomy to unsolved open issues. To examine the generalization of existing methods, we propose a low-light image and video dataset, in which the images and videos are taken by different mobile phones' cameras under diverse illumination conditions. Besides, for the first time, we provide a unified online platform that covers many popular LLIE methods, of which the results can be produced through a user-friendly web interface. In addition to qualitative and quantitative evaluation of existing methods on publicly available and our proposed datasets, we also validate their performance in face detection in the dark. This survey together with the proposed dataset and online platform could serve as a reference source for future study and promote the development of this research field. The proposed platform and dataset as well as the collected methods, datasets, and evaluation metrics are publicly available and will be regularly updated. Project page: https://www.mmlab-ntu.com/project/lliv_survey/index.html.
24
Lu H, Gong J, Liu Z, Lan R, Pan X. FDMLNet: A Frequency-Division and Multiscale Learning Network for Enhancing Low-Light Image. Sensors (Basel) 2022; 22:8244. [PMID: 36365942] [PMCID: PMC9657473] [DOI: 10.3390/s22218244] [Received: 09/18/2022] [Revised: 10/20/2022] [Accepted: 10/24/2022] [Indexed: 06/16/2023]
Abstract
Low-illumination images exhibit low brightness, blurred details, and color casts, which produce an unnatural visual experience and degrade other visual applications. Data-driven approaches show tremendous potential for raising image brightness while preserving visual naturalness, but existing methods can introduce artifacts, noise amplification, over- or under-enhancement, and color deviation. To mitigate these challenging issues, this paper presents a frequency-division and multiscale learning network named FDMLNet, comprising two subnets, DetNet and StruNet. The design first applies the guided filter to separate the high and low frequencies of authentic images; DetNet and StruNet are then developed to process them, respectively, fully exploiting the information at the different frequencies. In StruNet, a feasible feature extraction module (FFEM), composed of a multiscale learning block (MSL) and a dual-branch channel attention mechanism (DCAM), is injected to promote multiscale representation ability. In addition, three FFEMs are connected in a new dense connectivity to exploit multilevel features. Extensive quantitative and qualitative experiments on public benchmarks demonstrate that FDMLNet outperforms state-of-the-art approaches, benefiting from its stronger multiscale feature expression and extraction ability.
25
Inoue K, Ono N, Hara K. Local Contrast-Based Pixel Ordering for Exact Histogram Specification. J Imaging 2022; 8:247. [PMID: 36135412] [PMCID: PMC9501984] [DOI: 10.3390/jimaging8090247] [Received: 06/29/2022] [Revised: 08/29/2022] [Accepted: 09/08/2022] [Indexed: 11/24/2022]
Abstract
Histogram equalization is one of the basic image processing operations for contrast enhancement, and its generalization is histogram specification, which accepts target histograms of arbitrary shape, including the uniform distribution used for histogram equalization. It is well known that strictly ordered pixels in an image can be assigned to any target histogram to achieve exact histogram specification. This paper proposes a method for ordering the pixels in an image on the basis of the local contrast of each pixel, where a Gaussian filter without approximation is used to avoid the duplicate pixel values that disturb strict pixel ordering. The main idea of the proposed method is that the pixel-ordering problem is divided into small subproblems that can be solved separately, after which the results are merged into one sequence of fully ordered pixels. Moreover, the proposed method is extended from grayscale to color images in a consistent manner. Experimental results show that a state-of-the-art histogram specification method occasionally produces false patterns, which are alleviated by the proposed method; those results demonstrate its effectiveness for exact histogram specification.
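The core trick described above — strictly order the pixels, then pour the sorted target values back in — can be sketched as follows, with a simple 3x3 local mean standing in for the paper's Gaussian local-contrast tie-breaking key:

```python
import numpy as np

def exact_histogram_specification(img, target_hist):
    """Assign each pixel a new value so that the output histogram matches
    target_hist exactly. Ties in pixel value are broken by a 3x3 local
    mean -- a simplified stand-in for the paper's local-contrast ordering."""
    h, w = img.shape
    n = h * w
    # Secondary sort key: 3x3 local mean with edge padding.
    p = np.pad(img.astype(float), 1, mode="edge")
    local = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # lexsort: last key is primary, so pixels sort by value, then local mean.
    order = np.lexsort((local.ravel(), img.ravel()))
    # Expand the target histogram into the sorted list of n output values.
    target = np.repeat(np.arange(len(target_hist)), target_hist)
    assert target.size == n, "target histogram must sum to the pixel count"
    out = np.empty(n, dtype=target.dtype)
    out[order] = target                        # k-th ranked pixel gets k-th value
    return out.reshape(h, w)
```

Since each rank receives exactly one target value, the output histogram matches the target bin-for-bin; the quality of the tie-breaking key only affects which of two equal-valued pixels ends up brighter.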
26
Mai TTN, Lam EY, Lee C. Deep Unrolled Low-Rank Tensor Completion for High Dynamic Range Imaging. IEEE Trans Image Process 2022; 31:5774-5787. [PMID: 36048976] [DOI: 10.1109/tip.2022.3201708] [Indexed: 06/15/2023]
Abstract
The major challenge in high dynamic range (HDR) imaging of dynamic scenes is suppressing ghosting artifacts caused by large object motion or poor exposures. Whereas recent deep learning-based approaches show significant synthesis performance, interpreting and analyzing their behavior is difficult and their performance depends on the diversity of the training data; conversely, traditional model-based approaches yield inferior synthesis performance despite their theoretical thoroughness. In this paper, we propose an algorithm-unrolling approach to ghost-free HDR image synthesis that unrolls an iterative low-rank tensor completion algorithm into deep neural networks, combining the merits of learning- and model-based approaches while overcoming their weaknesses. First, we formulate ghost-free HDR synthesis as a low-rank tensor completion problem by assuming a low-rank structure for the tensor constructed from the low dynamic range (LDR) images and linear dependency among the LDR images. We also define two regularization functions to compensate for modeling inaccuracy by extracting hidden model information. Then, we solve the problem efficiently with an iterative optimization algorithm by reformulating it into a series of subproblems. Finally, we unroll the iterative algorithm into a series of blocks corresponding to each iteration, in which the optimization variables are updated by rigorous closed-form solutions and the regularizers by learned deep neural networks. Experimental results on different datasets show that the proposed algorithm achieves better HDR synthesis with superior robustness compared with state-of-the-art algorithms, while using significantly fewer training samples.
27
Low-light image enhancement with geometrical sparse representation. Appl Intell 2022. [DOI: 10.1007/s10489-022-04013-1] [Indexed: 11/02/2022]
28
Zhuang P, Wu J, Porikli F, Li C. Underwater Image Enhancement With Hyper-Laplacian Reflectance Priors. IEEE Trans Image Process 2022; 31:5442-5455. [PMID: 35947571] [DOI: 10.1109/tip.2022.3196546] [Indexed: 06/15/2023]
Abstract
Underwater image enhancement aims at improving the visibility and eliminating the color distortions of underwater images degraded by light absorption and scattering in water. Retinex variational models have recently shown a remarkable capacity for enhancing images by estimating reflectance and illumination through a Retinex decomposition. However, ambiguous details and unnatural color still challenge their performance on underwater images. To overcome these limitations, we propose a Retinex variational model inspired by hyper-Laplacian reflectance priors to enhance underwater images. Specifically, the hyper-Laplacian reflectance priors are established with an l1/2-norm penalty on the first- and second-order gradients of the reflectance. Such priors promote sparse yet comprehensive reflectance that enhances both salient structures and fine-scale details and recovers the naturalness of authentic colors, while the l2 norm is found suitable for accurately estimating the illumination. As a result, we turn a complex underwater image enhancement problem into simple subproblems that separately and simultaneously estimate the reflectance and the illumination. We mathematically analyze and solve the optimal solution of each subproblem, and in the optimization course we develop an alternating minimization algorithm that relies on efficient element-wise operations and requires no additional prior knowledge of underwater conditions. Extensive experiments demonstrate the superiority of the proposed method in both subjective results and objective assessments over existing methods. The code is available at: https://github.com/zhuangpeixian/HLRP.
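Read as an energy, the abstract's description corresponds to a variational objective of roughly this shape (the symbols and weights α, β, γ are mine for illustration; the paper's exact formulation may differ):

```latex
\min_{R,\,L}\;
\underbrace{\lVert R \circ L - I \rVert_2^2}_{\text{Retinex fidelity}}
\;+\; \alpha \,\lVert \nabla R \rVert_{1/2}^{1/2}
\;+\; \beta  \,\lVert \nabla^2 R \rVert_{1/2}^{1/2}
\;+\; \gamma \,\lVert \nabla L \rVert_2^2
```

Here $I$ is the observed image and $R \circ L$ its Retinex decomposition into reflectance and illumination; the $\ell_{1/2}$ penalties encode the hyper-Laplacian priors on first- and second-order reflectance gradients, and the $\ell_2$ gradient term keeps the illumination smooth, matching the abstract's statement that the $\ell_2$ norm suits illumination estimation.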
|
29
|
Li X, Shang J, Song W, Chen J, Zhang G, Pan J. Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model. SENSORS (BASEL, SWITZERLAND) 2022; 22:6126. [PMID: 36015886 PMCID: PMC9412568 DOI: 10.3390/s22166126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2022] [Revised: 08/11/2022] [Accepted: 08/12/2022] [Indexed: 06/15/2023]
Abstract
Images captured in a low-light environment are strongly influenced by noise and low contrast, which is detrimental to tasks such as image recognition and object detection. Retinex-based approaches have been continuously explored for low-light enhancement. Nevertheless, Retinex decomposition is a highly ill-posed problem, and the estimation of the decomposed components must be combined with proper constraints. Meanwhile, the noise mixed into the low-light image causes unpleasant visual effects. To address these problems, we propose a Constraint Low-Rank Approximation Retinex model (CLAR). In this model, two exponential relative total variation constraints are imposed to ensure that the illumination is piece-wise smooth and the reflectance component is piece-wise continuous. In addition, a low-rank prior is introduced to suppress the noise in the reflectance component. With a tailored separated alternating direction method of multipliers (ADMM) algorithm, the illumination and reflectance components are updated accurately. Experimental results on several public datasets verify the effectiveness of the proposed model both subjectively and objectively.
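The Retinex decomposition this model builds on can be sketched minimally as follows; Gaussian smoothing stands in for the exponential relative total variation constraints, the low-rank denoising step is omitted, and the test image and parameters are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def retinex_decompose(img, sigma=5.0, eps=1e-6):
    """Split a grayscale image into illumination and reflectance.

    Deliberately simple stand-in for CLAR's constrained optimization:
    the piece-wise smooth illumination is approximated by Gaussian
    smoothing rather than the ADMM-solved variational model.
    """
    illum = np.maximum(gaussian_filter(img, sigma), eps)  # smooth illumination
    refl = img / illum                                    # reflectance residual
    return illum, refl

# Synthetic dark gradient image standing in for a low-light capture.
img = np.tile(np.linspace(0.05, 0.3, 64), (64, 1))
illum, refl = retinex_decompose(img)
# Brighten by gamma-correcting the illumination and recombining.
enhanced = np.clip(refl * np.power(illum, 1 / 2.2), 0, 1)
```

Any Retinex-based pipeline in this listing follows the same recombination pattern; the methods differ in how the two components are estimated and denoised.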
|
30
|
Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection. Symmetry (Basel) 2022. [DOI: 10.3390/sym14071473] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address the above issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Last, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, which are used to compare the defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.
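The multiscale Gaussian-difference background prior described above can be sketched as a reweighting map; the scales, normalization, and test images here are illustrative assumptions, and the semi-coupled Retinex model and deep denoised prior are not reproduced:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_background_weight(img, sigmas=(1.0, 2.0, 4.0)):
    """Multiscale Gaussian-difference response, normalized to [0, 1]:
    large where fine structure (candidate defects) stands out from the
    smooth background, near zero on flat background regions. Such a map
    can reweight a data consistency term so defects are not smoothed away."""
    resp = sum(np.abs(gaussian_filter(img, s) - gaussian_filter(img, 2.0 * s))
               for s in sigmas)
    return resp / (resp.max() + 1e-8)

flat = np.zeros((32, 32))       # defect-free background
spot = flat.copy()
spot[16, 16] = 1.0              # a single bright "defect"
w_spot = dog_background_weight(spot)
```

On the flat image the weight is identically zero, so the data term there is unconstrained; near the spot the weight peaks, protecting the defect during enhancement.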
|
31
|
Ahn S, Shin J, Lim H, Lee J, Paik J. CODEN: combined optimization-based decomposition and learning-based enhancement network for Retinex-based brightness and contrast enhancement. OPTICS EXPRESS 2022; 30:23608-23621. [PMID: 36225037 DOI: 10.1364/oe.459063] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Accepted: 06/04/2022] [Indexed: 06/16/2023]
Abstract
In this paper, we present a novel low-light image enhancement method that combines optimization-based decomposition with an enhancement network to simultaneously improve brightness and contrast. The proposed method works in two steps, Retinex decomposition and illumination enhancement, and can be trained in an end-to-end manner. The first step separates the low-light image into illumination and reflectance components based on the Retinex model. Specifically, it performs model-based optimization followed by learning for edge-preserved illumination smoothing and detail-preserved reflectance denoising. In the second step, the illumination output from the first step, together with its gamma-corrected and histogram-equalized versions, serves as input to the illumination enhancement network (IEN), which includes residual squeeze-and-excitation blocks (RSEBs). Extensive experiments show that our method outperforms state-of-the-art low-light enhancement methods in terms of both objective and subjective measures.
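The three illumination inputs fed to the IEN in the second step (original, gamma-corrected, histogram-equalized) can be produced as in this sketch; the network itself is not reproduced, and the gamma value, bin count, and test map are illustrative assumptions:

```python
import numpy as np

def gamma_correct(illum, gamma=2.2):
    """Brighten a [0, 1] illumination map with a 1/gamma power curve."""
    return np.power(np.clip(illum, 0.0, 1.0), 1.0 / gamma)

def hist_equalize(illum, bins=256):
    """Map intensities through the normalized cumulative histogram."""
    hist, edges = np.histogram(illum, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]
    return np.interp(illum.ravel(), edges[:-1], cdf).reshape(illum.shape)

illum = np.tile(np.linspace(0.02, 0.4, 32), (32, 1))  # dark illumination map
# Stack the three views as the multi-channel network input.
inputs = np.stack([illum, gamma_correct(illum), hist_equalize(illum)])
```

Gamma correction lifts overall brightness while histogram equalization stretches contrast; providing both lets the network pick the useful parts of each mapping.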
|
32
|
Lu Y, Jung SW. Progressive Joint Low-Light Enhancement and Noise Removal for Raw Images. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2022; 31:2390-2404. [PMID: 35259104 DOI: 10.1109/tip.2022.3155948] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture, resulting in low image quality. Most of the previous works on low-light imaging focus either only on a single task such as illumination adjustment, color enhancement, or noise removal; or on a joint illumination adjustment and denoising task that heavily relies on short-long exposure image pairs from specific camera models. These approaches are less practical and generalizable in real-world settings where camera-specific joint enhancement and restoration is required. In this paper, we propose a low-light imaging framework that performs joint illumination adjustment, color enhancement, and denoising to tackle this problem. Considering the difficulty in model-specific data collection and the ultra-high definition of the captured images, we design two branches: a coefficient estimation branch and a joint operation branch. The coefficient estimation branch works in a low-resolution space and predicts the coefficients for enhancement via bilateral learning, whereas the joint operation branch works in a full-resolution space and progressively performs joint enhancement and denoising. In contrast to existing methods, our framework does not need to recollect massive data when adapted to another camera model, which significantly reduces the efforts required to fine-tune our approach for practical usage. Through extensive experiments, we demonstrate its great potential in real-world low-light imaging applications.
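The two-branch layout described above can be caricatured as follows; a fixed gamma curve stands in for the learned coefficient estimation, nearest-neighbor upsampling stands in for bilateral-grid slicing, and all names and parameters are illustrative assumptions:

```python
import numpy as np

def lowres_coeff_enhance(img, factor=4, gamma=0.5):
    """Two-branch idea in miniature: estimate per-pixel gains on a
    downsampled copy (coefficient estimation branch), upsample them, and
    apply at full resolution (joint operation branch). The paper predicts
    the coefficients with a network via bilateral learning; a fixed gamma
    curve stands in for it here."""
    small = img[::factor, ::factor]                          # low-resolution input
    gain = np.power(np.clip(small, 1e-3, 1.0), gamma - 1.0)  # gain mapping I -> ~I^gamma
    gain_full = np.repeat(np.repeat(gain, factor, axis=0), factor, axis=1)
    return np.clip(img * gain_full, 0.0, 1.0)

frame = np.full((32, 32), 0.09)  # uniformly dark frame
out = lowres_coeff_enhance(frame)
```

Because the expensive estimation runs only on the low-resolution copy, the full-resolution branch stays cheap even for ultra-high-definition captures, which is the design motivation the abstract describes.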
|
33
|
N2PN: Non-reference two-pathway network for low-light image enhancement. APPL INTELL 2022. [DOI: 10.1007/s10489-021-02627-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
|
34
|
Han R, Tang C, Xu M, Li J, Lei Z. Joint enhancement and denoising in electronic speckle pattern interferometry fringe patterns with low contrast or uneven illumination via an oriented variational Retinex model. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2022; 39:239-249. [PMID: 35200960 DOI: 10.1364/josaa.433747] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/10/2021] [Accepted: 12/21/2021] [Indexed: 06/14/2023]
Abstract
Simultaneous speckle reduction and contrast enhancement for electronic speckle pattern interferometry (ESPI) fringe patterns is a challenging task. In this paper, we propose a joint enhancement and denoising method based on an oriented variational Retinex model for ESPI fringe patterns with low contrast or uneven illumination. In our model, we use a structure prior to constrain the illumination, introduce a fractional-order differential to constrain the reflectance for enhancement, and then use the second-order partial derivative of the reflectance as the denoising term to reduce noise. The proposed model is solved sequentially to obtain a piece-wise smoothed illumination and a noise-suppressed reflectance in turn, which avoids residual noise in the illumination and reflectance maps. After obtaining the refined illumination and reflectance, we substitute the gamma-corrected illumination into the camera response function to further adjust the reflectance as the final enhancement result. We test the proposed method on two computer-simulated ESPI fringe patterns with non-uniform illumination and two experimentally obtained patterns with low contrast. Finally, we compare our method with three other joint enhancement and denoising variational Retinex methods.
|
35
|
Guo L, Jia Z, Yang J, Kasabov NK. Detail Preserving Low Illumination Image and Video Enhancement Algorithm Based on Dark Channel Prior. SENSORS (BASEL, SWITZERLAND) 2021; 22:85. [PMID: 35009629 PMCID: PMC8747644 DOI: 10.3390/s22010085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 12/20/2021] [Accepted: 12/20/2021] [Indexed: 06/14/2023]
Abstract
In low-illumination situations, insufficient light reaching the monitoring device results in poor visibility of effective information, which cannot meet the needs of practical applications. To overcome these problems, a detail-preserving low-illumination video image enhancement algorithm based on the dark channel prior is proposed in this paper. First, a dark channel refinement method is proposed, defined by imposing a structure prior on the initial dark channel to improve the image brightness. Second, an anisotropic guided filter (AnisGF) is used to refine the transmission, which preserves the edges of the image. Finally, a detail enhancement algorithm is proposed to avoid insufficient detail in the initially enhanced image. To avoid video flicker, subsequent video frames are enhanced based on the brightness of the first enhanced frame. Qualitative and quantitative analysis shows that the proposed algorithm is superior to the compared algorithms, ranking first in average gradient, edge intensity, contrast, and the patch-based contrast quality index. It can be effectively applied to the enhancement of surveillance video images and to wider computer vision applications.
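The initial dark channel that the paper's refinement step starts from is the classic computation from He et al.'s prior; a minimal sketch follows, with the patch size and test scene as illustrative assumptions (the structure-prior refinement and AnisGF steps are not reproduced):

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Classic dark channel: per-pixel minimum over the color channels,
    followed by a patch-wise minimum filter. Haze-free scenes tend to
    have a dark channel close to zero, which is what makes it a prior."""
    return minimum_filter(img.min(axis=2), size=patch, mode='nearest')

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 1.0, size=(64, 64, 3))  # colorful haze-free scene
dc = dark_channel(scene)
```

The minimum filter makes the map locally constant over each patch, which is why a refinement step (such as the structure prior and guided filtering above) is needed before it can drive per-pixel enhancement.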
Affiliation(s)
- Lingli Guo: College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
- Zhenhong Jia: College of Information Science and Engineering, Xinjiang University, Urumqi 830046, China
- Jie Yang: Institute of Image Processing and Pattern Recognition, Shanghai Jiao Tong University, Shanghai 200400, China
- Nikola K. Kasabov: Knowledge Engineering and Discovery Research Institute, Auckland University of Technology, Auckland 1020, New Zealand; Intelligent Systems Research Center, Ulster University Magee Campus, Derry BT48 7JL, UK
|
36
|
Enhancement and denoising method for low-quality MRI, CT images via the sequence decomposition Retinex model, and haze removal algorithm. Med Biol Eng Comput 2021; 59:2433-2448. [PMID: 34661856 DOI: 10.1007/s11517-021-02451-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2021] [Accepted: 10/01/2021] [Indexed: 10/20/2022]
Abstract
The visibility and analyzability of MRI and CT images have a great impact on the diagnosis of medical diseases. Therefore, for low-quality MRI and CT images, it is necessary to effectively improve the contrast while suppressing the noise. In this paper, we propose an enhancement and denoising strategy for low-quality medical images based on the sequence decomposition Retinex model and an inverse haze removal approach. To be specific, we first estimate the smoothed illumination and de-noised reflectance in a successive sequence. Then, we invert the estimated illumination over the 0-255 intensity range and introduce a haze removal approach based on the dark channel prior to adjust the inverted illumination. Finally, the enhanced image is generated by combining the adjusted illumination and the de-noised reflectance. As a result, improved visibility is obtained from the processed images while inefficient or excessive enhancement is avoided. To verify the reliability of the proposed method, we perform qualitative and quantitative evaluations on five MRI datasets and one CT dataset. Experimental results demonstrate that the proposed method strikes an excellent balance between enhancement and denoising, providing performance superior to that of several state-of-the-art methods.
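The inversion trick above rests on the observation that an inverted dark image resembles a hazy one. A minimal sketch, applied to a whole RGB image for brevity rather than to the illumination map alone, with intensities in [0, 1] instead of 0-255 and all parameter values as illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def invert_dehaze_enhance(img, omega=0.8, t0=0.1, patch=7):
    """Invert a low-light image, dehaze it with the dark channel prior,
    and invert back. Simplified single-channel atmospheric light and a
    coarse (unrefined) transmission map keep the sketch short."""
    inv = 1.0 - img                                          # looks hazy
    dark = minimum_filter(inv.min(axis=2), size=patch, mode='nearest')
    A = inv.reshape(-1, 3)[dark.argmax()]                    # atmospheric light estimate
    t = np.maximum(1.0 - omega * dark / A.max(), t0)         # transmission map
    dehazed = (inv - A) / t[..., None] + A                   # scene radiance recovery
    return np.clip(1.0 - dehazed, 0.0, 1.0)                  # invert back

gradient = np.tile(np.linspace(0.05, 0.2, 16), (16, 1))
img = np.stack([gradient] * 3, axis=2)                       # dark RGB test image
out = invert_dehaze_enhance(img)
```

Dividing by the small transmission values amplifies the difference between each inverted pixel and the atmospheric light, which, after inverting back, brightens the originally dark regions most.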
|
37
|
Lecca M, Rizzi A, Serapioni RP. An Image Contrast Measure Based on Retinex Principles. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3543-3554. [PMID: 33667163 DOI: 10.1109/tip.2021.3062724] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
The image contrast is a feature capturing the variation of the image signal across the space. Such a feature is very useful to describe the local image structure at different scales and thus it is relevant to many computer vision applications, like image/texture retrieval and object recognition. In this work, we present MiRCo, a novel measure of image contrast derived from the Retinex theory. MiRCo is robust against in-plane rotations and light changes at multiple scales. Thanks to these properties, MiRCo enables an accurate and robust description of the local image structure. Here we describe and discuss the mathematical insights of MiRCo also in comparison with other popular contrast measures.
|
38
|
Veluchamy M, Bhandari AK, Subramani B. Optimized Bezier Curve Based Intensity Mapping Scheme for Low Light Image Enhancement. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2021. [DOI: 10.1109/tetci.2021.3053253] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
|
39
|
Huang Z, Tang C, Xu M, Shen Y, Lei Z. Both speckle reduction and contrast enhancement for optical coherence tomography via sequential optimization in the logarithmic domain based on a refined Retinex model. APPLIED OPTICS 2020; 59:11087-11097. [PMID: 33361937 DOI: 10.1364/ao.405981] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/19/2020] [Accepted: 11/19/2020] [Indexed: 06/12/2023]
Abstract
Optical coherence tomography (OCT) image enhancement is a challenging task because speckle reduction and contrast enhancement need to be addressed simultaneously and effectively. We present a refined Retinex model, together with a physical explanation, to guide the enhancement of OCT images corrupted by speckle noise. Based on this model, we establish two sequential optimization functions in the logarithmic domain, for speckle reduction and contrast enhancement respectively. More specifically, we obtain the despeckled image of an entire OCT image by solving the first optimization function; as a byproduct, the speckle noise map can be recovered by directly removing the despeckled component. Then, we estimate the illumination and reflectance by solving the second optimization function. Further, we apply the contrast-limited adaptive histogram equalization algorithm to adjust the illumination and project it back onto the reflectance to achieve contrast enhancement. Experimental results demonstrate the robustness and effectiveness of the proposed method: it performs well in both speckle reduction and contrast enhancement and is superior to the other two methods in terms of both qualitative analysis and quantitative assessment. Our method has the practical potential to improve the accuracy of manual screening and computer-aided diagnosis for retinal diseases.
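The choice of the logarithmic domain follows from the standard multiplicative model of speckle: taking logs turns the multiplicative degradation into an additive one that variational data terms handle directly. A tiny numerical check, with the signal level and noise distribution as illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(256, 0.5)                                   # clean intensity
speckle = rng.gamma(shape=16.0, scale=1.0 / 16.0, size=256)  # mean ~1 multiplicative noise
noisy = signal * speckle                                     # observed image: I = S * N

# In the log domain the model becomes additive: log I = log S + log N.
log_noisy = np.log(noisy)
assert np.allclose(log_noisy, np.log(signal) + np.log(speckle))
```

This is why the paper's two optimization functions can separate a despeckled component and a noise map by simple subtraction in the log domain.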
|
40
|
Huang Z, Tang C, Xu M, Lei Z. Joint Retinex-based variational model and CLAHE-in-CIELUV for enhancement of low-quality color retinal images. APPLIED OPTICS 2020; 59:8628-8637. [PMID: 33104544 DOI: 10.1364/ao.401792] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/01/2020] [Accepted: 08/27/2020] [Indexed: 06/11/2023]
Abstract
Poor visual quality of color retinal images greatly interferes with the ophthalmologist's analysis and diagnosis. In this paper, we propose an enhancement method for low-quality color retinal images that combines a Retinex-based enhancement method with the contrast limited adaptive histogram equalization (CLAHE) algorithm. More specifically, we first estimate the illumination map of the entire image by constructing a Retinex-based variational model. Then, we restore the reflectance map by removing the gamma-corrected illumination and directly take the reflectance as the initial enhancement. To further enhance the clarity and contrast of blood vessels while avoiding color distortion, we apply CLAHE to the luminance channel in the CIELUV color space. We collected 60 low-quality color retinal images as our test dataset to verify the reliability of the proposed method. Experimental results show that the proposed method is superior to three other related methods on our dataset, in terms of both visual analysis and quantitative evaluation. Additionally, we apply the proposed method to four publicly available datasets, and the results show that our method may be helpful for the detection and analysis of retinopathy.
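The core of the CLAHE step applied here to the luminance channel can be sketched on a single tile; full CLAHE additionally splits the image into tiles and blends the per-tile mappings bilinearly, and the clip limit, bin count, and test image below are illustrative assumptions:

```python
import numpy as np

def clahe_tile(channel, clip=0.01, bins=256):
    """Contrast-limited equalization of one tile: clip the normalized
    histogram at `clip`, redistribute the excess mass uniformly across
    all bins, then map intensities through the resulting CDF. The clip
    limit is what prevents the noise over-amplification of plain
    histogram equalization."""
    hist, edges = np.histogram(channel, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(np.float64) / channel.size
    excess = np.clip(hist - clip, 0.0, None).sum()
    hist = np.minimum(hist, clip) + excess / bins
    cdf = hist.cumsum()
    return np.interp(channel.ravel(), edges[:-1], cdf).reshape(channel.shape)

low_contrast = np.tile(np.linspace(0.40, 0.50, 64), (64, 1))  # flat luminance tile
stretched = clahe_tile(low_contrast)
```

The output range widens relative to the input but stops well short of [0, 1], illustrating the "contrast-limited" behavior that keeps vessel enhancement from distorting color once the luminance is merged back.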
|