51. Bi X, Wang P, Wu T, Zha F, Xu P. Non-uniform illumination underwater image enhancement via events and frame fusion. Applied Optics 2022; 61:8826-8832. PMID: 36256018. DOI: 10.1364/ao.463099.
Abstract
Absorption and scattering by aqueous media attenuate light and make underwater optical imaging difficult. Artificial light sources are usually used to aid deep-sea imaging, but due to the limited dynamic range of standard cameras, they often leave underwater images underexposed or overexposed. By contrast, event cameras have a high dynamic range and high temporal resolution but cannot provide frames with rich color characteristics. In this paper, we exploit the complementarity of the two types of cameras and propose an efficient yet simple method for enhancing unevenly illuminated underwater images, generating enhanced images with better scene details and colors similar to standard frames. Additionally, we create a dataset recorded by the Dynamic and Active-pixel Vision Sensor that includes both event streams and frames, enabling testing of the proposed method and of frame-based image enhancement methods. Qualitative and quantitative experiments on our dataset demonstrate that the proposed method outperforms the compared enhancement algorithms.
52. Ma L, Liu R, Zhang J, Fan X, Luo Z. Learning Deep Context-Sensitive Decomposition for Low-Light Image Enhancement. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:5666-5680. PMID: 33929967. DOI: 10.1109/tnnls.2021.3071245.
Abstract
Enhancing the quality of low-light images plays a very important role in many image processing and multimedia applications. In recent years, a variety of deep learning techniques have been developed to address this challenging task. A typical framework simultaneously estimates the illumination and reflectance, but such methods disregard the scene-level contextual information encapsulated in feature spaces, causing unfavorable outcomes such as loss of detail, desaturated color, and artifacts. To address these issues, we develop a new context-sensitive decomposition network (CSDNet) architecture to exploit scene-level contextual dependencies across spatial scales. More concretely, we build a two-stream estimation mechanism consisting of reflectance and illumination estimation networks, and design a novel context-sensitive decomposition connection that bridges the two streams by incorporating the physical principle. Spatially varying illumination guidance is further constructed to achieve edge-aware smoothness of the illumination component. According to different training patterns, we construct CSDNet (paired supervision) and a context-sensitive decomposition generative adversarial network, CSDGAN (unpaired supervision), to fully evaluate the designed architecture. We test our method on seven benchmarks, including MIT-Adobe FiveK, LOL, ExDark, and naturalness preserved enhancement (NPE), in extensive analytical and evaluative experiments. Thanks to the context-sensitive decomposition connection, our method achieves excellent enhancement results, with sufficient detail, vivid colors, and little noise, indicating its superiority over existing state-of-the-art approaches. Finally, considering the practical need for high efficiency, we develop a lightweight CSDNet (named LiteCSDNet) by reducing the number of channels. By additionally sharing an encoder between the two components, we obtain an even lighter version (SLiteCSDNet for short). SLiteCSDNet contains only 0.0301M parameters yet achieves almost the same performance as CSDNet. Code is available at https://github.com/KarelZhang/CSDNet-CSDGAN.
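For context, the physical principle the decomposition connection incorporates is the classical Retinex image formation model, in which an observed image S is the element-wise product of reflectance R and illumination L:

```latex
S(x) = R(x) \circ L(x)
```

The two estimation streams each recover one factor of this product; the connection design specific to CSDNet is described in the paper itself.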
53. Ye J, Chen X, Qiu C, Zhang Z. Low-Light Image Enhancement Using Photometric Alignment with Hierarchy Pyramid Network. Sensors (Basel, Switzerland) 2022; 22:6799. PMID: 36146148. PMCID: PMC9505311. DOI: 10.3390/s22186799.
Abstract
Low-light image enhancement can effectively assist high-level vision tasks that often fail under poor illumination. Most previous data-driven methods, however, enhance severely degraded low-light images directly, which may yield undesirable results, including blurred detail, intensive noise, and distorted color. In this paper, inspired by a coarse-to-fine strategy, we propose an end-to-end pipeline that combines image-level alignment with pixel-wise perceptual information enhancement for low-light image enhancement. A coarse adaptive global photometric alignment sub-network is constructed to reduce style differences, which helps improve illumination and reveal information in under-exposed areas. Given the aligned image, a hierarchy pyramid enhancement sub-network then optimizes image quality, removing amplified noise and enhancing the local detail of low-light images. We also propose a multi-residual cascade attention block (MRCAB) that combines a channel split-and-concatenation strategy with a polarized self-attention mechanism, yielding reconstructions of high perceptual quality. Extensive experiments on various datasets demonstrate the effectiveness of our method, which significantly outperforms other state-of-the-art methods in detail and color reproduction.
54. Rasheed MT, Shi D. LSR: Lightening super-resolution deep network for low-light image enhancement. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.07.058.
55. Low-light image enhancement with geometrical sparse representation. Appl Intell 2022. DOI: 10.1007/s10489-022-04013-1.
56. Qian Y, Jiang Z, He Y, Zhang S, Jiang S. Multi-scale error feedback network for low-light image enhancement. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07612-8.
57. Zhuang P, Wu J, Porikli F, Li C. Underwater Image Enhancement With Hyper-Laplacian Reflectance Priors. IEEE Transactions on Image Processing 2022; 31:5442-5455. PMID: 35947571. DOI: 10.1109/tip.2022.3196546.
Abstract
Underwater image enhancement aims at improving the visibility and eliminating the color distortions of underwater images degraded by light absorption and scattering in water. Recently, Retinex variational models have shown remarkable capacity for enhancing images by estimating reflectance and illumination through a Retinex decomposition. However, ambiguous details and unnatural color still challenge their performance on underwater images. To overcome these limitations, we propose a Retinex variational model inspired by hyper-Laplacian reflectance priors to enhance underwater images. Specifically, the hyper-Laplacian reflectance priors are established with an l1/2-norm penalty on the first-order and second-order gradients of the reflectance. Such priors promote a sparse yet comprehensive reflectance estimate that enhances both salient structures and fine-scale details and recovers natural, authentic colors. Besides, the l2 norm is found to be suitable for accurately estimating the illumination. As a result, we decompose the complex underwater image enhancement problem into simpler subproblems that alternately estimate the reflectance and the illumination, which are then harnessed to enhance the image within the Retinex variational model. We mathematically analyze and solve the optimal solution of each subproblem. For the optimization, we develop an alternating minimization algorithm that relies on efficient element-wise operations and is independent of additional prior knowledge of underwater conditions. Extensive experiments demonstrate the superiority of the proposed method in both subjective results and objective assessments over existing methods. The code is available at: https://github.com/zhuangpeixian/HLRP.
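A schematic of the kind of variational objective the abstract describes, combining data fidelity with l1/2 penalties on first- and second-order reflectance gradients and an l2 smoothness term on the illumination; this sketch is assembled from the abstract, not the paper's exact functional or weights:

```latex
\min_{R,\,L}\; \|R \circ L - I\|_{2}^{2} \;+\; \alpha\,\|\nabla R\|_{1/2}^{1/2} \;+\; \beta\,\|\nabla^{2} R\|_{1/2}^{1/2} \;+\; \gamma\,\|\nabla L\|_{2}^{2}
```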
58. Li X, Shang J, Song W, Chen J, Zhang G, Pan J. Low-Light Image Enhancement Based on Constraint Low-Rank Approximation Retinex Model. Sensors (Basel, Switzerland) 2022; 22:6126. PMID: 36015886. PMCID: PMC9412568. DOI: 10.3390/s22166126.
Abstract
Images captured in a low-light environment are strongly influenced by noise and low contrast, which is detrimental to tasks such as image recognition and object detection. Retinex-based approaches have been continuously explored for low-light enhancement. Nevertheless, Retinex decomposition is a highly ill-posed problem, and the estimation of the decomposed components must be combined with proper constraints; meanwhile, the noise mixed into the low-light image causes unpleasant visual effects. To address these problems, we propose a Constraint Low-Rank Approximation Retinex model (CLAR). In this model, two exponential relative total variation constraints are imposed to ensure that the illumination is piece-wise smooth and that the reflectance component is piece-wise continuous. In addition, a low-rank prior is introduced to suppress noise in the reflectance component. With a tailored, separated alternating direction method of multipliers (ADMM) algorithm, the illumination and reflectance components are updated accurately. Experimental results on several public datasets verify the effectiveness of the proposed model both subjectively and objectively.
59. Lin Q, Zheng Z, Jia X. UHD low-light image enhancement via interpretable bilateral learning. Inf Sci (N Y) 2022. DOI: 10.1016/j.ins.2022.07.051.
60. Wang L, Wu B, Wang X, Zhu Q, Xu K. Endoscopic image luminance enhancement based on the inverse square law for illuminance and retinex. Int J Med Robot 2022; 18:e2396. PMID: 35318786. DOI: 10.1002/rcs.2396.
Abstract
Background: In a single-port robotic system where the 3D endoscope possesses two bending segments, only point light sources can be integrated at the tip due to space limitations. However, point light sources usually provide non-uniform illumination, causing endoscopic images to appear bright in the centre and dark near the corners.
Methods: Based on the inverse square law for illuminance, an initial luminance weighting is first proposed to increase the luminance uniformity of the image. Then, a saturation-based model is proposed to finalise the luminance weighting so as to avoid overexposure and colour discrepancy, while the single-scale retinex (SSR) scheme is employed for noise control.
Results: In qualitative and quantitative comparisons, the proposed method effectively enhances the luminance and uniformity of endoscopic images, in terms of both visual perception and objective assessment.
Conclusions: The proposed method can effectively reduce the image degradation caused by point light sources.
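The inverse square law referenced above states that the illuminance E produced by a point source of luminous intensity I falls off with the square of the distance d; an initial weighting that undoes this falloff would therefore grow with squared distance. This is a schematic reading of the abstract; the final weighting also involves the saturation-based model:

```latex
E = \frac{I}{d^{2}} \quad \Longrightarrow \quad w(x) \propto d(x)^{2}
```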
Affiliations
- Longfei Wang: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Baibo Wu: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Xiang Wang: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Qingyi Zhu: Department of Urology, The Second Affiliated Hospital of Nanjing Medical University, Nanjing, China
- Kai Xu: School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
61. Liu R, Ma L, Zhang Y, Fan X, Luo Z. Underexposed Image Correction via Hybrid Priors Navigated Deep Propagation. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:3425-3436. PMID: 33513118. DOI: 10.1109/tnnls.2021.3052903.
Abstract
Enhancing the visual quality of underexposed images is a widely studied task that plays an important role in various areas of multimedia and computer vision. Most existing methods often fail to generate high-quality results with appropriate luminance and abundant details. To address these issues, we develop a novel framework that integrates knowledge from physical principles with implicit distributions learned from data for underexposed image correction. More concretely, we propose a new perspective that formulates the task as an energy-inspired model with advanced hybrid priors. A propagation procedure navigated by the hybrid priors is designed to simultaneously propagate the reflectance and illumination toward the desired results. We conduct extensive experiments to verify the necessity of integrating both the underlying principles (i.e., knowledge) and the distributions (i.e., data) in navigated deep propagation. Extensive results on underexposed image correction demonstrate that our method performs favorably against state-of-the-art methods on both subjective and objective assessments. In addition, we apply the task of face detection to further verify the naturalness and practical value of the correction. Moreover, we extend our method to single-image haze removal, where the results further demonstrate its advantages.
62. Li C, Guo C, Loy CC. Learning to Enhance Low-Light Image via Zero-Reference Deep Curve Estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:4225-4238. PMID: 33656989. DOI: 10.1109/tpami.2021.3063604.
Abstract
This paper presents a novel method, Zero-Reference Deep Curve Estimation (Zero-DCE), which formulates light enhancement as a task of image-specific curve estimation with a deep network. Our method trains a lightweight deep network, DCE-Net, to estimate pixel-wise and high-order curves for dynamic range adjustment of a given image. The curve estimation is specially designed, considering pixel value range, monotonicity, and differentiability. Zero-DCE is appealing in its relaxed assumption on reference images, i.e., it does not require any paired or even unpaired data during training. This is achieved through a set of carefully formulated non-reference loss functions, which implicitly measure the enhancement quality and drive the learning of the network. Despite its simplicity, we show that it generalizes well to diverse lighting conditions. Our method is efficient as image enhancement can be achieved by an intuitive and simple nonlinear curve mapping. We further present an accelerated and light version of Zero-DCE, called Zero-DCE++, that takes advantage of a tiny network with just 10K parameters. Zero-DCE++ has a fast inference speed (1000/11 FPS on a single GPU/CPU for an image of size 1200×900×3) while keeping the enhancement performance of Zero-DCE. Extensive experiments on various benchmarks demonstrate the advantages of our method over state-of-the-art methods qualitatively and quantitatively. Furthermore, the potential benefits of our method to face detection in the dark are discussed. The source code is made publicly available at https://li-chongyi.github.io/Proj_Zero-DCE++.html.
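As background for the curve estimation above: the quadratic light-enhancement curve published in the Zero-DCE paper is applied iteratively, with LE_0 = I the input image and A_n the pixel-wise parameter maps (in [-1, 1]) predicted by DCE-Net:

```latex
LE_{n}(x) = LE_{n-1}(x) + A_{n}(x)\, LE_{n-1}(x)\,\big(1 - LE_{n-1}(x)\big)
```

Each iteration keeps pixel values in [0, 1] and is monotonic and differentiable, matching the design criteria the abstract lists.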
63. Lin YH, Lu YC. Low-Light Enhancement Using a Plug-and-Play Retinex Model With Shrinkage Mapping for Illumination Estimation. IEEE Transactions on Image Processing 2022; 31:4897-4908. PMID: 35839183. DOI: 10.1109/tip.2022.3189805.
Abstract
Low-light photography conditions degrade image quality. This study proposes a novel Retinex-based low-light enhancement method to correctly decompose an input image into reflectance and illumination. Subsequently, we can improve the viewing experience by adjusting the illumination using intensity and contrast enhancement. Because image decomposition is a highly ill-posed problem, constraints must be properly imposed on the optimization framework. To meet the criteria of ideal Retinex decomposition, we design a nonconvex Lp norm and apply shrinkage mapping to the illumination layer. In addition, edge-preserving filters are introduced using the plug-and-play technique to improve illumination. Pixel-wise weights based on variance and image gradients are adopted to suppress noise and preserve details in the reflectance layer. We choose the alternating direction method of multipliers (ADMM) to solve the problem efficiently. Experimental results on several challenging low-light datasets show that our proposed method can more effectively enhance image brightness as compared with state-of-the-art methods. In addition to subjective observations, the proposed method also achieved competitive performance in objective image quality assessments.
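For context on the shrinkage mapping named in the title: for an l1 penalty, the proximal (shrinkage) operator is the classic soft-thresholding map below. The paper applies a generalized shrinkage mapping suited to its nonconvex Lp norm, so this is the familiar special case rather than the paper's exact operator:

```latex
\mathcal{S}_{\lambda}(v) = \operatorname{sign}(v)\,\max\!\big(|v| - \lambda,\, 0\big)
```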
64. Joint-Prior-Based Uneven Illumination Image Enhancement for Surface Defect Detection. Symmetry (Basel) 2022. DOI: 10.3390/sym14071473.
Abstract
Images in real surface defect detection scenes often suffer from uneven illumination. Retinex-based image enhancement methods can effectively eliminate the interference caused by uneven illumination and improve the visual quality of such images. However, these methods suffer from the loss of defect-discriminative information and a high computational burden. To address these issues, we propose a joint-prior-based uneven illumination enhancement (JPUIE) method. Specifically, a semi-coupled Retinex model is first constructed to accurately and effectively eliminate uneven illumination. Furthermore, a multiscale Gaussian-difference-based background prior is proposed to reweight the data consistency term, thereby avoiding the loss of defect information in the enhanced image. Finally, by using the powerful nonlinear fitting ability of deep neural networks, a deep denoised prior is proposed to replace existing physics priors, effectively reducing the time consumption. Various experiments are carried out on public and private datasets, comparing defect images and enhanced results in a symmetric way. The experimental results demonstrate that our method is more conducive to downstream visual inspection tasks than other methods.
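A minimal sketch of a multiscale Gaussian-difference background response of the kind such a prior could be built on; the scales, normalization, and function name are illustrative assumptions, not the paper's values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_background_weight(gray, sigma_pairs=((1, 2), (2, 4), (4, 8))):
    """Multiscale Gaussian-difference response; a low response marks flat
    background, so the returned weight is high there (scales assumed)."""
    gray = np.asarray(gray, dtype=np.float64)
    resp = np.zeros_like(gray)
    for s1, s2 in sigma_pairs:
        resp += np.abs(gaussian_filter(gray, s1) - gaussian_filter(gray, s2))
    resp /= resp.max() + 1e-8  # normalize accumulated response to [0, 1]
    return 1.0 - resp          # background weight for a data-consistency term
```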
65. Ahn S, Shin J, Lim H, Lee J, Paik J. CODEN: combined optimization-based decomposition and learning-based enhancement network for Retinex-based brightness and contrast enhancement. Optics Express 2022; 30:23608-23621. PMID: 36225037. DOI: 10.1364/oe.459063.
Abstract
In this paper, we present a novel low-light image enhancement method that combines optimization-based decomposition with a learning-based enhancement network to simultaneously enhance brightness and contrast. The proposed method works in two steps, Retinex decomposition and illumination enhancement, and can be trained in an end-to-end manner. The first step separates the low-light image into illumination and reflectance components based on the Retinex model. Specifically, it performs model-based optimization followed by learning for edge-preserved illumination smoothing and detail-preserved reflectance denoising. In the second step, the illumination output from the first step, together with its gamma-corrected and histogram-equalized versions, serves as input to an illumination enhancement network (IEN) built from residual squeeze-and-excitation blocks (RSEBs). Extensive experiments show that our method outperforms state-of-the-art low-light enhancement methods on both objective and subjective measures.
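A minimal sketch of assembling the three illumination inputs described above (raw, gamma-corrected, and histogram-equalized); the function name and the gamma value are assumptions for illustration:

```python
import numpy as np

def ien_inputs(illum, gamma=2.2):
    """Stack the raw, gamma-corrected, and histogram-equalized illumination
    maps as three IEN input channels (gamma value is an assumption)."""
    illum = np.clip(np.asarray(illum, dtype=np.float64), 0.0, 1.0)
    gamma_corrected = illum ** (1.0 / gamma)     # brightens dark regions
    hist, edges = np.histogram(illum, bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()             # empirical CDF in [0, 1]
    equalized = np.interp(illum.ravel(), edges[:-1], cdf).reshape(illum.shape)
    return np.stack([illum, gamma_corrected, equalized], axis=0)
```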
66. Chen Z, Jiang Y, Liu D, Wang Z. CERL: A Unified Optimization Framework for Light Enhancement With Realistic Noise. IEEE Transactions on Image Processing 2022; 31:4162-4172. PMID: 35700251. DOI: 10.1109/tip.2022.3180213.
Abstract
Low-light images captured in the real world are inevitably corrupted by sensor noise. Such noise is spatially variant and highly dependent on the underlying pixel intensity, deviating from the oversimplified assumptions of conventional denoising. Existing light enhancement methods either overlook the important impact of real-world noise during enhancement or treat noise removal as a separate pre- or post-processing step. We present Coordinated Enhancement for Real-world Low-light Noisy Images (CERL), which seamlessly integrates light enhancement and noise suppression into a unified, physics-grounded optimization framework. For the real low-light noise removal part, we customize a self-supervised denoising model that can easily be adapted without clean ground-truth images. For the light enhancement part, we also improve the design of a state-of-the-art backbone. The two parts are then jointly formulated into one principled plug-and-play optimization. Our approach is compared against state-of-the-art low-light enhancement methods both qualitatively and quantitatively. Besides standard benchmarks, we further collect and test on a new realistic low-light mobile photography dataset (RLMP), whose mobile-captured photos display heavier realistic noise than those taken by high-quality cameras. CERL consistently produces the most visually pleasing and artifact-free results across all experiments. Our RLMP dataset and codes are available at: https://github.com/VITA-Group/CERL.
67. Low Light Image Enhancement Algorithm Based on Detail Prediction and Attention Mechanism. Entropy 2022; 24:e24060815. PMID: 35741536. PMCID: PMC9222247. DOI: 10.3390/e24060815.
Abstract
Most low-light image enhancement (LLIE) algorithms focus solely on increasing image brightness and ignore the extraction of image details, losing much of the information that conveys image semantics, such as edges, textures, and shape features, and thereby distorting the image. In this paper, the DELLIE algorithm is proposed: a deep-learning framework centered on the extraction and fusion of image detail features. Unlike existing methods, it first performs basic enhancement preprocessing and then obtains detail enhancement components using the proposed detail component prediction model. Next, the V channel is decomposed into a reflectance map and an illumination map by the proposed decomposition network, and the enhancement component is used to enhance the reflectance map. The S and H channels are then nonlinearly constrained using an improved adaptive loss function, while an attention mechanism is incorporated into the algorithm. Finally, the three channels are fused to obtain the final enhancement result. The experimental results show that, compared with current mainstream LLIE algorithms, DELLIE extracts and recovers image detail information well while improving luminance, with PSNR, SSIM, and NIQE improved by 1.85%, 4.00%, and 2.43% on average on recognized datasets.
68. A boundary migration model for imaging within volumetric scattering media. Nat Commun 2022; 13:3234. PMID: 35680924. PMCID: PMC9184484. DOI: 10.1038/s41467-022-30948-7.
Abstract
Effectively imaging within volumetric scattering media is important yet challenging, especially in macroscopic applications. Recent works have demonstrated the ability to image through scattering media, or within weakly scattering media, using the spatial distribution or temporal characteristics of the scattered field. Here, we focus on imaging Lambertian objects embedded in highly scattering media, where signal photons are dramatically attenuated during propagation and highly coupled with background photons. We address these challenges with a time-to-space boundary migration model (BMM) of the scattered field, which converts scattered measurements in spectral form to scene information in the temporal domain using all of the optical signals. Experiments are conducted under two typical scattering scenarios, 2D and 3D Lambertian objects embedded in polyethylene foam and in fog, demonstrating the effectiveness of the proposed algorithm. It outperforms related works, including time gating, in terms of reconstruction precision and scattering strength. Even when the proportion of signal photons is only 0.75%, Lambertian objects located at more than 25 transport mean free paths (TMFPs), corresponding to a round-trip scattering length of more than 50 TMFPs, can be reconstructed. The method also offers low reconstruction complexity and millisecond-scale runtime, which significantly benefits its application.
69. Huang D, Liu J, Zhou S, Tang W. Deep unsupervised endoscopic image enhancement based on multi-image fusion. Computer Methods and Programs in Biomedicine 2022; 221:106800. PMID: 35533420. DOI: 10.1016/j.cmpb.2022.106800.
Abstract
Background and objective: A deep unsupervised endoscopic image enhancement method based on multi-image fusion is proposed to obtain high-quality endoscope images from poorly illuminated, low-contrast, and color-deviated inputs through an unsupervised mapping and deep learning network, without the need for ground truth.
Methods: First, three image enhancement methods are applied to the original endoscopic images to obtain three derived images, which are then transformed into the HSI color space. Second, a deep unsupervised multi-image fusion network (DerivedFuse) is proposed to extract and fuse the features of the derived images accurately, using a new no-reference quality metric as the loss function. The I-channel images of the three derived images are fed into DerivedFuse to enhance the intensity component of the original image. Finally, a saturation adjustment function is proposed to adaptively adjust the saturation component of the HSI color space, enriching the color information of the original input image.
Results: Three evaluation metrics are used to assess the proposed method: entropy, Contrast Improvement Index (CII), and Average Gradient (AG). The results are compared with those of fourteen state-of-the-art algorithms: the entropy value of our method is 3.27% higher than the best comparison value, the CII is 6.19% higher, and the AG is 7.83% higher.
Conclusions: The proposed deep unsupervised multi-image fusion method preserves image detail and enhances endoscopic images with high contrast and rich, natural color, improving visual and image quality. Sixteen doctors and medical students have given their assessments of the proposed method for assisting clinical diagnoses.
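For reference, the intensity (I) component of the HSI color space that DerivedFuse enhances is, in the standard HSI conversion, the mean of the RGB channels:

```latex
I = \frac{R + G + B}{3}
```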
Affiliations
- Dongjin Huang: Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China
- Jinhua Liu: Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China
- Shuhua Zhou: Shanghai Film Academy, Shanghai University, Room 304, No.2 Teaching Building, 149 Yanchang Road, Shanghai 200072, China
- Wen Tang: The Faculty of Science, Design and Technology, University of Bournemouth, Poole, Dorset, UK
70. Low-Light Image Enhancement Method Based on Retinex Theory by Improving Illumination Map. Applied Sciences (Basel) 2022. DOI: 10.3390/app12105257.
Abstract
Recently, low-light image enhancement has attracted much attention, but some problems remain. For instance, dark regions are sometimes not fully improved, while bright regions near the light source or an auxiliary light source are overexposed. To address these problems, a Retinex-based method that strengthens the illumination map is proposed. It utilizes a brightness enhancement function (BEF), a weighted sum of a Sigmoid function cascaded with Gamma correction (GC) and a Sine function, together with an improved adaptive contrast enhancement (IACE), to enhance the estimated illumination map through multi-scale fusion. Specifically, the illumination map is first obtained according to Retinex theory via a weighted-sum method that considers neighborhood information. Then, a Gaussian-Laplacian pyramid fuses the two input images derived by the BEF and IACE, improving the brightness and contrast of the illumination component. Finally, the adjusted illumination map is multiplied by the reflectance map to obtain the enhanced image, following Retinex theory. Extensive experiments show that our method gives better results in subjective visual quality and quantitative evaluation than other state-of-the-art methods.
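A schematic form of the BEF consistent with the description above, a weighted sum of a Sigmoid applied to the Gamma-corrected illumination and a Sine stretch; the weight w, slope k, and gamma are illustrative assumptions, not the paper's values:

```latex
\mathrm{BEF}(L) = w\,\sigma\!\big(k\,(L^{1/\gamma} - 0.5)\big) \;+\; (1 - w)\,\sin\!\Big(\frac{\pi}{2}\,L\Big), \qquad \sigma(t) = \frac{1}{1 + e^{-t}}
```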
71. Liu X, Yang Y, Zhong Y, Xiong D, Huang Z. Super-Pixel Guided Low-Light Images Enhancement with Features Restoration. Sensors (Basel, Switzerland) 2022; 22:3667. PMID: 35632073. PMCID: PMC9147131. DOI: 10.3390/s22103667.
Abstract
Dealing with low-light images is a challenging problem in image processing. A mature low-light enhancement technology is not only conducive to human visual perception but also lays a solid foundation for subsequent high-level tasks such as target detection and image classification. To balance the visual quality of the image against its contribution to subsequent tasks, this paper proposes utilizing shallow Convolutional Neural Networks (CNNs) as prior image processing to restore necessary image feature information, followed by super-pixel segmentation to obtain image regions of similar color and brightness and, finally, an Attentive Neural Processes (ANPs) network that finds a local enhancement function on each super-pixel to further restore features and details. In extensive experiments on synthesized and real low-light images, our algorithm reaches 23.402, 0.920, and 2.2490 for Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Natural Image Quality Evaluator (NIQE), respectively. As demonstrated by experiments on Scale-Invariant Feature Transform (SIFT) feature detection and subsequent target detection, our approach achieves excellent results in both visual quality and image features.
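For reference, the PSNR figure quoted above is the standard peak signal-to-noise ratio, with MAX the peak pixel value (255 for 8-bit images) and MSE the mean squared error against the reference image:

```latex
\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}}\right)
```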
72. Li W, Fan J, Li Y, Hao P, Lin Y, Fu T, Ai D, Song H, Yang J. Endoscopy image enhancement method by generalized imaging defect models based adversarial training. Phys Med Biol 2022; 67. DOI: 10.1088/1361-6560/ac6724.
Abstract
Objective: Smoke, uneven lighting, and color deviation are common issues in endoscopic surgery that increase surgical risk and can even lead to failure.
Approach: In this study, we present a new physics-model-driven semi-supervised learning framework for high-quality pixel-wise endoscopic image enhancement that generalizes to smoke removal, light adjustment, and color correction. To improve the authenticity of the generated images, and thereby network performance, we integrate specific physical imaging defect models with the CycleGAN framework; no paired ground-truth data are required. In addition, we propose a transfer learning framework to address data scarcity in several endoscope enhancement tasks and improve network performance.
Main results: Qualitative and quantitative studies reveal that the proposed network outperforms state-of-the-art image enhancement methods. In particular, it performs much better than the original CycleGAN: for example, in the smoke removal task, structural similarity improved from 0.7925 to 0.8648, feature similarity for color images from 0.8917 to 0.9283, and quaternion structural similarity from 0.8097 to 0.8800. The proposed transfer learning method also shows superior performance when trained with small datasets for target tasks.
Significance: Experimental results on endoscopic images prove the effectiveness of the proposed network in smoke removal, light adjustment, and color correction, showing excellent clinical usefulness.
73. Li S, Liu F, Wei J. Dehazing and deblurring of underwater images with heavy-tailed priors. Applied Optics 2022; 61:3855-3870. PMID: 36256430. DOI: 10.1364/ao.452345.
Abstract
Common problems of underwater images include color cast, haze, and the motion blur caused by turbulence and camera shake; this paper addresses all three. Because red light attenuates significantly underwater, causing color cast, we propose a red channel compensation method that adaptively compensates the red channel according to its pixel values, successfully preventing excessive compensation. To address haze, a variational method built on the physical model of underwater images is introduced, which not only recovers clear underwater images but also refines the transmission map. Furthermore, blind deconvolution is adopted to deblur underwater images: the blur kernel is first estimated, and a clear image is then recovered based on the obtained kernel. Finally, qualitative and quantitative comparisons of underwater images recovered by different methods are carried out. Qualitatively, the images recovered by our method exhibit higher sharpness and more outstanding detail; quantitatively, they score higher on various criteria. On the whole, our method therefore presents clear advantages over the others.
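As an illustration of red channel compensation, the widely used rule of Ancuti et al. compensates the red channel using the better-preserved green channel and the channel means; the paper's adaptive scheme differs in how the compensation strength is set, so treat this as a generic sketch (alpha is an assumed constant):

```python
import numpy as np

def compensate_red(rgb, alpha=1.0):
    """Ancuti-style red channel compensation on an RGB image in [0, 1];
    a generic stand-in for the paper's adaptive scheme."""
    rgb = np.clip(np.asarray(rgb, dtype=np.float64), 0.0, 1.0)
    r, g = rgb[..., 0], rgb[..., 1]
    # Compensate red where it is weak, guided by the green channel.
    r_comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = rgb.copy()
    out[..., 0] = np.clip(r_comp, 0.0, 1.0)
    return out
```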
74. DPSF: a Novel Dual-Parametric Sigmoid Function for Optical Coherence Tomography Image Enhancement. Med Biol Eng Comput 2022; 60:1111-1121. PMID: 35233689. DOI: 10.1007/s11517-022-02538-8.
Abstract
Speckle noise significantly reduces image contrast, making the boundaries of highly scattering structures difficult to distinguish. This has limited the use of optical coherence tomography (OCT) images in clinical routine and hindered their potential by depriving clinicians of useful information needed for disease monitoring, treatment, progression assessment, and decision making. To overcome this limitation, we propose a fast and robust OCT image enhancement framework using a non-linear statistical parametric technique. In the proposed framework, we utilize prior statistical information to model the image as Gaussian-distributed. A newly designed dual-parametric sigmoid function (DPSF) is then used to control the dynamic range and contrast level of the image. To balance the intensity range and contrast level, both linear and non-linear normalization operations are performed, followed by a mapping operation to obtain the enhanced image. Experiments on images from three OCT vendors show that the proposed method obtains high EME, PSNR, SSIM, and ρ values and a low MSE: 36.72, 38.87, 0.87, 0.98, and 25.12 for Cirrus; 40.77, 41.84, 0.89, 0.98, and 22.15 for Spectralis; and 30.81, 32.10, 0.81, 0.96, and 28.55 for Topcon devices, respectively. The DPSF framework outperforms state-of-the-art methods and improves the interpretability and perception of OCT images, providing clinicians and computer vision programs with good quantitative and qualitative information.
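A generic two-parameter sigmoid of the kind the abstract describes, where a controls the contrast (slope) and b the brightness midpoint of the mapping; the paper's exact DPSF parameterization may differ:

```latex
f(x) = \frac{1}{1 + e^{-a\,(x - b)}}
```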
75. Xia W, Chen E, Pautler S, Peters T. Laparoscopic image enhancement based on distributed retinex optimization with refined information fusion. Neurocomputing 2022. DOI: 10.1016/j.neucom.2021.08.142.
76. Hum YC, Tee YK, Yap WS, Mokayed H, Tan TS, Salim MIM, Lai KW. A contrast enhancement framework under uncontrolled environments based on just noticeable difference. Signal Processing: Image Communication 2022; 103:116657. DOI: 10.1016/j.image.2022.116657.
77. Lu Y, Jung SW. Progressive Joint Low-Light Enhancement and Noise Removal for Raw Images. IEEE Transactions on Image Processing 2022; 31:2390-2404. PMID: 35259104. DOI: 10.1109/tip.2022.3155948.
Abstract
Low-light imaging on mobile devices is typically challenging due to insufficient incident light coming through the relatively small aperture, resulting in low image quality. Most of the previous works on low-light imaging focus either only on a single task such as illumination adjustment, color enhancement, or noise removal; or on a joint illumination adjustment and denoising task that heavily relies on short-long exposure image pairs from specific camera models. These approaches are less practical and generalizable in real-world settings where camera-specific joint enhancement and restoration is required. In this paper, we propose a low-light imaging framework that performs joint illumination adjustment, color enhancement, and denoising to tackle this problem. Considering the difficulty in model-specific data collection and the ultra-high definition of the captured images, we design two branches: a coefficient estimation branch and a joint operation branch. The coefficient estimation branch works in a low-resolution space and predicts the coefficients for enhancement via bilateral learning, whereas the joint operation branch works in a full-resolution space and progressively performs joint enhancement and denoising. In contrast to existing methods, our framework does not need to recollect massive data when adapted to another camera model, which significantly reduces the efforts required to fine-tune our approach for practical usage. Through extensive experiments, we demonstrate its great potential in real-world low-light imaging applications.
78. Cui H, Li J, Hua Z, Fan L. Attention-Guided Multi-Scale Feature Fusion Network for Low-Light Image Enhancement. Front Neurorobot 2022; 16:837208. PMID: 35308314. PMCID: PMC8927072. DOI: 10.3389/fnbot.2022.837208.
Abstract
Low-light image enhancement is an important research branch in computer vision. Low-light images are characterized by poor visibility, high noise, and low contrast. To improve images captured in low-light environments and at night, we propose an Attention-Guided Multi-scale Feature Fusion Network (MSFFNet) that enhances the contrast and brightness of low-light images. First, to avoid the high computational cost of stacking multiple sub-networks, our network uses a single encoder and decoder for multi-scale input and output images. Multi-scale inputs compensate for the missing pixel information and feature-map information of a single input image, while multi-scale outputs effectively monitor the error loss during image reconstruction. Second, the Convolutional Block Attention Module (CBAM) is introduced in the encoder to suppress the noise and color differences produced during feature extraction and to further guide the network in refining color features. A feature calibration module (FCM) is introduced in the decoder to strengthen the mapping between channels, and an attention fusion module (AFM) captures contextual information that aids the recovery of image detail. Finally, a cascade fusion module (CFM) effectively combines feature-map information across different receptive fields. Extensive qualitative and quantitative experiments on a variety of publicly available datasets show that the proposed MSFFNet outperforms other low-light enhancement methods in both visual quality and metric scores.
Affiliations
- HengShuai Cui: College of Electronic and Communications Engineering, Shandong Technology and Business University, Yantai, China
- Jinjiang Li (corresponding author): College of Electronic and Communications Engineering, Shandong Technology and Business University, Yantai, China; Institute of Network Technology, Institute of Computing Technology (ICT), Yantai, China
- Zhen Hua: College of Electronic and Communications Engineering, Shandong Technology and Business University, Yantai, China; Institute of Network Technology, Institute of Computing Technology (ICT), Yantai, China
- Linwei Fan: School of Computer Science and Technology, Shandong University of Finance and Economics, Jinan, China
79. LE-GAN: Unsupervised low-light image enhancement network using attention module and identity invariant loss. Knowl Based Syst 2022. DOI: 10.1016/j.knosys.2021.108010.
80. N2PN: Non-reference two-pathway network for low-light image enhancement. Appl Intell 2022. DOI: 10.1007/s10489-021-02627-5.
81. Han R, Tang C, Xu M, Li J, Lei Z. Joint enhancement and denoising in electronic speckle pattern interferometry fringe patterns with low contrast or uneven illumination via an oriented variational Retinex model. Journal of the Optical Society of America A 2022; 39:239-249. PMID: 35200960. DOI: 10.1364/josaa.433747.
Abstract
Simultaneous speckle reduction and contrast enhancement for electronic speckle pattern interferometry (ESPI) fringe patterns is a challenging task. In this paper, we propose a joint enhancement and denoising method based on an oriented variational Retinex model for ESPI fringe patterns with low contrast or uneven illumination. In our model, we use a structure prior to constrain the illumination and introduce a fractional-order differential to constrain the reflectance for enhancement; the second-order partial derivative of the reflectance then serves as the denoising term to reduce noise. The model is solved sequentially, yielding a piecewise-smoothed illumination and a noise-suppressed reflectance in turn, which avoids residual noise in the illumination and reflectance maps. After obtaining the refined illumination and reflectance, we substitute the gamma-corrected illumination into the camera response function to further adjust the reflectance as the final enhancement result. We test the proposed method on two non-uniformly illuminated computer-simulated and two low-contrast experimentally obtained ESPI fringe patterns, and compare it with three other joint enhancement and denoising variational Retinex methods.
82. Wang W, Wang A, Liu C. Variational Single Nighttime Image Haze Removal With a Gray Haze-Line Prior. IEEE Transactions on Image Processing 2022; 31:1349-1363. PMID: 35025742. DOI: 10.1109/tip.2022.3141252.
Abstract
Influenced by glowing effects, nighttime haze removal is a challenging ill-posed task. Existing nighttime dehazing methods usually result in glowing artifacts, color shifts, overexposure, and noise amplification. Thus, through statistical and theoretical analyses, we propose a simple and effective gray haze-line prior (GHLP) to identify accurate hazy feature areas. This prior demonstrates that haze is concentrated on the haze line in the RGB color space and can be accurately projected into the gray component in the Y channel of the YUV color space. Based on this prior, we establish a new unified nighttime haze removal framework and then decompose a nighttime hazy image into color and gray components in the YUV color space. Glowing color correction and haze removal are two important consecutive steps in the nighttime dehazing process. The glowing color correction method is designed to separately remove glow in the color component and enhance illumination in the gray component. After obtaining a refined nighttime hazy image, we propose a new structure-aware variational framework to simultaneously estimate the inverted scene radiance and the transmission in the gray component. This approach can not only recover the high-quality nighttime scene radiance but also preserve the significant structural information and intrinsic color of the scene. Quantitative and qualitative comparisons validate the excellent effectiveness of the proposed nighttime dehazing method against previous state-of-the-art methods. In addition, the proposed approach can be extended to achieve image enhancement for inclement weather scenes, such as sandstorm scenes and extreme daytime hazy scenes.
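For reference, projecting into the Y channel of the YUV color space uses the standard luma weighting (the BT.601 form is shown; the exact constants the paper uses may differ):

```latex
Y = 0.299\,R + 0.587\,G + 0.114\,B
```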
83. Huang H, Yang W, Hu Y, Liu J, Duan LY. Towards Low Light Enhancement With RAW Images. IEEE Transactions on Image Processing 2022; 31:1391-1405. PMID: 35038292. DOI: 10.1109/tip.2022.3140610.
Abstract
In this paper, we make the first benchmark effort to elaborate on the superiority of using RAW images for low-light enhancement and develop a novel alternative route to utilize RAW images in a more flexible and practical way. Motivated by a full consideration of the typical image processing pipeline, we develop a new evaluation framework, the Factorized Enhancement Model (FEM), which decomposes the properties of RAW images into measurable factors and provides a tool for empirically exploring how these properties affect enhancement performance. The benchmark results show that the linearity of the data and the exposure time recorded in the metadata play the most critical roles, bringing distinct performance gains in various measures over approaches that take sRGB images as input. With the insights obtained from the benchmark in mind, a RAW-guiding Exposure Enhancement Network (REENet) is developed, which trades off the advantages and inaccessibility of RAW images in real applications by using RAW images only in the training phase. REENet projects sRGB images into linear RAW domains to apply constraints from the corresponding RAW images, reducing the difficulty of training. In the testing phase, REENet does not rely on RAW images. Experimental results demonstrate not only the superiority of REENet over state-of-the-art sRGB-based methods but also the effectiveness of the RAW guidance and of all components.
84. Liu R, Ma L, Yuan X, Zeng S, Zhang J. Task-Oriented Convex Bilevel Optimization With Latent Feasibility. IEEE Transactions on Image Processing 2022; 31:1190-1203. PMID: 35015638. DOI: 10.1109/tip.2022.3140607.
Abstract
This paper proposes a convex bilevel optimization paradigm to formulate and optimize popular learning and vision problems in real-world scenarios. Different from conventional approaches, which directly design iteration schemes based on a given problem formulation, we introduce a task-oriented energy as a latent constraint that integrates richer task information. By explicitly re-characterizing feasibility, we establish an efficient and flexible algorithmic framework for tackling convex models with both a shrunken solution space and powerful auxiliaries (based on domain knowledge and the data distribution of the task). In theory, we present the convergence analysis of our numerical strategy based on the latent-feasibility re-characterization and analyze the stability of the theoretical convergence under computational error perturbations. Extensive numerical experiments verify our theoretical findings and evaluate the practical performance of our method on different applications.
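In generic form, a convex bilevel program with a latent (lower-level) feasibility constraint minimizes an upper-level objective F over the solution set of a task-oriented lower-level energy f; this is a schematic of the paradigm, not the paper's exact formulation:

```latex
\min_{x}\; F(x) \quad \text{s.t.} \quad x \in \operatorname*{arg\,min}_{y}\; f(y)
```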
85. Zhu H, Wang K, Zhang Z, Liu Y, Jiang W. Low-light image enhancement network with decomposition and adaptive information fusion. Neural Comput Appl 2022. DOI: 10.1007/s00521-021-06836-4.
87. Low Light Video Enhancement Based on Temporal-Spatial Complementary Feature. Artif Intell 2022. DOI: 10.1007/978-3-031-20497-5_30.
88. Low-Light Image Enhancement Under Mixed Noise Model with Tensor Representation. Artif Intell 2022. DOI: 10.1007/978-3-031-20497-5_48.
89. Enhancement and denoising method for low-quality MRI, CT images via the sequence decomposition Retinex model, and haze removal algorithm. Med Biol Eng Comput 2021; 59:2433-2448. PMID: 34661856. DOI: 10.1007/s11517-021-02451-6.
Abstract
The visibility and analyzability of MRI and CT images have a great impact on the diagnosis of medical diseases. For low-quality MRI and CT images, it is therefore necessary to effectively improve contrast while suppressing noise. In this paper, we propose an enhancement and denoising strategy for low-quality medical images based on the sequence decomposition Retinex model and an inverse haze removal approach. Specifically, we first estimate the smoothed illumination and de-noised reflectance in a successive sequence. Then, we apply a color inversion over the 0-255 range to the estimated illumination and introduce a haze removal approach based on the dark channel prior to adjust the inverted illumination. Finally, the enhanced image is generated by combining the adjusted illumination and the de-noised reflectance. As a result, the processed images show improved visibility while insufficient or excessive enhancement is avoided. To verify the reliability of the proposed method, we perform qualitative and quantitative evaluations on five MRI datasets and one CT dataset. Experimental results demonstrate that the proposed method strikes an excellent balance between enhancement and denoising, outperforming several state-of-the-art methods.
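For reference, the dark channel prior used to adjust the inverted illumination is the statistic of He et al.: for a haze-free image J, the double minimum over a local patch Ω(x) and the color channels is close to zero, which is what makes it usable for estimating transmission:

```latex
J^{\mathrm{dark}}(x) = \min_{y \in \Omega(x)} \Big( \min_{c \in \{r,g,b\}} J^{c}(y) \Big) \approx 0
```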
Collapse
|
90
|
Zhang T, Dong J, Yang L, Liu S, Lu R. Automatic defect inspection of thin film transistor-liquid crystal display panels using robust one-dimensional Fourier reconstruction with non-uniform illumination correction. THE REVIEW OF SCIENTIFIC INSTRUMENTS 2021; 92:103701. [PMID: 34717417 DOI: 10.1063/5.0060636] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Accepted: 09/11/2021] [Indexed: 06/13/2023]
Abstract
Automatic inspection of micro-defects of thin film transistor-liquid crystal display (TFT-LCD) panels is a critical task in LCD manufacturing. To meet the practical demand of online inspection of a one-dimensional (1D) line image captured by the line scan visual system, we propose a robust 1D Fourier reconstruction method capable of automatically determining the period Δx of the periodic pattern of a spatial domain line image and the neighboring length r of the frequency peaks of the corresponding frequency domain line image. Moreover, to alleviate the difficulty of discriminating between the defects and the non-uniform illumination background, we present an effective way to correct the non-uniform background using robust locally weighted smoothing combined with polynomial curve fitting. As a proof of concept, we built a line scan visual system and tested the captured line images. The results reveal that the proposed method corrects the non-uniform illumination background in a way that not only avoids false alarms in defect inspection but also preserves complete information about the defects in terms of brightness, darkness, and shape, indicating its distinct advantage in defect inspection of TFT-LCD panels.
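A minimal sketch of the two steps described above, under simplifying assumptions: the robust locally weighted smoothing is emulated by a plain polynomial fit, and the frequency-peak search keeps only the dominant peak and its harmonics. The function name and parameters are illustrative, not the paper's.

```python
import numpy as np

def defect_residual(line, r=2, poly_deg=3):
    """Sketch of 1D Fourier reconstruction for a periodic line image:
    remove a smooth non-uniform background, then suppress the periodic
    pattern in the frequency domain so that defects remain in the
    residual. Peak picking and parameters are simplified assumptions."""
    n = len(line)
    x = np.arange(n)
    bg = np.polyval(np.polyfit(x, line, poly_deg), x)  # background correction
    F = np.fft.rfft(line - bg)
    mag = np.abs(F)
    mag[0] = 0.0                                   # ignore the DC component
    k = max(int(np.argmax(mag)), 1)                # dominant peak -> period = n / k
    for h in range(k, len(F), k):                  # zero the peaks and +-r neighbours
        F[max(h - r, 1):min(h + r + 1, len(F))] = 0.0
    return np.fft.irfft(F, n), n / k               # residual, estimated period
```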
Collapse
Affiliation(s)
- Tengda Zhang
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
| | - Jingtao Dong
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
| | - Lei Yang
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
| | - Shanlin Liu
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
| | - Rongsheng Lu
- Anhui Province Key Laboratory of Measuring Theory and Precision Instrument, School of Instrument Science and Optoelectronics Engineering, Hefei University of Technology, Hefei 230009, Anhui, China
| |
Collapse
|
91
|
Hu J, Guo X, Chen J, Liang G, Deng F, Lam TL. A Two-Stage Unsupervised Approach for Low Light Image Enhancement. IEEE Robot Autom Lett 2021. [DOI: 10.1109/lra.2020.3048667] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
92
|
Liu M, Tang L, Zhong S, Luo H, Peng J. Learning noise-decoupled affine models for extreme low-light image enhancement. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.107] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
93
|
Devignetting fundus images via Bayesian estimation of illumination component and gamma correction. Biocybern Biomed Eng 2021. [DOI: 10.1016/j.bbe.2021.06.008] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
94
|
Liu S, Long W, He L, Li Y, Ding W. Retinex-Based Fast Algorithm for Low-Light Image Enhancement. ENTROPY 2021; 23:e23060746. [PMID: 34199282 PMCID: PMC8231777 DOI: 10.3390/e23060746] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/15/2021] [Revised: 06/07/2021] [Accepted: 06/08/2021] [Indexed: 11/16/2022]
Abstract
In this paper, we propose the Retinex-based fast algorithm (RBFA) for low-light image enhancement, which can restore information hidden by low illuminance. The proposed algorithm consists of the following steps. First, we convert the low-light image from the RGB (red, green, blue) color space to the HSV (hue, saturation, value) color space and use a linear function to stretch the original gray-level dynamic range of the V component. Then, we estimate the illumination image via adaptive gamma correction and use the Retinex model to achieve brightness enhancement. After that, we further stretch the gray-level dynamic range to avoid low image contrast. Finally, we design another mapping function to achieve color saturation correction and convert the enhanced image from the HSV color space back to the RGB color space, yielding the final clear image. The experimental results show that images enhanced with the proposed method achieve better qualitative and quantitative evaluations with lower computational complexity than other state-of-the-art methods.
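The pipeline is concrete enough to sketch end to end. In the sketch below, the illumination estimate (Gaussian blur), the adaptive-gamma rule (mapping the mean to 0.5), and the saturation boost are simplified assumptions standing in for the paper's exact functions.

```python
import cv2
import numpy as np

def rbfa_like(bgr):
    """Rough sketch of the RBFA-style pipeline; the illumination estimate,
    adaptive-gamma rule, and saturation mapping are assumptions."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float64)
    h, s, v = cv2.split(hsv)
    # 1) linear stretch of V over the full dynamic range
    v = (v - v.min()) / max(v.max() - v.min(), 1e-6) * 255.0
    # 2) illumination estimate plus adaptive gamma (mean brightness -> 0.5)
    L0 = cv2.GaussianBlur(v, (0, 0), 15) / 255.0 + 1e-6
    gamma = np.log(0.5) / np.log(np.clip(L0.mean(), 1e-6, 0.999))
    L1 = np.power(np.clip(L0, 1e-6, 1.0), gamma)
    # 3) Retinex: reflectance times gamma-corrected illumination
    v_enh = np.clip((v / 255.0) / L0 * L1 * 255.0, 0, 255)
    # 4) second stretch to avoid low contrast
    v_enh = (v_enh - v_enh.min()) / max(np.ptp(v_enh), 1e-6) * 255.0
    # 5) simple saturation boost stands in for the saturation mapping
    s = np.clip(s * 1.1, 0, 255)
    out = cv2.merge([h, s, v_enh]).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_HSV2BGR)
```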
Collapse
Affiliation(s)
| | - Yanyan Li
- Correspondence: ; Tel.: +86-15002820593
| |
Collapse
|
95
|
Khan R, Yang Y, Liu Q, Shen J, Li B. Deep image enhancement for ill light imaging. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2021; 38:827-839. [PMID: 34143152 DOI: 10.1364/josaa.410316] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/21/2020] [Accepted: 03/16/2021] [Indexed: 06/12/2023]
Abstract
Imaging natural scenes under ill lighting conditions (e.g., low light, back-lit, over-exposed front-lit, and any combination of these) suffers from both over- and under-exposure at the same time, and processing such images often results in over- and under-enhancement. A single small image sensor with an ordinary optical lens can hardly provide satisfactory quality under ill lighting conditions. Challenges arise in maintaining visual smoothness between the over- and under-exposed regions while preserving color and contrast. The problem has been approached by various methods, including multiple sensors and handcrafted parameters, but existing models are limited to specific scenes (i.e., lighting conditions). Motivated by these challenges, in this paper, we propose a deep image enhancement method for color images captured under ill lighting conditions. In this method, input images are first decomposed into reflection and illumination maps with the proposed layer-distribution loss net, through which the illumination blindness and structure degradation problems can be solved, respectively. The hidden degradation in the reflection and illumination maps is tuned with a knowledge-based adaptive enhancement constraint designed for ill-illuminated images. The model maintains a balance of smoothness and helps address noise in addition to over- and under-enhancement. Local consistency in illumination is achieved via a repairing operation performed by the proposed Repair-Net. The total variation operator is optimized to acquire local consistency, and the image gradient is guided with the proposed enhancement constraint. Finally, the product of the updated reflection and illumination maps reconstructs the enhanced image. Experiments are conducted under both very-low-exposure and ill-illumination conditions, for which a new dataset is also proposed. Results on both experiments show that our method is superior in preserving structural and textural details compared to other state-of-the-art methods, suggesting that it is more practical for future visual applications.
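Since the abstract centers on total-variation-based local consistency, here is a minimal gradient-descent sketch of TV-regularized smoothing of an illumination map. It is not the paper's Repair-Net; the smoothed-absolute-value TV term, step size, and parameters are assumptions.

```python
import numpy as np

def tv_smooth(L, lam=0.1, step=0.1, iters=100):
    """Minimal sketch of min_X 0.5*||X - L||^2 + lam * TV(X) by gradient
    descent, with an anisotropic, smoothed TV term; parameters assumed."""
    X = L.astype(np.float64).copy()
    eps = 1e-3
    for _ in range(iters):
        dx = np.diff(X, axis=1, append=X[:, -1:])  # horizontal forward difference
        dy = np.diff(X, axis=0, append=X[-1:, :])  # vertical forward difference
        px = dx / np.sqrt(dx**2 + eps)             # d/dg of sqrt(g^2 + eps)
        py = dy / np.sqrt(dy**2 + eps)
        # gradient of the TV term is minus the divergence of (px, py);
        # periodic boundaries via roll keep the sketch short
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        X -= step * ((X - L) - lam * div)
    return X
```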
Collapse
|
96
|
Raveendran S, Patil MD, Birajdar GK. Underwater image enhancement: a comprehensive review, recent trends, challenges and applications. Artif Intell Rev 2021. [DOI: 10.1007/s10462-021-10025-z] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
|
97
|
|
98
|
Abstract
Owing to their low signal-to-noise ratio and low contrast, low-light images suffer from color distortion, low visibility, and accompanying noise, which degrade the accuracy of target detection or even cause targets to be missed. However, re-annotating datasets for this type of image incurs increased cost or reduced model robustness. To address this problem, we propose a low-light image enhancement model based on deep learning. In this paper, feature extraction is guided by an illumination map and a noise map, and a neural network is trained to predict local affine model coefficients in bilateral space. Through these methods, our network can effectively denoise and enhance images. We have conducted extensive experiments on the LOL dataset, and the results show that the model outperforms traditional image enhancement algorithms in both image quality and speed.
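To illustrate the bilateral-space affine idea (independently of the network that predicts the coefficients, which is omitted here), the sketch below applies a low-resolution grid of per-region affine coefficients to a full-resolution grayscale image by trilinear interpolation over space and intensity. The grid shape and toy coefficient values are assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_affine_grid(img, A, B):
    """Apply per-region affine coefficients to a grayscale image in [0, 1].
    A and B are low-res grids of shape (gh, gw, gl): scale and offset
    indexed by (y, x, intensity). Trilinear 'slicing' via interpolation.
    Illustrative sketch, not the paper's trained model."""
    h, w = img.shape
    gh, gw, gl = A.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([
        ys / max(h - 1, 1) * (gh - 1),
        xs / max(w - 1, 1) * (gw - 1),
        np.clip(img, 0, 1) * (gl - 1),      # guidance = pixel intensity
    ])
    a = map_coordinates(A, coords, order=1, mode='nearest')
    b = map_coordinates(B, coords, order=1, mode='nearest')
    return np.clip(a * img + b, 0, 1)       # per-pixel affine transform

# e.g., a toy 16x16x8 grid that uniformly brightens (assumed values)
A = np.full((16, 16, 8), 1.5)
B = np.zeros((16, 16, 8))
```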
Collapse
|
99
|
Attention Guided Low-Light Image Enhancement with a Large Scale Low-Light Simulation Dataset. Int J Comput Vis 2021. [DOI: 10.1007/s11263-021-01466-8] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
100
|
Ngo D, Lee S, Ngo TM, Lee GD, Kang B. Visibility Restoration: A Systematic Review and Meta-Analysis. SENSORS 2021; 21:s21082625. [PMID: 33918021 PMCID: PMC8069147 DOI: 10.3390/s21082625] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Revised: 03/29/2021] [Accepted: 04/06/2021] [Indexed: 11/16/2022]
Abstract
Image acquisition is a complex process affected by a wide variety of internal and environmental factors. Hence, visibility restoration is crucial for many high-level applications in photography and computer vision. This paper provides a systematic review and meta-analysis of visibility restoration algorithms, focusing on those pertinent to poor weather conditions. It starts with an introduction to optical image formation and then provides a comprehensive description of existing algorithms as well as a comparative evaluation. Subsequently, it thoroughly discusses current difficulties that warrant further scientific effort. Moreover, it proposes a general framework for visibility restoration in hazy weather conditions using haze-relevant features and maximum likelihood estimates. Finally, a discussion of the findings and future developments concludes the paper.
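For orientation, nearly all haze-related restoration algorithms surveyed in reviews of this kind start from the Koschmieder atmospheric scattering model; a standard statement (notation assumed here, since the paper's exact symbols are not reproduced) is:

```latex
% Koschmieder atmospheric scattering model (standard notation, assumed):
% I = observed image, J = scene radiance, A = atmospheric light,
% t = transmission, \beta = scattering coefficient, d = scene depth
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
```

Restoration then amounts to estimating t and A from the hazy input and inverting the model for J.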
Collapse
Affiliation(s)
- Dat Ngo
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea; (D.N.); (S.L.); (G.-D.L.)
| | - Seungmin Lee
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea; (D.N.); (S.L.); (G.-D.L.)
| | - Tri Minh Ngo
- Faculty of Electronics and Telecommunication Engineering, The University of Danang—University of Science and Technology, Danang 550000, Vietnam;
| | - Gi-Dong Lee
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea; (D.N.); (S.L.); (G.-D.L.)
| | - Bongsoon Kang
- Department of Electronics Engineering, Dong-A University, Busan 49315, Korea; (D.N.); (S.L.); (G.-D.L.)
- Correspondence: ; Tel.: +82-51-200-7703
| |
Collapse
|