1
Liu X, Hong L, Lin Y. Rapid Fog-Removal Strategies for Traffic Environments. Sensors (Basel) 2023;23:7506. [PMID: 37687963; PMCID: PMC10490684; DOI: 10.3390/s23177506]
Abstract
In a foggy traffic environment, the vision-sensor signal of intelligent vehicles is distorted, the outlines of obstacles become blurred, and color information on the road is lost. To solve this problem, four ultra-fast defogging strategies for traffic environments are proposed for the first time. Experiments show that Fast Defogging Strategy 3 is best suited to fast defogging in a traffic environment. This strategy shrinks the original foggy image by a factor of 256 via bilinear interpolation, performs defogging with the dark channel prior algorithm, and then applies 4x upsampling and a Gaussian transform to the defogged image. Compared with the original dark channel prior algorithm, the image edges are clearer, the color information is enhanced, and the defogging time is reduced by 83.93-84.92%. The defogged images are then fed into the YOLOv4, YOLOv5, YOLOv6, and YOLOv7 target detection algorithms for verification, which confirms that vehicles and pedestrians can be detected effectively in a complex traffic environment. The experimental results show that the fast defogging strategy is suitable for fast defogging in a traffic environment.
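As a concrete illustration, here is a minimal Python/OpenCV sketch of the shrink-dehaze-upsample pipeline described above. It assumes "256 times" means a 256x reduction in pixel count (16x per axis), omits the paper's Gaussian post-transform, and uses common defaults for the patch size, omega, and t0, not the authors' settings; the input file name is hypothetical.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    # Per-pixel minimum over color channels, then a min-filter (erosion).
    mins = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(mins, kernel)

def dehaze_dcp(img, omega=0.95, t0=0.1, patch=15):
    img = img.astype(np.float64) / 255.0
    dark = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = img[idx].mean(axis=0)
    # Scattering model I(x) = J(x) t(x) + A (1 - t(x)); recover J from the
    # transmission estimate t = 1 - omega * dark_channel(I / A).
    t = 1.0 - omega * dark_channel(img / A, patch)
    J = (img - A) / np.maximum(t, t0)[..., None] + A
    return np.clip(J * 255, 0, 255).astype(np.uint8)

hazy = cv2.imread("foggy_road.jpg")                    # hypothetical input
small = cv2.resize(hazy, None, fx=1/16, fy=1/16,       # 16x per axis ~ 256x fewer pixels
                   interpolation=cv2.INTER_LINEAR)     # bilinear downsampling
clear_small = dehaze_dcp(small)
clear = cv2.resize(clear_small, (hazy.shape[1], hazy.shape[0]),
                   interpolation=cv2.INTER_LINEAR)     # upsample back to full size
```

Dehazing the shrunken image is what buys the speed: the min-filter and erosion costs scale with pixel count, so nearly all the work happens on a 256x smaller image.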
Affiliation(s)
- Yier Lin
- College of Mechanical Engineering, Tianjin University of Science and Technology, Tianjin 300222, China; (X.L.); (L.H.)
2
Guo J, Ma J, García-Fernández ÁF, Zhang Y, Liang H. A survey on image enhancement for low-light images. Heliyon 2023;9:e14558. [PMID: 37025779; PMCID: PMC10070385; DOI: 10.1016/j.heliyon.2023.e14558]
Abstract
In real scenes, low light and unsuitable viewpoints often leave images with a variety of degradations, such as low contrast, color distortion, and noise. These degradations affect not only visual quality but also computer vision tasks. This paper focuses on the combination of traditional algorithms and machine learning algorithms in the field of image enhancement. The traditional methods, including their principles and improvements, are introduced in three categories: gray-level transformation, histogram equalization, and Retinex methods. Machine-learning-based algorithms are divided into end-to-end learning and unpaired learning, and further grouped into decomposition-based and fusion-based learning according to the image-processing strategies they apply. Finally, the surveyed methods are comprehensively compared using multiple image quality assessment measures, including mean square error, the natural image quality evaluator, structural similarity, and peak signal-to-noise ratio.
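For orientation, toy versions of the three traditional categories the survey names look like the sketch below; the parameter values are arbitrary and the input file name is hypothetical, so this illustrates the categories rather than any specific surveyed method.

```python
import cv2
import numpy as np

img = cv2.imread("low_light.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# 1. Gray-level transformation: gamma correction brightens dark regions.
gamma = (np.power(img / 255.0, 0.5) * 255).astype(np.uint8)

# 2. Histogram equalization: contrast-limited adaptive variant (CLAHE).
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(img)

# 3. Retinex: reflectance = log(image) - log(estimated illumination).
f = img.astype(np.float64) + 1.0
illum = cv2.GaussianBlur(f, (0, 0), sigmaX=30)   # smooth illumination estimate
retinex = np.log(f) - np.log(illum)
retinex = cv2.normalize(retinex, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```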
Affiliation(s)
- Jiawei Guo
- Department of Computer Science, University of Liverpool, Liverpool, UK
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Jieming Ma
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
- Corresponding author.
- Ángel F. García-Fernández
- Department of Electrical Engineering and Electronics, University of Liverpool, Liverpool, UK
- ARIES research center, Universidad Antonio de Nebrija, Madrid, Spain
- Yungang Zhang
- School of Information Science, Yunnan Normal University, Kunming, China
- Haining Liang
- School of Advanced Technology, Xi'an Jiaotong-Liverpool University (XJTLU), Suzhou, China
3
Fernisha SR, Christopher CS, Lyernisha SR. Slender Swarm Flamingo optimization-based residual low-light image enhancement network. The Imaging Science Journal 2023. [DOI: 10.1080/13682199.2022.2161156]
Affiliation(s)
- S. R. Fernisha
- Information and Communication Engineering, St. Xavier's Catholic College of Engineering, Nagercoil, India
- C. Seldev Christopher
- Computer Science and Engineering, St. Xavier's Catholic College of Engineering, Nagercoil, India
- S. R. Lyernisha
- Information and Communication Engineering, St. Xavier's Catholic College of Engineering, Nagercoil, India
4
Chen Y, Wan M, Xu Y, Cao X, Zhang X, Chen Q, Gu G. Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy. J Opt Soc Am A Opt Image Sci Vis 2022;39:2257-2270. [PMID: 36520746; DOI: 10.1364/josaa.473908]
Abstract
Infrared and visible image fusion aims to reconstruct fused images with comprehensive visual information by merging the complementary features of source images captured by different imaging sensors. This technology has been widely used in civil and military fields, such as urban security monitoring, remote sensing measurement, and battlefield reconnaissance. However, existing methods still suffer from preset fusion strategies that cannot adapt to different fusion demands, and from information loss during feature propagation, leading to poor generalization ability and limited fusion performance. Therefore, in this paper we propose an unsupervised end-to-end network with a learnable fusion strategy for infrared and visible image fusion. The presented network consists of three parts: the feature extraction module, the fusion strategy module, and the image reconstruction module. First, to preserve more information during feature propagation, dense connections and residual connections are applied to the feature extraction module and the image reconstruction module, respectively. Second, a new convolutional neural network is designed to adaptively learn the fusion strategy, which enhances the generalization ability of our algorithm. Third, because fusion tasks lack ground truth, a loss function consisting of saliency loss and detail loss is used to guide the training direction and balance the retention of different types of information. Finally, the experimental results verify that the proposed algorithm delivers competitive performance compared with several state-of-the-art algorithms in terms of both subjective and objective evaluations. Our codes are available at https://github.com/MinjieWan/Unsupervised-end-to-end-infrared-and-visible-image-fusion-network-using-learnable-fusion-strategy.
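The core idea of a learnable fusion strategy can be sketched in a few lines of PyTorch: a small CNN predicts per-pixel weights that blend the two modalities' feature maps, replacing a fixed rule. This is only an illustration of the concept, not the authors' architecture; see their repository above for the real code.

```python
import torch
import torch.nn as nn

class FusionWeightNet(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2, 3, padding=1),  # one weight map per source
        )

    def forward(self, feat_ir, feat_vis):
        w = torch.softmax(self.net(torch.cat([feat_ir, feat_vis], dim=1)), dim=1)
        # Adaptive per-pixel blend instead of a fixed rule such as averaging.
        return w[:, 0:1] * feat_ir + w[:, 1:2] * feat_vis

fuser = FusionWeightNet()
fused = fuser(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```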
5
Song M, Li R, Guo R, Ding G, Wang Y, Wang J. Single image dehazing algorithm based on optical diffraction deep neural networks. Opt Express 2022;30:24394-24406. [PMID: 36236995; DOI: 10.1364/oe.458610]
Abstract
Single image dehazing is a challenging task because of the hue and brightness distortions caused by atmospheric scattering. These distortions limit the perceptual fidelity, as well as the information integrity, of a given image. In this paper, we propose an image dehazing method based on optical diffraction deep neural networks, which perform dehazing by simulating optical diffraction. The network is trained on a large number of hazy images and their corresponding clean images. The experimental results demonstrate that the proposed method reaches an advanced level on both the PSNR and SSIM dehazing performance indicators, while requiring less computation than most artificial neural networks.
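The two quality indicators cited here are straightforward to reproduce; a minimal sketch with scikit-image follows (the file names are hypothetical).

```python
import cv2
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

dehazed = cv2.imread("dehazed.png")        # hypothetical file names
truth = cv2.imread("ground_truth.png")

# PSNR in dB; higher is better. SSIM in [0, 1]; closer to 1 is better.
print("PSNR:", peak_signal_noise_ratio(truth, dehazed))
print("SSIM:", structural_similarity(truth, dehazed, channel_axis=2))
```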
6
Deng X, Dragotti PL. Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion. IEEE Trans Pattern Anal Mach Intell 2021;43:3333-3348. [PMID: 32248098; DOI: 10.1109/tpami.2020.2984244]
Abstract
In this paper, we propose a novel deep convolutional neural network to solve the general multi-modal image restoration (MIR) and multi-modal image fusion (MIF) problems. Unlike other deep-learning-based methods, our network architecture is designed by drawing inspiration from a newly proposed multi-modal convolutional sparse coding (MCSC) model. The key feature of the proposed network is that it can automatically separate the common information shared among different modalities from the unique information that belongs to each single modality; it is therefore denoted CU-Net, i.e., common and unique information splitting network. Specifically, the CU-Net is composed of three modules: the unique feature extraction module (UFEM), the common feature preservation module (CFPM), and the image reconstruction module (IRM). The architecture of each module is derived from the corresponding part of the MCSC model, which consists of several learned convolutional sparse coding (LCSC) blocks. Extensive numerical results verify the effectiveness of our method on a variety of MIR and MIF tasks, including RGB-guided depth image super-resolution, flash-guided non-flash image denoising, and multi-focus and multi-exposure image fusion.
7
Liu R, Liu J, Jiang Z, Fan X, Luo Z. A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion. IEEE Trans Image Process 2020;30:1261-1274. [PMID: 33315564; DOI: 10.1109/tip.2020.3043125]
Abstract
Image fusion plays a critical role in a variety of vision and learning applications. Current fusion approaches are designed to characterize source images for a particular type of fusion task and are limited in wider scenarios. Moreover, fixed fusion strategies (e.g., weighted averaging, choose-max) cannot handle challenging fusion tasks, so undesirable artifacts easily emerge in their fused results. In this paper, we propose a generic image fusion method with a bilevel optimization paradigm, targeting multi-modality image fusion tasks. An alternating optimization is conducted on components decoupled from the source images. Via adaptive integration weight maps, we obtain a flexible fusion strategy across multi-modality images. We successfully applied it to three types of image fusion tasks, including infrared and visible, computed tomography and magnetic resonance imaging, and magnetic resonance imaging and single-photon emission computed tomography image fusion. Results highlight the performance and versatility of our approach from both quantitative and qualitative aspects.
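For reference, the two fixed rules the abstract contrasts against, in their simplest per-pixel numpy form (the inputs here are random stand-ins for decomposed source components):

```python
import numpy as np

def fuse_weighted_average(a, b, w=0.5):
    # Constant blend of the two sources.
    return w * a + (1 - w) * b

def fuse_choose_max(a, b):
    # Keep whichever source has the larger absolute response per pixel.
    return np.where(np.abs(a) >= np.abs(b), a, b)

a = np.random.rand(64, 64)   # stand-ins for two modality components
b = np.random.rand(64, 64)
avg = fuse_weighted_average(a, b)
mx = fuse_choose_max(a, b)
```

The paper's adaptive weight maps generalize the first rule by letting w vary per pixel and per task instead of being fixed in advance.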
8
Zhang YD, Dong Z, Wang SH, Yu X, Yao X, Zhou Q, Hu H, Li M, Jiménez-Mesa C, Ramirez J, Martinez FJ, Gorriz JM. Advances in multimodal data fusion in neuroimaging: Overview, challenges, and novel orientation. Information Fusion 2020;64:149-187. [PMID: 32834795; PMCID: PMC7366126; DOI: 10.1016/j.inffus.2020.07.006]
Abstract
Multimodal fusion in neuroimaging combines data from multiple imaging modalities to overcome the fundamental limitations of individual modalities. Neuroimaging fusion can achieve higher temporal and spatial resolution, enhance contrast, correct imaging distortions, and bridge physiological and cognitive information. In this study, we analyzed over 450 references from PubMed, Google Scholar, IEEE, ScienceDirect, Web of Science, and other sources published from 1978 to 2020. We provide a review that encompasses (1) an overview of current challenges in multimodal fusion, (2) the current medical applications of fusion for specific neurological diseases, (3) strengths and limitations of available imaging modalities, (4) fundamental fusion rules, (5) fusion quality assessment methods, and (6) the applications of fusion for atlas-based segmentation and quantification. Overall, multimodal fusion shows significant benefits in clinical diagnosis and neuroscience research. Widespread education and further research amongst engineers, researchers, and clinicians will benefit the field of multimodal neuroimaging.
Affiliation(s)
- Yu-Dong Zhang
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Zhengchao Dong
- Department of Psychiatry, Columbia University, USA
- New York State Psychiatric Institute, New York, NY 10032, USA
- Shui-Hua Wang
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- School of Architecture, Building and Civil Engineering, Loughborough University, Loughborough, LE11 3TU, UK
- School of Mathematics and Actuarial Science, University of Leicester, LE1 7RH, UK
- Xiang Yu
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Xujing Yao
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Qinghua Zhou
- School of Informatics, University of Leicester, Leicester, LE1 7RH, Leicestershire, UK
- Hua Hu
- Department of Psychiatry, Columbia University, USA
- Department of Neurology, The Second Affiliated Hospital of Soochow University, China
- Min Li
- Department of Psychiatry, Columbia University, USA
- School of Internet of Things, Hohai University, Changzhou, China
- Carmen Jiménez-Mesa
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Javier Ramirez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Francisco J Martinez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Juan Manuel Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
- Department of Psychiatry, University of Cambridge, Cambridge CB2 1TN, UK
9
Zhang L, Yin Z, Zhao K, Tian H. Lane detection in dense fog using a polarimetric dehazing method. Appl Opt 2020;59:5702-5707. [PMID: 32609693; DOI: 10.1364/ao.391840]
Abstract
Lane detection is crucial for driver assistance systems. However, road scenes are severely degraded in dense fog, which causes many lane detection methods to lose robustness. To address this problem, an end-to-end method combining polarimetric dehazing and lane detection is proposed in this paper. From dense-fog images captured by a vehicle-mounted monochrome polarization camera, the darkest and brightest images are synthesized. Then, the airlight degree of polarization is estimated from the angle of polarization, and the airlight is optimized by guided filtering to facilitate lane detection. After dehazing, lane detection is carried out with a Canny operator and the Hough transform. Besides helping achieve good lane detection results in dense fog, the proposed dehazing method is adaptive and computationally efficient. In general, this paper provides a valuable reference for driving safety in dense fog.
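Two pieces named in the abstract can be sketched directly: the standard degree-of-polarization formula applied to the synthesized brightest/darkest images, and the Canny-plus-Hough detection stage. The AoP-based airlight estimation and guided filtering are simplified away, and the thresholds are arbitrary, so this is a sketch of the pipeline's shape rather than the paper's method.

```python
import cv2
import numpy as np

def degree_of_polarization(i_max, i_min):
    # DoP from the synthesized brightest/darkest polarization images.
    return (i_max - i_min) / np.maximum(i_max + i_min, 1e-6)

def detect_lanes(dehazed_gray):
    edges = cv2.Canny(dehazed_gray, 50, 150)
    # Probabilistic Hough transform returns line segments (x1, y1, x2, y2).
    return cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=20)
```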
10
Jung H, Kim Y, Jang H, Ha N, Sohn K. Unsupervised Deep Image Fusion with Structure Tensor Representations. IEEE Trans Image Process 2020;29:3845-3858. [PMID: 31976896; DOI: 10.1109/tip.2020.2966075]
Abstract
Convolutional neural networks (CNNs) have facilitated substantial progress on various problems in computer vision and image processing. However, applying them to image fusion has remained challenging due to the lack of labelled data for supervised learning. This paper introduces a deep image fusion network (DIF-Net), an unsupervised deep learning framework for image fusion. The DIF-Net parameterizes the entire image fusion process, comprising feature extraction, feature fusion, and image reconstruction, with a CNN. The purpose of DIF-Net is to generate an output image whose contrast matches that of the high-dimensional input images. To realize this, we propose an unsupervised loss function based on the structure tensor representation of multi-channel image contrast. Unlike traditional fusion methods that involve time-consuming optimization or iterative procedures, our loss function is minimized by a stochastic deep learning solver with large-scale examples. Consequently, the proposed method can produce fused images that preserve source-image details through a single forward pass of a network trained without ground-truth reference labels. The proposed method has broad applicability to various image fusion problems, including multi-spectral, multi-focus, and multi-exposure image fusion. Quantitative and qualitative evaluations show that the proposed technique outperforms existing state-of-the-art approaches for various applications.
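For intuition, the multi-channel structure tensor underlying such a contrast representation can be sketched in a few lines of numpy: per-channel gradients are aggregated into one 2x2 tensor per pixel whose trace measures local contrast across all channels. This illustrates the representation, not the DIF-Net loss itself.

```python
import numpy as np

def structure_tensor(img):  # img: (H, W, C) float array
    j11 = np.zeros(img.shape[:2])
    j12 = np.zeros(img.shape[:2])
    j22 = np.zeros(img.shape[:2])
    for c in range(img.shape[2]):
        gy, gx = np.gradient(img[..., c])   # row (y) and column (x) derivatives
        j11 += gx * gx
        j12 += gx * gy
        j22 += gy * gy
    return j11, j12, j22

j11, j12, j22 = structure_tensor(np.random.rand(64, 64, 4))
contrast = j11 + j22   # tensor trace: total squared gradient magnitude
```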
11
Du J, Li W, Tan H. Three-Layer Image Representation by an Enhanced Illumination-Based Image Fusion Method. IEEE J Biomed Health Inform 2019;24:1169-1179. [PMID: 31352358; DOI: 10.1109/jbhi.2019.2930978]
Abstract
Recently developed multiscale fusion methods can be improved in two ways: with an advanced image decomposition scheme and with an advanced fusion rule. In this paper, a method based on three-layer image decomposition and an enhanced illumination fusion rule is proposed. The proposed method comprises three steps. First, each input image is decomposed into its corresponding smooth, texture, and edge layers using defined local extrema and low-pass filters in the spatial domain. Second, three different strategies are applied as fusion rules for the three-layer representation. To preserve the illumination closely related to tumors, the illumination is corrected by applying a higher contrast to the decomposed image details, including the texture and edge inputs, such as those found in grayscale CT and MRI images. The final fused image is created by adding the normalized smooth, texture, and edge image layers. The experiments demonstrate that the proposed method performs better than existing state-of-the-art fusion methods.
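One plausible, simplified reading of such a three-layer split uses two low-pass scales so that the layers sum back exactly to the input; the paper's actual local-extrema-based scheme is more elaborate, so treat this purely as an illustration of the layering idea (the file name is hypothetical).

```python
import cv2
import numpy as np

img = cv2.imread("ct_slice.png", cv2.IMREAD_GRAYSCALE).astype(np.float64)

smooth = cv2.GaussianBlur(img, (0, 0), sigmaX=8)   # large-scale base layer
mid = cv2.GaussianBlur(img, (0, 0), sigmaX=2)
edge = mid - smooth                                # mid-scale structure/edges
texture = img - mid                                # fine-scale texture
recon = smooth + edge + texture                    # exact reconstruction: == img
```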
12
Yang Y, Wu J, Huang S, Fang Y, Lin P, Que Y. Multimodal Medical Image Fusion Based on Fuzzy Discrimination With Structural Patch Decomposition. IEEE J Biomed Health Inform 2019;23:1647-1660. [DOI: 10.1109/jbhi.2018.2869096]
13
Wang X, Yin L, Gao M, Wang Z, Shen J, Zou G. Denoising Method for Passive Photon Counting Images Based on Block-Matching 3D Filter and Non-Subsampled Contourlet Transform. Sensors (Basel) 2019;19:2462. [PMID: 31146456; PMCID: PMC6603648; DOI: 10.3390/s19112462]
Abstract
Multi-pixel photon counting detectors can produce images in low-light environments based on passive photon counting technology. However, the resulting images suffer from problems such as low contrast, low brightness, and an unknown noise distribution. To achieve a better visual effect, this paper describes a denoising and enhancement method based on a block-matching 3D filter and a non-subsampled contourlet transform (NSCT). First, the NSCT was applied to the original image and to a histogram-equalized image to obtain the sub-band low- and high-frequency coefficients. Regional energy and scale correlation rules were used to determine the respective coefficients. Adaptive single-scale Retinex enhancement was applied to the low-frequency components to improve image quality. The high-frequency sub-bands whose line features were best preserved were selected and processed using a sign function and the BayesShrink threshold. After applying the inverse transform, the fused photon counting image was processed with an improved block-matching 3D filter, significantly reducing the operation time. The final result of the proposed method was superior to those of comparative methods on several objective evaluation indices and exhibited good visual quality and detail preservation.
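The BayesShrink step generalizes beyond the NSCT; a minimal sketch on one detail sub-band of a plain wavelet transform (PyWavelets), with the usual robust median noise estimate, looks like this:

```python
import numpy as np
import pywt

def bayes_shrink_threshold(detail):
    # Noise std via the robust median estimator on detail coefficients,
    # then T = sigma_n^2 / sigma_x with sigma_x the clean-signal std estimate.
    sigma_n = np.median(np.abs(detail)) / 0.6745
    sigma_x = np.sqrt(max(detail.var() - sigma_n**2, 1e-12))
    return sigma_n**2 / sigma_x

img = np.random.rand(128, 128)             # stand-in for a photon-counting image
cA, (cH, cV, cD) = pywt.dwt2(img, "db2")   # one-level 2D DWT
t = bayes_shrink_threshold(cD)             # estimate on the diagonal band
cD_denoised = pywt.threshold(cD, t, mode="soft")   # soft shrinkage
```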
Affiliation(s)
- Xuan Wang
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
- Liju Yin
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
- Mingliang Gao
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
- Zhenzhou Wang
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
- Jin Shen
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
- Guofeng Zou
- School of Electrical and Electronic Engineering, Shandong University of Technology, Zibo 255000, Shandong, China
14
Saliency Detection Based on the Combination of High-Level Knowledge and Low-Level Cues in Foggy Images. Entropy (Basel) 2019;21:374. [PMID: 33267088; PMCID: PMC7514858; DOI: 10.3390/e21040374]
Abstract
A key issue in saliency detection for foggy images in the wild, e.g., for human tracking, is how to effectively define less obvious salient objects, the leading cause being that contrast and resolution are reduced by light scattering through fog particles. In this paper, to suppress the interference of fog and acquire the boundaries of salient objects more precisely, we present a novel saliency detection method for human tracking in the wild. Our method combines object contour detection with salient object detection. The proposed model can not only delineate object edges more precisely via contour detection but also ensure the integrity of salient objects, finally obtaining accurate saliency maps. First, the input image is transformed into HSV color space, and the amplitude spectrum (AS) of each color channel is adjusted to obtain the frequency-domain (FD) saliency map. Then, local-global superpixel contrast is calculated to obtain the spatial-domain (SD) saliency map. We use the Discrete Stationary Wavelet Transform (DSWT) to fuse the FD and SD cues. Finally, a fully convolutional encoder-decoder model is used to refine the contours of the salient objects. Experimental results demonstrate that the presented model removes the influence of fog efficiently and performs better than 16 state-of-the-art saliency models.
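The FD cue works by manipulating each channel's amplitude spectrum. The closely related classic spectral residual method (Hou and Zhang), sketched below, shows the general recipe of adjusting the log-amplitude spectrum while keeping the phase; it is a stand-in, not the paper's exact AS adjustment.

```python
import cv2
import numpy as np

def spectral_saliency(gray):
    f = np.fft.fft2(gray.astype(np.float64))
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    residual = log_amp - cv2.blur(log_amp, (3, 3))   # remove the smooth trend
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = cv2.GaussianBlur(sal, (0, 0), sigmaX=3)    # smooth the saliency map
    return cv2.normalize(sal, None, 0, 1, cv2.NORM_MINMAX)
```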
15
Li J, Yuan G, Fan H. Multifocus Image Fusion Using Wavelet-Domain-Based Deep CNN. Comput Intell Neurosci 2019;2019:4179397. [PMID: 30915109; PMCID: PMC6402241; DOI: 10.1155/2019/4179397]
Abstract
Multifocus image fusion merges images of the same scene captured with different focal settings into one all-in-focus image. Most existing fusion algorithms extract high-frequency information by designing local filters and then adopt different fusion rules to obtain the fused images. In this paper, a wavelet is used for multiscale decomposition of the source and fused images to obtain high-frequency and low-frequency images. To obtain clearer and more complete fusion results, a deep convolutional neural network learns the direct mapping between the high-frequency and low-frequency images of the source and fused images: two convolutional networks are trained to encode the high-frequency and low-frequency images, respectively. The experimental results show that the proposed method obtains a satisfactory fused image that is superior to those produced by some advanced image fusion algorithms in terms of both visual and objective evaluations.
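The wavelet decomposition step is easy to reproduce; below, classic hand-crafted rules (average the lows, keep the max-abs highs) stand in for the paper's learned CNN mapping, so the sketch shows the decomposition, not the learning.

```python
import numpy as np
import pywt

def wavelet_fuse(a, b, wavelet="db2", level=2):
    ca = pywt.wavedec2(a, wavelet, level=level)
    cb = pywt.wavedec2(b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2]                      # low-frequency: average
    for da, db in zip(ca[1:], cb[1:]):                 # each level: (cH, cV, cD)
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))   # high-frequency: max-abs
    return pywt.waverec2(fused, wavelet)

f = wavelet_fuse(np.random.rand(128, 128), np.random.rand(128, 128))
```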
Affiliation(s)
- Jinjiang Li
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
- Genji Yuan
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
- Hui Fan
- School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
16
Mahmood Z, Muhammad N, Bibi N, Malik YM, Ahmed N. Human visual enhancement using Multi Scale Retinex. Inform Med Unlocked 2018. [DOI: 10.1016/j.imu.2018.09.001]
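Multi Scale Retinex has a standard closed form: an equally weighted sum of single-scale Retinex outputs, log(I) - log(G_sigma * I), over several surround scales. A minimal sketch with the commonly used scale triple (15, 80, 250) follows; these are the literature's defaults, not necessarily this paper's settings.

```python
import cv2
import numpy as np

def multi_scale_retinex(img, sigmas=(15, 80, 250)):
    f = img.astype(np.float64) + 1.0
    msr = np.zeros_like(f)
    for s in sigmas:
        # Single-scale Retinex: log(image) - log(Gaussian surround).
        msr += np.log(f) - np.log(cv2.GaussianBlur(f, (0, 0), sigmaX=s))
    msr /= len(sigmas)
    return cv2.normalize(msr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```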
17
Image Enhancement for Surveillance Video of Coal Mining Face Based on Single-Scale Retinex Algorithm Combined with Bilateral Filtering. Symmetry (Basel) 2017. [DOI: 10.3390/sym9060093]
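A minimal sketch of the combination the title names: single-scale Retinex with an edge-preserving bilateral filter estimating the illumination, which reduces halos along strong edges. The parameters and file name are illustrative, not the paper's.

```python
import cv2
import numpy as np

frame = cv2.imread("mine_face.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input
f = frame.astype(np.float32) + 1.0
# Bilateral filter as the illumination estimate (edge-preserving, unlike Gaussian).
illum = cv2.bilateralFilter(f, d=9, sigmaColor=75, sigmaSpace=75)
ssr = np.log(f) - np.log(illum)                 # single-scale Retinex reflectance
out = cv2.normalize(ssr, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
```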
20
Banić N, Lončarić S. Smart light random memory sprays Retinex: a fast Retinex implementation for high-quality brightness adjustment and color correction. J Opt Soc Am A Opt Image Sci Vis 2015;32:2136-2147. [PMID: 26560928; DOI: 10.1364/josaa.32.002136]
Abstract
Removing the influence of illumination on image colors and adjusting the brightness across the scene are important image enhancement problems, addressed by applying adequate color constancy and brightness adjustment methods. One of the earliest models to deal with both problems was the Retinex theory. Some Retinex implementations tend to give high-quality results by performing local operations, but they are computationally relatively slow. One recent Retinex implementation is light random sprays Retinex (LRSR). In this paper, a new method is proposed for brightness adjustment and color correction that overcomes the main disadvantages of LRSR. There are three main contributions. First, a concept of memory sprays is proposed to reduce the number of LRSR's per-pixel operations to a constant regardless of the parameter values, thereby enabling fast Retinex-based local image enhancement. Second, an effective remapping of image intensities is proposed that results in significantly higher quality. Third, LRSR's halo effect is significantly reduced by using an alternative illumination processing method. The proposed method enables fast Retinex-based image enhancement by processing Retinex paths in a constant number of steps regardless of the path size. Due to the halo-effect removal and the remapping of the resulting intensities, the method outperforms many well-known image enhancement methods in terms of resulting image quality. The results are presented and discussed; it is shown that the proposed method outperforms most of the tested methods in terms of image brightness adjustment, color correction, and computational speed.
21
An Uneven Illumination Correction Algorithm for Optical Remote Sensing Images Covered with Thin Clouds. Remote Sens 2015. [DOI: 10.3390/rs70911848]
22
Luo Y, Guan YP. Structural compensation enhancement method for nonuniform illumination images. Appl Opt 2015;54:2929-2938. [PMID: 25967209; DOI: 10.1364/ao.54.002929]
Abstract
A structural compensation enhancement method is proposed to address the enhancement of nonuniformly illuminated images. A logarithmic histogram equalization transformation (LHET) is developed to improve image contrast and adjust the luminance distribution. A structural map of illumination compensation is produced with a local ambient-light estimation filter. The enhanced image is obtained by nonlinearly fusing the LHET result, the reflection component, and the structural map of illumination compensation. Unlike existing techniques, the proposed method can adjust brightness in both directions. Furthermore, it can effectively enhance nonuniformly illuminated images while balancing visibility and naturalness. Extensive experimental comparisons with state-of-the-art methods show the superior performance of the proposed method.