1. Jia Y, Yu W, Zhao L. Generative adversarial networks with texture recovery and physical constraints for remote sensing image dehazing. Sci Rep 2024; 14:31426. PMID: 39733123. DOI: 10.1038/s41598-024-83088-x. Received 14 Aug 2024; accepted 11 Dec 2024.
Abstract
The scattering of tiny particles in the atmosphere causes a haze effect on remote sensing images captured by satellites and similar devices, significantly disrupting subsequent image recognition and classification. A generative adversarial network with texture recovery and physical constraints, named TRPC-GAN, is proposed to mitigate this impact. The network not only effectively removes haze but also better preserves the texture information of the original remote sensing image, thereby enhancing the visual quality of the dehazed image. A multi-scale module is proposed to extract feature information from remote sensing images, capturing image features across different receptive fields, and an attention module is designed to further guide the network's focus toward important features. In addition, a multi-scale adversarial network is proposed to better restore both the global and local information of the original image. A physical-constraint loss term is introduced into the original GAN loss, allowing better preservation of the physical characteristics of remote sensing images. Simulation experiments on synthetic and natural hazy remote sensing image datasets demonstrate that the dehazing performance of TRPC-GAN surpasses that of the four comparison methods.
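As an illustration of the physical-constraint idea, here is a minimal numpy sketch assuming the standard atmospheric scattering model I = J·t + A·(1 − t); the weighting coefficient and the exact formulation are illustrative assumptions, not the paper's loss.

```python
import numpy as np

def physical_constraint_loss(hazy, dehazed, transmission, airlight):
    """Penalty for violating the atmospheric scattering model
    I = J * t + A * (1 - t): re-haze the dehazed estimate and
    compare it with the observed hazy input."""
    rehazed = dehazed * transmission + airlight * (1.0 - transmission)
    return float(np.mean((hazy - rehazed) ** 2))

def generator_loss(adversarial_loss, hazy, dehazed, transmission, airlight,
                   lam=0.5):
    # Combined objective: adversarial term plus the physics term,
    # weighted by a hypothetical coefficient lam.
    return adversarial_loss + lam * physical_constraint_loss(
        hazy, dehazed, transmission, airlight)
```

A dehazed estimate that exactly satisfies the scattering model incurs zero physics penalty, so the term only activates when the generator's output is physically implausible.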
Affiliation(s)
- Yanfei Jia
- College of Electrical and Electronic Information Engineering, Beihua University, Jilin, 132013, China
- Wenshuo Yu
- Department of Instrument Science and Electrical Engineering, Jilin University, Changchun, 130061, China
- Liquan Zhao
- College of Electrical Engineering, Northeast Electric Power University, Jilin, 132012, China
2. Wang L, Li K, Dong C, Shen K, Mu Y. Dynamic structure-aware modulation network for underwater image super-resolution. Biomimetics (Basel) 2024; 9:774. PMID: 39727778. DOI: 10.3390/biomimetics9120774. Received 18 Oct 2024; revised 12 Dec 2024; accepted 17 Dec 2024.
Abstract
Underwater image super-resolution (SR) is a formidable challenge due to the intricacies of the underwater environment, such as light absorption, scattering, and color distortion. Many deep learning methods have provided a substantial performance boost for SR; nevertheless, they are not only computationally expensive but also often lack the flexibility to adapt to severely degraded image statistics. To counteract these issues, we propose a dynamic structure-aware modulation network (DSMN) for efficient and accurate underwater SR. A Mixed Transformer incorporates a structure-aware Transformer block and a multi-head Transformer block, comprehensively utilizing local structural attributes and global features to enhance the details of underwater image restoration. A dynamic information modulation module (DIMM) then adaptively modulates the output of the Mixed Transformer with weights derived from input statistics to highlight important information. Further, a hybrid-attention fusion module (HAFM) adopts spatial and channel interaction to aggregate finer features, facilitating high-quality underwater image reconstruction. Extensive experiments on benchmark datasets show that the proposed DSMN surpasses well-known SR methods on quantitative and qualitative metrics while requiring less computation.
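A toy sketch of statistics-driven modulation of the kind the abstract describes: a per-channel statistic of the input is squashed into a gating weight that rescales the features. The choice of statistic (spatial standard deviation) and the sigmoid gate are illustrative assumptions, not DIMM's actual design.

```python
import numpy as np

def dynamic_modulation(features):
    """Rescale each channel by a weight derived from its own input
    statistics, so more informative (higher-variance) channels are
    emphasized. features: (channels, height, width)."""
    stats = features.std(axis=(1, 2))        # per-channel statistic
    weights = 1.0 / (1.0 + np.exp(-stats))   # sigmoid gating in (0, 1)
    return features * weights[:, None, None]
```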
Affiliation(s)
- Li Wang
- School of Computer and Software, Nanjing Vocational University of Industry Technology, Nanjing 210023, China
- Ke Li
- School of Mechanical and Electrical Engineering, Nanchang Institute of Technology, Nanchang 330044, China
- Chengang Dong
- School of Computer and Software, Nanjing Vocational University of Industry Technology, Nanjing 210023, China
- School of Mathematics and Statistics, Huangshan University, Huangshan 245021, China
- Keyong Shen
- School of Computer Information and Engineering, Nanchang Institute of Technology, Nanchang 330044, China
- Yang Mu
- School of Computer Information and Engineering, Nanchang Institute of Technology, Nanchang 330044, China
3. Zhang Y, Chandler DM, Leszczuk M. Retinex-based underwater image enhancement via adaptive color correction and hierarchical U-shape transformer. Opt Express 2024; 32:24018-24040. PMID: 39538853. DOI: 10.1364/oe.523951. Received 25 Mar 2024; accepted 10 Jun 2024.
Abstract
Underwater images can suffer from visibility and quality degradation due to the attenuation of propagated light and other factors unique to the underwater setting. While Retinex-based approaches have been shown to be effective in enhancing underwater image quality, their reliance on hand-crafted priors and optimization-driven solutions often prevents these methods from adapting to different types of underwater images. Moreover, the white balance strategy commonly used in the preprocessing stage of underwater image enhancement (UIE) algorithms may introduce unwanted color distortions because it does not account for wavelength-dependent light absorption. To overcome these limitations, we present an effective UIE model based on adaptive color correction and data-driven Retinex decomposition. Specifically, an adaptive color balance approach that accounts for the different attenuation levels of light at different wavelengths is proposed to adaptively enhance the three color channels. Furthermore, deep neural networks are employed for the Retinex decomposition, formulating the optimization problem as an implicit-prior-regularized model that is solved by learning the priors from a large training dataset. Finally, a hierarchical U-shape Transformer network, which uses hierarchically structured multi-scale feature extraction and selective feature aggregation, is applied to the decomposed images for contrast enhancement and blur reduction. Experimental results on six benchmark underwater image datasets demonstrate the effectiveness of the proposed UIE model.
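A minimal sketch of a wavelength-aware color balance: rather than pulling every channel to a common gray mean (classic white balance), each channel is scaled relative to the green channel, which is attenuated least underwater. The green reference and the simple gain rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def adaptive_color_balance(img):
    """Per-channel gain correction using the green channel as the
    attenuation reference. img: float RGB array in [0, 1], (H, W, 3)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means[1] / np.maximum(means, 1e-6)   # green as reference
    return np.clip(img * gains, 0.0, 1.0)
```

Because red light is absorbed fastest, its channel mean is typically lowest, so this rule boosts red the most, countering the blue-green cast.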
4. Zheng S, Wang R, Chen G, Huang Z, Teng Y, Wang L, Liu Z. Underwater image enhancement using Divide-and-Conquer network. PLoS One 2024; 19:e0294609. PMID: 38442130. PMCID: PMC10914272. DOI: 10.1371/journal.pone.0294609. Received 22 May 2023; accepted 5 Nov 2023.
Abstract
Underwater image enhancement is increasingly needed, both for a better visual experience and for information extraction. However, underwater images often suffer from a mixture of color distortion and blur caused by the external environment (light attenuation, background noise, and the type of water). To address this, we design a Divide-and-Conquer network (DC-net) for underwater image enhancement, consisting mainly of a texture network, a color network, and a refinement network. Specifically, a multi-axis attention block in the texture network combines different region/channel features into a single-stream structure, while the color network employs an adaptive 3D lookup table (LUT) method to obtain color-enhanced results. The refinement network then focuses on the image features of the ground truth. Compared with state-of-the-art (SOTA) underwater image enhancement methods, the proposed method obtains better visual quality and better qualitative and quantitative performance. The code is publicly available at https://github.com/zhengshijian1993/DC-Net.
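A bare-bones sketch of the 3D-LUT color mapping idea underlying the color network: each RGB color indexes into a learned lattice of output colors. Nearest-lattice indexing is used here for brevity; a practical implementation (and the adaptive variant in the paper) would interpolate trilinearly over the eight surrounding lattice points and predict the LUT weights from the image.

```python
import numpy as np

def apply_3d_lut(img, lut):
    """Map RGB colors through a 3D lookup table by nearest-lattice
    indexing. lut: (S, S, S, 3) lattice of output colors; img: float
    RGB array with values in [0, 1]."""
    size = lut.shape[0]
    idx = np.clip(np.rint(img * (size - 1)).astype(int), 0, size - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```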
Affiliation(s)
- Shijian Zheng
- Department of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan, China
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Rujing Wang
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Department of Information Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Guo Chen
- Department of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan, China
- Zhiliang Huang
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Department of Information Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Yue Teng
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Department of Information Engineering, University of Science and Technology of China, Hefei, Anhui, China
- Liusan Wang
- Institute of Intelligent Machines, Hefei Institutes of Physical Science, Chinese Academy of Sciences, Hefei, Anhui, China
- Zhigui Liu
- Department of Information Engineering, Southwest University of Science and Technology, Mianyang, Sichuan, China
5. Xiao X, Gao X, Hui Y, Jin Z, Zhao H. INAM-based image-adaptive 3D LUTs for underwater image enhancement. Sensors (Basel) 2023; 23:2169. PMID: 36850767. PMCID: PMC9965914. DOI: 10.3390/s23042169. Received 24 Dec 2022; revised 24 Jan 2023; accepted 31 Jan 2023.
Abstract
To the best of our knowledge, applying adaptive three-dimensional lookup tables (3D LUTs) to underwater image enhancement is an unprecedented attempt, and it achieves excellent enhancement results compared with other methods. However, in the image weight prediction process the model uses Instance Normalization, which significantly reduces the standard deviation of the features and thus degrades network performance. To address this issue, we propose an Instance Normalization Adaptive Modulator (INAM) that amplifies the pixel bias by adaptively predicting modulation factors, and we introduce INAM into the learned image-adaptive 3D LUTs for underwater image enhancement. The bias amplification strategy in INAM makes the edge information in the features more distinguishable, so the adaptive 3D LUTs with INAM substantially improve underwater image enhancement performance. Extensive experiments demonstrate the effectiveness of the proposed method.
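A sketch of the problem the abstract identifies and the general shape of the remedy: plain instance normalization strips each channel's mean and variance, and a modulation step can re-inject a scaled copy of the discarded statistics. The fixed gamma and beta below are stand-ins for the adaptively predicted modulation factors in INAM, not the paper's formulation.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    """Plain instance normalization over the spatial axes: removes each
    channel's mean and divides by its standard deviation (the
    std-shrinking behavior the paper identifies as harmful).
    x: (channels, height, width)."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True)
    return (x - mean) / (std + eps)

def modulated_instance_norm(x, gamma=1.5, beta=1.0, eps=1e-5):
    """Illustrative modulation step: re-inject a scaled copy of the
    per-channel mean that instance normalization discarded, so the
    bias is amplified rather than erased."""
    mean = x.mean(axis=(1, 2), keepdims=True)
    return instance_norm(x, eps) * gamma + beta * mean
```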
Affiliation(s)
- Xiao Xiao
- State Key Laboratory of CEMEE, Luoyang 471000, China
- School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Science and Technology on Complex System Control and Intelligent Agent Cooperation Laboratory, Beijing Electro-Mechanical Engineering Institute, Beijing 100074, China
- Xingzhi Gao
- School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Yilong Hui
- School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Zhiling Jin
- School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Hongyu Zhao
- State Key Laboratory of CEMEE, Luoyang 471000, China
6. Wang N, Chen T, Liu S, Wang R, Karimi HR, Lin Y. Deep learning-based visual detection of marine organisms: a survey. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.02.018.