1
Huang F, Huang W, Wu X. Enhancing Infrared Optical Flow Network Computation through RGB-IR Cross-Modal Image Generation. Sensors (Basel). 2024;24:1615. PMID: 38475150. DOI: 10.3390/s24051615. Received: 11/13/2023; Revised: 01/22/2024; Accepted: 02/22/2024; Indexed: 03/14/2024.
Abstract
Because capturing real optical flow is difficult, no existing work has produced ground-truth optical flow for infrared (IR) images, which confines the research and application of deep learning-based optical flow computation to RGB images. In this paper, we therefore propose a method to produce an optical flow dataset of IR images. We use an RGB-IR cross-modal image transformation network to convert existing RGB optical flow datasets into the IR modality. The transformation network is based on an improved Pix2Pix implementation and is validated and evaluated on the RGB-IR aligned bimodal dataset M3FD. We then apply the RGB-IR transformation to the existing RGB optical flow dataset KITTI and train an optical flow network on the IR images generated by the transformation. Finally, we analyze the results of the optical flow network before and after this training using the RGB-IR aligned bimodal data.
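The abstract compares an optical flow network before and after training on the generated IR frames. The standard metric for such comparisons is the average endpoint error (EPE); the paper does not give its evaluation code, so the following is a minimal illustrative sketch in plain Python, with flow fields represented as lists of per-pixel (u, v) vectors:

```python
import math

def average_epe(flow_pred, flow_gt):
    """Average endpoint error between a predicted and a ground-truth
    flow field, each a list of (u, v) displacement vectors per pixel."""
    total = 0.0
    for (u_p, v_p), (u_g, v_g) in zip(flow_pred, flow_gt):
        total += math.hypot(u_p - u_g, v_p - v_g)  # Euclidean error per pixel
    return total / len(flow_pred)

# Two pixels: the first is predicted exactly, the second is off by 2.
pred = [(1.0, 0.0), (0.0, 2.0)]
gt = [(1.0, 0.0), (0.0, 0.0)]
print(average_epe(pred, gt))  # 1.0
```

A lower EPE on real IR sequences after fine-tuning on the generated data is what would support the paper's claim.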
Affiliation(s)
- Feng Huang
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Wei Huang
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
- Xianyu Wu
- School of Mechanical Engineering and Automation, Fuzhou University, Fuzhou 350108, China
2
Wang F, Qian W, Qian Y, Ma C, Zhang H, Wang J, Wan M, Ren K. Maritime Infrared Small Target Detection Based on the Appearance Stable Isotropy Measure in Heavy Sea Clutter Environments. Sensors (Basel). 2023;23:9838. PMID: 38139684. PMCID: PMC10747984. DOI: 10.3390/s23249838. Received: 10/29/2023; Revised: 11/29/2023; Accepted: 12/11/2023; Indexed: 12/24/2023.
Abstract
Infrared small target detection plays a crucial role in maritime security, but detecting small targets in heavy sea clutter remains challenging: existing methods often fail to deliver satisfactory performance under substantial clutter interference. This paper analyzes the spatio-temporal appearance characteristics of small targets and sea clutter and, based on this analysis, proposes a novel detection method built on the appearance stable isotropy measure (ASIM). First, the original images are processed with the Top-Hat transformation to obtain salient regions. Second, a preliminary threshold operation extracts candidate targets from these salient regions, forming a candidate target array image. Third, to distinguish small targets from sea clutter, we introduce two characteristics: the gradient histogram equalization measure (GHEM) and the local optical flow consistency measure (LOFCM). GHEM evaluates the isotropy of a candidate target from its gradient histogram equalization, while LOFCM assesses its appearance stability from local optical flow consistency. To combine the complementary information of GHEM and LOFCM, we propose ASIM as a fusion characteristic, which effectively enhances real targets. Finally, a threshold operation determines the final targets. Experimental results demonstrate that the proposed method achieves superior comprehensive performance compared to baseline methods.
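The first step of the pipeline above, the Top-Hat transformation, is a standard morphological operation: the image minus its opening (erosion followed by dilation), which keeps bright structures narrower than the structuring element. A minimal 1-D sketch in plain Python (the paper operates on 2-D images; the window size `k` here is an illustrative parameter):

```python
def erode(signal, k):
    # Grayscale erosion: minimum over a sliding window of width k.
    r = k // 2
    return [min(signal[max(0, i - r): i + r + 1]) for i in range(len(signal))]

def dilate(signal, k):
    # Grayscale dilation: maximum over a sliding window of width k.
    r = k // 2
    return [max(signal[max(0, i - r): i + r + 1]) for i in range(len(signal))]

def white_top_hat(signal, k):
    """White top-hat: signal minus its morphological opening.
    Bright features narrower than the window survive; the smooth
    background and wide bright regions are suppressed."""
    opened = dilate(erode(signal, k), k)
    return [s - o for s, o in zip(signal, opened)]

# A one-pixel bright target (value 9) on a slowly varying background:
row = [1, 1, 2, 9, 2, 1, 1]
print(white_top_hat(row, 3))  # [0, 0, 0, 7, 0, 0, 0]
```

The surviving spike is exactly the kind of salient region from which the candidate targets are then thresholded.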
Affiliation(s)
- Fan Wang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Weixian Qian
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Ye Qian
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Chao Ma
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- He Zhang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiajie Wang
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Minjie Wan
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
- Kan Ren
- School of Electronic and Optical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Jiangsu Key Laboratory of Spectral Imaging and Intelligent Sense, Nanjing University of Science and Technology, Nanjing 210094, China
3
Chen J, Qiu L, Zhu Z, Sun N, Huang H, Ip WH, Yung KL. An Adaptive Infrared Small-Target-Detection Fusion Algorithm Based on Multiscale Local Gradient Contrast for Remote Sensing. Micromachines. 2023;14:1552. PMID: 37630088. PMCID: PMC10456515. DOI: 10.3390/mi14081552. Received: 07/13/2023; Revised: 07/26/2023; Accepted: 07/27/2023; Indexed: 08/27/2023.
Abstract
Aerospace vehicles such as missiles and aircraft must be tracked at relatively long distances, where infrared (IR) detectors are used for small target detection. At such distances the target appears as a point, lacking contour, shape, and texture information, and high-brightness cloud edges and heavy noise further hinder detection against complex sky and ground backgrounds. Traditional template-based filtering and local contrast-based methods do not distinguish between different complex background environments: they apply a uniform small-target template or rely on absolute contrast differences, so they are prone to high false alarm rates. Detection and tracking methods for complex backgrounds and low signal-to-clutter ratios (SCRs) therefore need study. We use the complexity difference as a prior condition for detection against thick clouds and highlighted ground buildings. We then obtain salient regions with a joint algorithm combining spatial-domain filtering and improved local contrast. We also give a new definition of gradient uniformity by improving the local gradient method, which further enhances target contrast and helps distinguish small targets from highlighted background edges and noise. Furthermore, the method is amenable to parallel computing. Compared with traditional spatial filtering or local contrast algorithms, the flexible fusion strategy achieves rapid small target detection with higher signal-to-clutter ratio gain (SCRG) and background suppression factor (BSF).
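The gradient-uniformity idea in the abstract can be illustrated with a toy measure: a genuine point target is brighter than its surroundings in every direction, while a cloud edge is bright on one side only. The function below is a hypothetical simplification for illustration, not the paper's exact definition; `cell_means` is assumed to hold the mean intensities of the eight cells surrounding the candidate pixel:

```python
def gradient_uniformity(cell_means, center):
    """Toy gradient-uniformity score in [0, 1]: ratio of the weakest to
    the strongest directional gradient around the candidate. A point
    target yields gradients of similar (positive) magnitude in all
    directions (score near 1); a bright edge has near-zero or negative
    gradients on its bright side (score near 0)."""
    gradients = [center - m for m in cell_means]  # one gradient per direction
    g_min, g_max = min(gradients), max(gradients)
    if g_max <= 0:
        return 0.0  # not brighter than any surrounding cell
    return max(g_min, 0.0) / g_max

# Bright target centred on a uniform background: fully isotropic.
print(gradient_uniformity([2, 2, 2, 2, 2, 2, 2, 2], 10))  # 1.0
# Cloud edge: bright on three sides, dark on the rest: edge-like.
print(gradient_uniformity([10, 10, 10, 2, 2, 2, 2, 2], 10))  # 0.0
```

This is the kind of directional consistency check that lets a multiscale local gradient method reject highlighted background edges that pure contrast measures would pass.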
Affiliation(s)
- Juan Chen
- Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 200120, China
- University of Chinese Academy of Sciences, Beijing 100000, China
- Lin Qiu
- Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 200120, China
- University of Chinese Academy of Sciences, Beijing 100000, China
- Zhencai Zhu
- Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 200120, China
- University of Chinese Academy of Sciences, Beijing 100000, China
- Ning Sun
- Innovation Academy for Microsatellites of Chinese Academy of Sciences, Shanghai 200120, China
- Hao Huang
- Hubei Key Lab of Ferro & Piezoelectric Materials and Devices, Faculty of Physics and Electronic Science, Hubei University, Wuhan 430062, China
- Wai-Hung Ip
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong 100872, China
- School of Engineering, University of Saskatchewan, Saskatoon, SK S7K 0C8, Canada
- Kai-Leung Yung
- Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong 100872, China