1
Wang Y, Jin X, Yang J, Jiang Q, Tang Y, Wang P, Lee SJ. Color multi-focus image fusion based on transfer learning. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-211434]
Abstract
Multi-focus image fusion integrates the focused areas of a pair or set of source images of the same scene into a single, fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into the VGG-19 network, and the parameters of its convolutional layers are transferred to a feature-extraction network composed of multiple convolutional layers and skip connections. Second, initial decision maps are generated from the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined to obtain second-stage decision maps, according to which the source images are fused into initial fused images. Finally, the final fused image is selected by comparing the QABF metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of focused and unfocused areas in the source images, and the fused images outperform those of most comparison methods in both subjective and objective evaluations.
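The decision-map pipeline above can be sketched as follows. As a minimal stand-in for the VGG-19 deep features, a hand-crafted local-variance focus measure is used here; all function names and parameters are illustrative, not the paper's.

```python
import numpy as np

def focus_measure(img, ksize=7):
    """Local variance as a simple focus measure (a stand-in for the
    paper's VGG-19 deep features)."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + ksize, j:j + ksize].var()
    return out

def fuse_multifocus(src_a, src_b):
    """Binary decision map: take each pixel from the sharper source."""
    decision = focus_measure(src_a) >= focus_measure(src_b)
    return np.where(decision, src_a, src_b), decision

# Demo: the left half of `a` is in focus, the right half of `b` is.
rng = np.random.default_rng(0)
tex = rng.standard_normal((16, 16))
a = np.zeros((16, 16)); a[:, :8] = tex[:, :8]
b = np.zeros((16, 16)); b[:, 8:] = tex[:, 8:]
fused, decision = fuse_multifocus(a, b)
```

The paper additionally refines the decision map before fusing; this sketch stops at the initial map.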
Affiliation(s)
- Yun Wang
- School of Software, Yunnan University, Kunming, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming, China
- Xin Jin
- School of Software, Yunnan University, Kunming, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming, China
- Jie Yang
- School of Software, Yunnan University, Kunming, Yunnan, China
- School of Physics and Electronic Science, Zunyi Normal University, Zunyi, China
- Qian Jiang
- School of Software, Yunnan University, Kunming, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming, China
- Yue Tang
- School of Mathematics and Statistics, Yunnan University, Kunming, Yunnan, China
- Puming Wang
- School of Software, Yunnan University, Kunming, Yunnan, China
- Engineering Research Center of Cyberspace, Yunnan University, Kunming, China
- Shin-Jye Lee
- Institute of Technology Management, National Chiao Tung University, Hsinchu, Taiwan
2
Abstract
In recent years, convolutional neural networks (CNNs) have been widely used in image denoising for their high performance. One difficulty in applying CNNs to medical image denoising, such as speckle reduction in optical coherence tomography (OCT) images, is that a large amount of high-quality training data is required, an inherent limitation for OCT despeckling. Recently, deep image prior (DIP) networks have been proposed for image restoration without pre-training, since CNN structures have an intrinsic ability to capture the low-level statistics of a single image. However, DIP has difficulty balancing detail preservation against speckle suppression. Inspired by DIP, this paper proposes a sorted non-local statistic, which measures the signal autocorrelation in the difference between the reconstructed image and the input image, for OCT image restoration. By adding the sorted non-local statistic as a regularization loss in DIP learning, the CNN captures more low-level image statistics during OCT image restoration. The experimental results demonstrate the superior performance of the proposed method over other state-of-the-art despeckling methods in terms of objective metrics and visual quality.
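One plausible reading of such a regularizer (an illustrative assumption, not the paper's exact definition) can be sketched as follows: order pixels by the input's intensity so that similar but spatially distant pixels become neighbors, apply that ordering to the residual, and measure the residual's autocorrelation in this non-local order. For a pure-noise residual the statistic is near zero; leftover signal structure raises it, so it can be penalized as a loss.

```python
import numpy as np

def sorted_nonlocal_autocorr(recon, noisy, lag=1):
    """Lag-`lag` autocorrelation of the residual (recon - noisy),
    taken in the non-local order given by sorting the input's pixels
    by intensity. Near zero for a white-noise residual."""
    order = np.argsort(noisy.ravel())
    r = (recon - noisy).ravel()[order]
    r = r - r.mean()
    a, b = r[:-lag], r[lag:]
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return float((a * b).sum() / denom)

# A residual aligned with the ordering is highly autocorrelated;
# an independent-noise residual is not.
rng = np.random.default_rng(1)
noisy = rng.standard_normal((32, 32))
structured = sorted_nonlocal_autocorr(2 * noisy, noisy)
white = sorted_nonlocal_autocorr(noisy + rng.standard_normal((32, 32)), noisy)
```

In DIP training this scalar would be added to the reconstruction loss with a weighting factor.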
3
Wang K, Zheng M, Wei H, Qi G, Li Y. Multi-Modality Medical Image Fusion Using Convolutional Neural Network and Contrast Pyramid. Sensors (Basel, Switzerland) 2020; 20:2169. [PMID: 32290472] [PMCID: PMC7218740] [DOI: 10.3390/s20082169]
Abstract
Medical image fusion techniques fuse medical images from different modalities to make diagnosis more reliable and accurate, and they play an increasingly important role in many clinical applications. To obtain a fused image with high visual quality and clear structural detail, this paper proposes a medical image fusion algorithm based on a convolutional neural network (CNN). The algorithm uses a trained Siamese convolutional network to fuse the pixel-activity information of the source images and generate a weight map, while a contrast pyramid decomposes the source images. The source images are then integrated across the spatial frequency bands with a weighted fusion operator. Comparative experiments show that the proposed algorithm effectively preserves the detailed structural information of the source images and achieves good visual quality.
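The contrast-pyramid step can be sketched as below. Each level stores local contrast relative to the next-coarser level, C_k = G_k / expand(G_{k+1}) - 1, and reconstruction inverts that relation level by level. Nearest-neighbor down/upsampling stands in for the Gaussian filtering usually used (an assumption), and the Siamese-network weight map is omitted.

```python
import numpy as np

def _downsample(img):
    return img[::2, ::2]

def _upsample(img, shape):
    up = np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)
    return up[:shape[0], :shape[1]]

def contrast_pyramid(img, levels=3, eps=1e-6):
    """Decompose into `levels` contrast layers plus a coarse residual."""
    gauss = [img.astype(float)]
    for _ in range(levels):
        gauss.append(_downsample(gauss[-1]))
    pyr = []
    for k in range(levels):
        base = _upsample(gauss[k + 1], gauss[k].shape)
        pyr.append(gauss[k] / (base + eps) - 1.0)  # C_k
    pyr.append(gauss[levels])  # coarsest Gaussian level
    return pyr

def reconstruct(pyr, eps=1e-6):
    """Invert the decomposition: G_k = (C_k + 1) * expand(G_{k+1})."""
    img = pyr[-1]
    for c in reversed(pyr[:-1]):
        img = (c + 1.0) * (_upsample(img, c.shape) + eps)
    return img

rng = np.random.default_rng(2)
img = rng.random((16, 16)) + 0.5   # positive-valued test image
pyr = contrast_pyramid(img)
restored = reconstruct(pyr)
```

Because the same expansion operator is used in both directions, the decomposition here is exactly invertible, which is what lets per-band weighted fusion preserve structure.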
Collapse
Affiliation(s)
- Kunpeng Wang
- School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
- Robot Technology Used for Special Environment Key Laboratory of Sichuan Province, Mianyang 621010, China
- Mingyao Zheng
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Hongyan Wei
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Guanqiu Qi
- Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA
- Yuanyuan Li
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
4
Qi G, Chang L, Luo Y, Chen Y, Zhu Z, Wang S. A Precise Multi-Exposure Image Fusion Method Based on Low-level Features. Sensors (Basel, Switzerland) 2020; 20:1597. [PMID: 32182986] [PMCID: PMC7146174] [DOI: 10.3390/s20061597]
Abstract
Multi-exposure image fusion (MEF) provides a concise way to generate high-dynamic-range (HDR) images. Although existing MEF methods achieve precise fusion in static scenes, their ghost-removal performance varies across dynamic scenes. This paper proposes a precise MEF method based on feature patches (FPM) to improve the robustness of ghost removal in dynamic scenes. A reference image is first selected by a priori exposure quality and then used in a structure-consistency test to resolve the ghosting that arises in dynamic-scene MEF. The source images are decomposed into spatial-domain structures by a guided filter, and both the base and detail layers of the decomposed images are fused. Patch-structure decomposition and an appropriate exposure evaluation are integrated into the proposed solution, and both global and local exposure are optimized to improve fusion performance. Compared with six existing MEF methods, the proposed FPM not only improves the robustness of ghost removal in dynamic scenes, but also performs well in color saturation, image sharpness, and local detail processing.
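The base/detail fusion step can be sketched as follows, under loud assumptions: a mean (box) filter stands in for the paper's guided filter, a simple well-exposedness weight (closeness to mid-gray) stands in for its exposure evaluation, and the reference-image selection and ghost handling are omitted entirely.

```python
import numpy as np

def box_filter(img, ksize=5):
    """Mean filter: a stand-in for the guided filter's base layer."""
    pad = ksize // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(img.shape, dtype=float)
    for di in range(ksize):
        for dj in range(ksize):
            out += padded[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out / (ksize * ksize)

def fuse_exposures(images, sigma=0.2):
    """Weight each base layer by well-exposedness; max-fuse details."""
    bases = [box_filter(x) for x in images]
    details = [x - b for x, b in zip(images, bases)]
    weights = [np.exp(-((x - 0.5) ** 2) / (2 * sigma ** 2)) for x in images]
    wsum = np.sum(weights, axis=0) + 1e-12
    base = sum(w * b for w, b in zip(weights, bases)) / wsum
    stacked = np.stack(details)
    idx = np.argmax(np.abs(stacked), axis=0)       # strongest detail wins
    detail = np.take_along_axis(stacked, idx[None], axis=0)[0]
    return base + detail

# Demo: an underexposed and a well-exposed flat frame (values in [0, 1]).
under = np.full((8, 8), 0.1)
mid_img = np.full((8, 8), 0.5)
fused = fuse_exposures([under, mid_img])
```

The fused result is pulled toward the well-exposed frame, which is the intended effect of the exposure weighting.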
Affiliation(s)
- Guanqiu Qi
- Computer Information Systems Department, State University of New York at Buffalo State, Buffalo, NY 14222, USA
- Liang Chang
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yaqin Luo
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yinong Chen
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
- Zhiqin Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Shujuan Wang
- Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, Yunnan, China
5
Qi G, Wang H, Haner M, Weng C, Chen S, Zhu Z. Convolutional neural network based detection and judgement of environmental obstacle in vehicle operation. CAAI Transactions on Intelligence Technology 2019. [DOI: 10.1049/trit.2018.1045]
Affiliation(s)
- Guanqiu Qi
- Department of Mathematics & Computer and Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
- Huan Wang
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Matthew Haner
- Department of Mathematics & Computer and Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
- Chenjie Weng
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Sixin Chen
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhiqin Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
6
Li Y, Sun Y, Zheng M, Huang X, Qi G, Hu H, Zhu Z. A Novel Multi-Exposure Image Fusion Method Based on Adaptive Patch Structure. Entropy 2018; 20:935. [PMID: 33266659] [PMCID: PMC7512522] [DOI: 10.3390/e20120935]
Abstract
Multi-exposure image fusion methods fuse low-dynamic-range images taken of the same scene at different exposure levels. The fused images not only contain more color and detail information, but also reproduce the visual impression of direct observation by the human eye. This paper proposes a novel multi-exposure image fusion (MEF) method based on adaptive patch structure. The proposed algorithm combines image cartoon-texture decomposition, patch-structure decomposition, and the structural similarity index to improve local contrast, capturing more detail from the source images and producing more vivid high-dynamic-range (HDR) images. Specifically, image texture entropy is used to evaluate local image information for the adaptive selection of patch size. An intermediate fused image is obtained by the proposed structure-patch decomposition algorithm and then optimized with the structural similarity index to yield the final fused HDR image. Comparative experiments show that the proposed method obtains high-quality HDR images with better visual quality and more detail.
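The entropy-driven patch-size selection can be sketched as below. The rule and threshold are illustrative assumptions, not the paper's: high texture entropy suggests a detailed region that benefits from small patches, while low entropy suggests a smooth region where large patches suffice.

```python
import numpy as np

def texture_entropy(patch, bins=16):
    """Shannon entropy of the patch's intensity histogram (values in [0, 1])."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def adaptive_patch_size(region, sizes=(4, 8, 16), threshold=2.0):
    """High entropy -> small patches (preserve detail);
    low entropy -> large patches (smooth regions)."""
    return min(sizes) if texture_entropy(region) > threshold else max(sizes)

# Demo: a flat region vs. a highly textured one.
rng = np.random.default_rng(3)
flat = np.full((16, 16), 0.5)
textured = rng.random((16, 16))
```

A flat region has near-zero entropy and gets the largest patch size; a noisy textured region approaches the maximum entropy of log2(bins) and gets the smallest.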
Affiliation(s)
- Yuanyuan Li
- School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou 221116, China
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Yanjing Sun
- School of Information and Electrical Engineering, China University of Mining and Technology, Xuzhou 221116, China
- Correspondence:
- Mingyao Zheng
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xinghua Huang
- College of Automation, Chongqing University, Chongqing 400044, China
- Guanqiu Qi
- Department of Mathematics and Computer Information Science, Mansfield University of Pennsylvania, Mansfield, PA 16933, USA
- School of Computing, Informatics, and Decision Systems Engineering, Arizona State University, Tempe, AZ 85287, USA
- Hexu Hu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Zhiqin Zhu
- College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China