1. Dinesh C, Cheung G, Bagheri S, Bajic IV. Efficient Signed Graph Sampling via Balancing & Gershgorin Disc Perfect Alignment. IEEE Transactions on Pattern Analysis and Machine Intelligence 2025; 47:2330-2348. [PMID: 40030838] [DOI: 10.1109/tpami.2024.3524180]
Abstract
A basic premise in graph signal processing (GSP) is that a graph encoding pairwise (anti-)correlations of the targeted signal as edge weights is leveraged for graph filtering. Existing fast graph sampling schemes are designed and tested only for positive graphs describing positive correlations. However, many real-world datasets exhibit strong anti-correlations, and thus a suitable model is a signed graph, containing both positive and negative edge weights. In this paper, we propose the first linear-time method for sampling signed graphs, centered on the concept of balanced signed graphs. Specifically, given an empirical covariance matrix, we first learn a sparse inverse matrix, interpreted as the graph Laplacian of a signed graph. We approximate this signed graph with a balanced signed graph via fast edge-weight augmentation in linear time, where the eigenpairs of the balanced graph's Laplacian define the graph frequencies. Next, we select a node subset for sampling to minimize the error of the signal interpolated from the samples, in two steps. We first align all Gershgorin disc left-ends of the Laplacian at the smallest eigenvalue via a similarity transform, leveraging a recent linear algebra theorem called Gershgorin disc perfect alignment (GDPA). We then perform sampling on the transformed Laplacian using a previous fast Gershgorin disc alignment sampling (GDAS) scheme. Experiments show that our signed graph sampling method outperforms fast sampling schemes designed for positive graphs on various datasets with anti-correlations.
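The Gershgorin-disc machinery behind GDPA and GDAS can be illustrated in a few lines. Below is a minimal numpy sketch (the graph and its weights are illustrative, not from the paper): for the Laplacian of a positive graph, every disc left-end, the diagonal entry minus the sum of off-diagonal magnitudes, already sits at 0, which lower-bounds the smallest eigenvalue.

```python
import numpy as np

# Small positive graph: 3 nodes, edge weights w01 = 1, w12 = 2.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0]])
L = np.diag(W.sum(axis=1)) - W  # combinatorial graph Laplacian

# Gershgorin disc for row i: center L[i, i], radius = sum of |off-diagonals|.
centers = np.diag(L)
radii = np.abs(L).sum(axis=1) - np.abs(centers)
left_ends = centers - radii

# Every eigenvalue lies in the union of the discs, so the smallest
# disc left-end lower-bounds the smallest eigenvalue of L.
lmin = np.linalg.eigvalsh(L).min()
print(left_ends.min(), lmin)
```

For a positive-graph Laplacian the bound is tight at 0; for a signed graph the left-ends scatter, which is why the similarity transform in the paper is needed to realign them.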
2. Wei Y, Li Q, Hou W. Image restoration model for microscopic defocused images based on blurring kernel guidance. Heliyon 2024; 10:e36151. [PMID: 39229525] [PMCID: PMC11369444] [DOI: 10.1016/j.heliyon.2024.e36151]
Abstract
Defocus blur severely limits the observation accuracy and application range of optical microscopes, and the blurring kernel function is a key parameter for high-resolution image restoration. However, solving for it is complicated and computationally costly. Most neural-network-based restoration methods place high demands on datasets, and the resolution of the restored images is limited because the blurring kernel is not quantitatively estimated. In this study, an image restoration method for microscopic defocused images guided by blurring kernel estimation is proposed. First, to reduce the kernel estimation error caused by the difference between positive and negative defocus in microscopic imaging, a defocused-image classification network is designed to classify input images by defocus distance and direction; its outputs are fed into a blurring kernel extraction network composed of feature extraction, correlation, and kernel reconstruction layers. Second, a non-blind restoration model is proposed by introducing the kernel extraction module into a U-Net-based restoration network, and the kernel estimation and image restoration losses are trained jointly to realize restoration guided by kernel estimation. Finally, experimental results demonstrate significant improvements in both peak signal-to-noise ratio and structural similarity index measure compared with other methods.
Affiliation(s)
- Yangjie Wei: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, College of Computer Science and Engineering, Northeastern University, Wenhua Street 3, Shenyang, 110819, China
- Qifei Li: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, College of Computer Science and Engineering, Northeastern University, Wenhua Street 3, Shenyang, 110819, China
- Weihan Hou: Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, College of Computer Science and Engineering, Northeastern University, Wenhua Street 3, Shenyang, 110819, China
3. Liao W, Subpa-Asa A, Asano Y, Zheng Y, Kajita H, Imanishi N, Yagi T, Aiso S, Kishi K, Sato I. Reliability-Aware Restoration Framework for 4D Spectral Photoacoustic Data. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:15445-15461. [PMID: 37651493] [DOI: 10.1109/tpami.2023.3310981]
Abstract
Spectral photoacoustic imaging (PAI) is a new technology that provides the 3D geometric structure of a target's interior together with 1D wavelength-dependent absorption information in a non-invasive manner, with potentially broad applications in clinical and medical diagnosis. Unfortunately, the usability of spectral PAI is severely affected by a time-consuming data-scanning process and complex noise. Therefore, in this study, we propose a reliability-aware restoration framework to recover clean 4D data from incomplete and noisy observations. To the best of our knowledge, this is the first attempt at the 4D spectral PA data restoration problem that solves data completion and denoising simultaneously. We first present a sequence of analyses, including modeling of data reliability in the depth and spectral domains, development of an adaptive correlation graph, and analysis of local patch orientation. On the basis of these analyses, we exploit global sparsity and local self-similarity for restoration. We demonstrate the effectiveness of the proposed approach through experiments on real data captured from patients, where it outperformed state-of-the-art methods in both objective evaluation and subjective assessment.
4. Zhang W, Zhu S, Liu L, Bai L, Han J, Guo E. High-throughput imaging through dynamic scattering media based on speckle de-blurring. Optics Express 2023; 31:36503-36520. [PMID: 38017801] [DOI: 10.1364/oe.499879]
Abstract
Imaging effectively through dynamic scattering media is both important and challenging. Imaging methods based on physical or learning models have been designed for object reconstruction. However, as the exposure time increases or the scattering medium changes more drastically, the speckle pattern superimposed during the camera integration time changes more significantly, altering the collected speckle structure and increasing blur, which poses significant challenges for reconstruction. Here, the underlying structural information of blurred speckles is recovered with a proposed speckle de-blurring algorithm, and a high-throughput imaging method through rapidly changing scattering media is proposed for reconstruction under long exposure. To handle the varying degrees of blur across different regions of the speckle, a block-based method is proposed that divides the speckle into distinct sub-speckles, enabling reconstruction of hidden objects. Imaging of hidden objects of different complexity through dynamic scattering media is demonstrated, and the reconstruction results improve significantly for speckles with different degrees of blur, verifying the effectiveness of the method. The approach is high-throughput and enables non-invasive imaging from the collection of a single speckle. It operates directly on blurred speckles, making it compatible with both traditional speckle-correlation methods and deep learning (DL) methods, and it offers a new way of thinking about practical scattering-imaging challenges.
5. Cao S, Chang Y, Xu S, Fang H, Yan L. Nonlinear Deblurring for Low-Light Saturated Image. Sensors (Basel, Switzerland) 2023; 23:3784. [PMID: 37112126] [PMCID: PMC10146853] [DOI: 10.3390/s23083784]
Abstract
Single-image deblurring has achieved significant progress for natural daytime images, but saturation is common in blurry images captured under low light with long exposure times. Conventional linear deblurring methods handle natural blurry images well yet produce severe ringing artifacts when recovering low-light saturated blurry images. To solve this problem, we formulate saturated deblurring as a nonlinear model in which saturated and unsaturated pixels are modeled adaptively. Specifically, we introduce a nonlinear function around the convolution operator to account for saturation in the presence of blurring. The proposed method has two advantages over previous methods. On the one hand, it restores natural images as well as conventional deblurring methods while reducing estimation errors in saturated areas and suppressing ringing artifacts. On the other hand, compared with recent saturation-aware deblurring methods, it captures the formation of unsaturated and saturated degradations directly, rather than through cumbersome and error-prone detection steps. Notably, this nonlinear degradation model can be naturally formulated within a maximum a posteriori framework and efficiently decoupled into several solvable sub-problems via the alternating direction method of multipliers (ADMM). Experimental results on both synthetic and real-world images demonstrate that the proposed deblurring algorithm outperforms state-of-the-art low-light saturation-based deblurring methods.
Affiliation(s)
- Shuning Cao: National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China; Artificial Intelligence Center, Peng Cheng Laboratory, Shenzhen 518055, China
- Yi Chang: National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Shengqi Xu: National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Houzhang Fang: School of Computer Science and Technology, Xidian University, Xi’an 710071, China
- Luxin Yan: National Key Laboratory of Science and Technology on Multispectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan 430074, China
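The clipped observation model described in this abstract is easy to sketch. A minimal numpy illustration of saturated blur formation (the signal, kernel, and saturation level are all hypothetical, not taken from the paper):

```python
import numpy as np

def saturated_blur(x, k, s=1.0):
    """Nonlinear observation model: clip the linearly blurred signal
    at the sensor saturation level s. The clipping is the nonlinearity
    wrapped around the convolution operator."""
    y_lin = np.convolve(x, k, mode="same")  # linear blur k * x
    return np.minimum(y_lin, s)             # saturation nonlinearity

x = np.zeros(9)
x[4] = 3.0                                  # bright point source
k = np.array([0.25, 0.5, 0.25])             # simple blur kernel
y = saturated_blur(x, k)
print(y)
```

Running a linear deconvolution on `y` would treat the clipped center pixel as a true measurement, which is exactly what causes the ringing artifacts the paper sets out to avoid.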
6. Chen H, Du B, Luo S, Hu W. Deep Point Set Resampling via Gradient Fields. IEEE Transactions on Pattern Analysis and Machine Intelligence 2023; 45:2913-2930. [PMID: 35576422] [DOI: 10.1109/tpami.2022.3175183]
Abstract
3D point clouds acquired by scanning real-world objects or scenes have found a wide range of applications, including immersive telepresence, autonomous driving, and surveillance. They are often perturbed by noise or suffer from low density, which obstructs downstream tasks such as surface reconstruction and understanding. In this paper, we propose a novel point set resampling paradigm for restoration, which learns continuous gradient fields of point clouds that converge points onto the underlying surface. In particular, we represent a point cloud via its gradient field, i.e., the gradient of the log-probability density function, and enforce the gradient field to be continuous, guaranteeing the continuity of the model for solvable optimization. Based on the continuous gradient fields estimated by a proposed neural network, resampling a point cloud amounts to performing gradient-based Markov chain Monte Carlo (MCMC) on the input noisy or sparse point cloud. Further, we introduce regularization into the gradient-based MCMC during restoration, which refines the intermediate resampled point cloud iteratively and accommodates various priors in the resampling process. Extensive experimental results demonstrate that the proposed point set resampling achieves state-of-the-art performance in representative restoration tasks, including point cloud denoising and upsampling.
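The core move, gradient-based updates that pull noisy samples toward the high-density surface, can be sketched in 1-D. A toy numpy example assuming a known Gaussian offset model (the surface, the closed-form score function, and the step size are illustrative assumptions; the paper learns the gradient field with a neural network):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assume the "surface" is the line y = 0 and clean-point offsets are
# Gaussian around it: log p(y) = -y^2 / (2 s^2) + const.
s = 0.1
def score(y):
    """Gradient of the log-density with respect to y."""
    return -y / s**2

# Noisy 1-D offsets of sampled points from the surface.
y = rng.normal(0.0, 0.5, size=2000)

# Deterministic core of gradient-based resampling: repeated small
# steps along the score pull every point toward the surface.
step = 1e-3
for _ in range(200):
    y = y + step * score(y)

print(np.std(y))
```

Adding a decaying noise term to each update would turn this gradient ascent into the Langevin-style MCMC the abstract refers to.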
7. Xu C, Zhang C, Ma M, Zhang J. Blind image deconvolution via an adaptive weighted TV regularization. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-223828]
Abstract
Blind image deconvolution has attracted growing attention in image processing and computer vision. Total variation (TV) regularization effectively preserves image edges, but because it is not adaptive, it does not perform well when restoring images with complex structures. In this paper, we propose a new blind image deconvolution model using an adaptively weighted TV regularization, which better handles local image features. Numerically, we design an effective alternating direction method of multipliers (ADMM) to solve this non-smooth model. Experimental results illustrate the superiority of the proposed method over other related blind deconvolution methods.
Affiliation(s)
- Chenguang Xu: Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Chao Zhang: Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Mingxi Ma: College of Science, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Jun Zhang: College of Science and Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
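In an ADMM splitting, a (weighted) TV term yields a closed-form soft-thresholding update on the image gradients. A small numpy sketch; the adaptive-weight formula below is a toy stand-in to show the idea of shrinking edges less, not the paper's actual weighting:

```python
import numpy as np

def soft_threshold(v, t):
    """Shrinkage operator: prox of t * ||.||_1, the closed-form
    auxiliary-variable update in an ADMM splitting of a TV term."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_tv_shrink(grad, lam=1.0):
    """Weighted shrinkage of image gradients: a smaller threshold
    where |grad| is large, so strong edges are penalized less.
    The weight 1 / (1 + |grad|) is a hypothetical choice."""
    w = 1.0 / (1.0 + np.abs(grad))
    return soft_threshold(grad, lam * w)

g = np.array([-2.0, 0.1, 4.0])  # hypothetical image gradients
print(adaptive_tv_shrink(g))
```

Note how the small gradient (likely noise) is zeroed out while the large gradients (likely edges) lose only a small fraction of their magnitude.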
8. Liu M, Yu Y, Li Y, Ji Z, Chen W, Peng Y. Lightweight MIMO-WNet for single image deblurring. Neurocomputing 2023. [DOI: 10.1016/j.neucom.2022.10.028]
9. Zhang X, Cheung G, Pang J, Sanghvi Y, Gnanasambandam A, Chan SH. Graph-Based Depth Denoising & Dequantization for Point Cloud Enhancement. IEEE Transactions on Image Processing 2022; 31:6863-6878. [PMID: 36306306] [DOI: 10.1109/tip.2022.3214077]
Abstract
A 3D point cloud is typically constructed from depth measurements acquired by sensors at one or more viewpoints. The measurements suffer from both quantization and noise corruption. To improve quality, previous works denoise a point cloud a posteriori after projecting the imperfect depth data onto 3D space. Instead, we enhance depth measurements directly on the sensed images a priori, before synthesizing a 3D point cloud. By enhancing near the physical sensing process, we tailor our optimization to our depth formation model before subsequent processing steps that obscure measurement errors. Specifically, we model depth formation as a combined process of signal-dependent noise addition and non-uniform log-based quantization. The designed model is validated (with parameters fitted) using collected empirical data from a representative depth sensor. To enhance each pixel row in a depth image, we first encode intra-view similarities between available row pixels as edge weights via feature graph learning. We next establish inter-view similarities with another rectified depth image via viewpoint mapping and sparse linear interpolation. This leads to a maximum a posteriori (MAP) graph filtering objective that is convex and differentiable. We minimize the objective efficiently using accelerated gradient descent (AGD), where the optimal step size is approximated via Gershgorin circle theorem (GCT). Experiments show that our method significantly outperformed recent point cloud denoising schemes and state-of-the-art image denoising schemes in two established point cloud quality metrics.
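The step-size trick mentioned at the end of this abstract, bounding the largest eigenvalue via the Gershgorin circle theorem (GCT) instead of an eigen-decomposition, looks like this in numpy. The system matrix here is a random positive semi-definite stand-in for the paper's MAP objective:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(6, 6))
A = M @ M.T + np.eye(6)  # PSD system matrix of a quadratic objective

# GCT: every eigenvalue lies in a disc centered at a_ii with radius
# sum_{j != i} |a_ij|, so the largest disc right-end upper-bounds
# lambda_max(A) without any eigen-decomposition.
radii = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
L_gct = (np.diag(A) + radii).max()
step = 1.0 / L_gct  # safe gradient-descent step size for f = 0.5 x'Ax - b'x

lam_max = np.linalg.eigvalsh(A).max()
print(L_gct, lam_max)
```

Because `L_gct` is computed in O(n^2) from the matrix entries, the bound stays cheap even when the objective is re-solved many times.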
10. An integrated imaging sensor for aberration-corrected 3D photography. Nature 2022; 612:62-71. [PMID: 36261533] [DOI: 10.1038/s41586-022-05306-8]
Abstract
Planar digital image sensors facilitate broad applications in a wide range of areas, and the number of pixels has scaled up rapidly in recent years. However, the practical performance of imaging systems is fundamentally limited by spatially nonuniform optical aberrations originating from imperfect lenses or environmental disturbances. Here we propose an integrated scanning light-field imaging sensor, termed a meta-imaging sensor, to achieve high-speed aberration-corrected three-dimensional photography for universal applications without additional hardware modifications. Instead of directly detecting a two-dimensional intensity projection, the meta-imaging sensor captures extra-fine four-dimensional light-field distributions through a vibrating coded microlens array, enabling flexible and precise synthesis of complex-field-modulated images in post-processing. Using the sensor, we achieve high-performance photography up to a gigapixel with a single spherical lens without a data prior, leading to orders-of-magnitude reductions in system capacity and costs for optical imaging. Even in the presence of dynamic atmosphere turbulence, the meta-imaging sensor enables multisite aberration correction across 1,000 arcseconds on an 80-centimetre ground-based telescope without reducing the acquisition speed, paving the way for high-resolution synoptic sky surveys. Moreover, high-density accurate depth maps can be retrieved simultaneously, facilitating diverse applications from autonomous driving to industrial inspections.
11. Nasonov AV, Nasonova AA. Linear Blur Parameters Estimation Using a Convolutional Neural Network. Pattern Recognition and Image Analysis 2022. [DOI: 10.1134/s1054661822030270]
12. CDMC-Net: Context-Aware Image Deblurring Using a Multi-scale Cascaded Network. Neural Processing Letters 2022. [DOI: 10.1007/s11063-022-10976-6]
13. Dinesh C, Cheung G, Bajic IV. Point Cloud Video Super-Resolution via Partial Point Coupling and Graph Smoothness. IEEE Transactions on Image Processing 2022; 31:4117-4132. [PMID: 35696478] [DOI: 10.1109/tip.2022.3166644]
Abstract
A point cloud (PC) is a collection of discrete geometric samples of a physical object in 3D space. A PC video consists of temporal frames evenly spaced in time, each containing a static PC at one time instant. PCs in adjacent frames typically lack point-to-point (P2P) correspondence, so exploiting temporal redundancy for PC restoration across frames is difficult. In this paper, we focus on the super-resolution (SR) problem for PC video: increasing the point density of PCs in video frames while preserving salient geometric features consistently across time. We accomplish this with two ideas. First, we establish partial P2P coupling between PCs of adjacent frames by interpolating interior points in a low-resolution PC patch in frame t and translating them to a corresponding patch in frame t+1, via a motion model computed by iterative closest point (ICP). Second, we promote piecewise smoothness of the 3D geometry in each patch using a feature graph Laplacian regularizer (FGLR) in an easily computable quadratic form. The two ideas translate to an unconstrained quadratic programming (QP) problem with a system of linear equations as its solution, in which we ensure numerical stability by upper-bounding the condition number of the coefficient matrix. Finally, to improve the accuracy of the ICP motion model, we re-sample points in a super-resolved patch at time t to better match the low-resolution patch at time t+1 via bipartite graph matching after each SR iteration. Experimental results show temporally consistent super-resolved PC videos generated by our scheme, outperforming SR competitors that optimize on a per-frame basis, in two established PC metrics.
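Once correspondences are fixed, each ICP iteration reduces to a closed-form rigid fit (the Kabsch/Procrustes solution). A self-contained numpy sketch with synthetic points; all data here is made up for illustration:

```python
import numpy as np

def rigid_fit(P, Q):
    """Closed-form least-squares rotation R and translation t mapping
    rows of P onto rows of Q (Kabsch solution), the inner step of an
    ICP iteration once correspondences are fixed."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

rng = np.random.default_rng(2)
P = rng.normal(size=(20, 3))            # synthetic patch at frame t
theta = 0.3                             # ground-truth rotation about z
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
Q = P @ R_true.T + np.array([0.5, -1.0, 2.0])  # patch at frame t+1
R, t = rigid_fit(P, Q)
print(np.allclose(P @ R.T + t, Q))
```

Full ICP alternates this fit with nearest-neighbor correspondence updates; the paper refines those correspondences further via bipartite graph matching.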
14. Yang Q, Ma Z, Xu Y, Li Z, Sun J. Inferring Point Cloud Quality via Graph Similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:3015-3029. [PMID: 33360982] [DOI: 10.1109/tpami.2020.3047083]
Abstract
Objective quality estimation of media content plays a vital role in a wide range of applications. Though numerous metrics exist for 2D images and videos, similar metrics are missing for 3D point clouds with unstructured and non-uniformly distributed points. In this paper, we propose GraphSIM, a metric to accurately and quantitatively predict human perception of a point cloud with superimposed geometry and color impairments. The human visual system is more sensitive to high spatial-frequency components (e.g., contours and edges) and weighs local structural variations more than individual point intensities. Motivated by this, we use the graph signal gradient as a quality index to evaluate point cloud distortions. Specifically, we first extract geometric keypoints by resampling the reference point cloud geometry to form an object skeleton. Then, we construct local graphs centered at these keypoints for both the reference and distorted point clouds. Next, we compute three moments of the color gradients between each centered keypoint and all other points in the same local graph as local significance-similarity features. Finally, we obtain the similarity index by pooling the local graph significance across all color channels and averaging across all graphs. We evaluate GraphSIM on two large, independent point cloud assessment datasets covering a wide range of impairments (e.g., re-sampling, compression, and additive noise). GraphSIM provides state-of-the-art performance for all distortions, with noticeable gains in predicting the subjective mean opinion score (MOS) compared with the point-wise distance-based metrics adopted in standardized reference software. Ablation studies further show that GraphSIM generalizes to various scenarios with consistent performance when its key modules and parameters are adjusted. Models and associated materials are available at https://njuvision.github.io/GraphSIM or http://smt.sjtu.edu.cn/papers/GraphSIM.
15. Dong W, Du Y, Xu J, Dong F, Ren S. Spatially adaptive blind deconvolution methods for optical coherence tomography. Computers in Biology and Medicine 2022; 147:105650. [PMID: 35653849] [DOI: 10.1016/j.compbiomed.2022.105650]
Abstract
Optical coherence tomography (OCT) is a powerful noninvasive imaging technique for detecting microvascular abnormalities. By optical imaging principles, an OCT image is blurred in the out-of-focus region. Digital deconvolution is a commonly used method for image deblurring; however, the accuracy of traditional methods, e.g., the Richardson-Lucy method, depends on prior knowledge of the point spread function (PSF), which varies with imaging depth and is difficult to determine. In this paper, a spatially adaptive blind deconvolution framework is proposed for recovering clear OCT images from blurred images without a known PSF. First, a depth-dependent PSF is derived from the Gaussian beam model. Second, the blind deconvolution problem is formulated as a regularized energy minimization problem using the least squares method. Third, the clear image and the imaging depth are recovered simultaneously from blurry images using an alternating optimization method. To improve computational efficiency, an accelerated alternating optimization method is proposed based on the convolution theorem and the Fourier transform. The method is numerically implemented with various regularization terms, including total variation, Tikhonov, and l1 norm terms, and is used to deblur synthetic and experimental OCT images. The influence of the regularization term on deblurring performance is discussed. The results show that the proposed method can accurately deblur OCT images, and that the proposed acceleration significantly improves the computational efficiency of blind deconvolution.
Affiliation(s)
- Wenxue Dong: Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Yina Du: Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Jingjiang Xu: School of Physics and Optoelectronic Engineering, Foshan University, Foshan, China
- Feng Dong: Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
- Shangjie Ren: Tianjin Key Laboratory of Process Measurement and Control, School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China
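The Richardson-Lucy baseline named in this abstract is compact enough to sketch in 1-D. The PSF and signal below are hypothetical; real OCT deblurring would use the depth-dependent PSF the paper derives:

```python
import numpy as np

def richardson_lucy(y, k, n_iter=50):
    """Classic Richardson-Lucy iteration (the non-blind baseline):
    multiplicative updates assuming a known PSF k and Poisson noise."""
    k_flip = k[::-1]                        # adjoint of the blur operator
    x = np.full_like(y, y.mean())           # flat nonnegative initial estimate
    for _ in range(n_iter):
        denom = np.convolve(x, k, mode="same")
        ratio = y / np.maximum(denom, 1e-12)  # guard against divide-by-zero
        x = x * np.convolve(ratio, k_flip, mode="same")
    return x

k = np.array([0.25, 0.5, 0.25])             # hypothetical PSF
x_true = np.zeros(15)
x_true[7] = 1.0                             # point reflector
y = np.convolve(x_true, k, mode="same")     # noiseless blurred signal
x_hat = richardson_lucy(y, k, n_iter=200)
```

The update keeps the estimate nonnegative by construction, which is why the method is popular for intensity data; its dependence on an accurate `k` is exactly the limitation the paper's blind framework addresses.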
16. A Deconvolutional Deblurring Algorithm Based on Dual-Channel Images. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12104864]
Abstract
Aiming at motion-blur restoration of large-scale dual-channel space-variant images, this paper proposes a dual-channel image deblurring method based on block aggregation, developed from a study of imaging principles and existing algorithms. The study first analyzes the dual-channel space-variant imaging model, reconstructs the kernel estimation process using side prior information from the correlation of the two channel images, and then uses a clustering algorithm to classify kernels and restore the images. For kernel estimation, two kinds of regularization terms are proposed: one based on image correlation and the other based on the input from the other channel. For image restoration, the mean-shift clustering algorithm is used to compute block-image kernel weights, and the final restored image is reconstructed according to these weights. As the experiments show, the restoration quality of this algorithm exceeds that of the compared algorithms.
17.
Abstract
Localization and mapping technologies are of great importance for all varieties of Unmanned Aerial Vehicles (UAVs). In the near future, the use of micro/nano-size UAVs is expected to increase. Such vehicles are sometimes expendable platforms, and reuse may not be possible, so compact, mounted, low-cost cameras are preferred due to weight, cost, and size limitations. Visual simultaneous localization and mapping (vSLAM) methods provide situational awareness for micro/nano-size UAVs. Fast rotational movements during flight with gimbal-free, mounted cameras cause motion blur; above a certain blur level, tracking losses occur and vSLAM algorithms stop operating effectively. In this study, a novel vSLAM framework is proposed that prevents tracking losses in micro/nano-UAVs due to motion blur. The framework determines the blur level of frames obtained from the platform camera and restores frames whose focus-measure score falls below a threshold using dedicated motion-deblurring methods. The major causes of tracking losses are analyzed experimentally, and vSLAM algorithms are made robust by the proposed framework. The framework prevents tracking losses at processing speeds of 5, 10, and 20 fps, speeds at which standard vSLAM algorithms previously failed to continue normal operation.
18.
Abstract
Blind image deblurring is a well-known ill-posed inverse problem in computer vision. To make the problem well-posed, this paper puts forward a plain but effective regularization method, spectral norm regularization (SN), which can be regarded as the symmetrical form of the spectral norm. The work is inspired by the observation that the SN value increases after an image is blurred. Based on this observation, a blind deblurring algorithm (BDA-SN) is designed. BDA-SN builds a deblurring estimator for the image degradation process by investigating the inherent properties of SN and the image gradient. Compared with previous image regularization methods, SN differentiates clear and degraded images far more effectively, so the SN of an image can help deblurring in various scenes, such as text, face, natural, and saturated images. Qualitative and quantitative evaluations demonstrate that BDA-SN achieves favorable performance on real and simulated images, with the average PSNR reaching 31.41 on the benchmark dataset of Levin et al.
|
19
|
Huang H, Sun Z, Liu S, Di Y, Xu J, Liu C, Xu R, Song H, Zhan S, Wu J. Underwater hyperspectral imaging for in situ underwater microplastic detection. SCIENCE OF THE TOTAL ENVIRONMENT 2021; 776:145960. [DOI: 10.1016/j.scitotenv.2021.145960] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/10/2024]
|
20
|
Sun S, Duan L, Xu Z, Zhang J. Blind Deblurring Based on Sigmoid Function. SENSORS 2021; 21:s21103484. [PMID: 34067684 PMCID: PMC8156062 DOI: 10.3390/s21103484] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/23/2021] [Revised: 05/10/2021] [Accepted: 05/12/2021] [Indexed: 11/16/2022]
Abstract
Blind image deblurring, also known as blind image deconvolution, is a long-standing challenge in image processing and low-level vision. To restore a clear version of a severely degraded image, this paper proposes a blind deblurring algorithm based on the sigmoid function, which constructs novel estimators for both the original image and the degradation process by exploiting properties of the sigmoid function together with image-derivative constraints. Owing to these symmetric, non-linear estimators of low computational complexity, the algorithm recovers high-quality images; it is also extended to image sequences. The sigmoid function enables state-of-the-art performance in various scenarios, including natural, text, face, and low-illumination images, and the method extends naturally to non-uniform deblurring. Quantitative and qualitative evaluations indicate that the algorithm removes blur and improves image quality on both real and simulated images. Finally, the use of the sigmoid function offers a new approach to performance optimization in image restoration.
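A minimal sketch of the sigmoid building block; `gradient_weight` is a hypothetical illustration of using it to softly separate strong edges from weak gradients, not the paper's actual estimator:

```python
import numpy as np

def sigmoid(x, a=1.0):
    """Logistic sigmoid with steepness a: the symmetric, non-linear
    primitive the estimators are built from."""
    return 1.0 / (1.0 + np.exp(-a * np.asarray(x, dtype=np.float64)))

def gradient_weight(grad, a=10.0):
    """Hypothetical illustration: map |gradient| through a shifted
    sigmoid so strong edges get weight near 1 and weak gradients
    (likely noise or ringing) get weight near 0."""
    return sigmoid(np.abs(grad), a) * 2.0 - 1.0
```

The shift and scale keep the weight exactly 0 at zero gradient and asymptotically 1 for strong edges, which is the soft, differentiable thresholding behaviour such estimators exploit.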
Affiliation(s)
- Shuhan Sun: Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Lizhen Duan: Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China; School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China; Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Zhiyong Xu: Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China; Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China
- Jianlin Zhang: Key Laboratory of Optical Engineering, Chinese Academy of Sciences, Chengdu 610209, China; Institute of Optics and Electronics, Chinese Academy of Sciences, Chengdu 610209, China; Correspondence: ; Tel.: +86-135-5013-5646
|
21
|
Huang L, Xia Y, Ye T. Effective Blind Image Deblurring Using Matrix-Variable Optimization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:4653-4666. [PMID: 33886469 DOI: 10.1109/tip.2021.3073856] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Blind image deblurring remains challenging because the blur is unknown and the computation is demanding. Recently, matrix-variable optimization has demonstrated potential computational advantages. This paper proposes an effective matrix-variable optimization method for blind image deblurring. The blur kernel matrix is decomposed exactly via a direct SVD technique, and the blur kernel and original image are estimated by minimizing a matrix-variable optimization problem with blur-kernel constraints. A matrix-type alternating iterative algorithm is proposed to solve this problem. Experimental results show that the proposed method outperforms state-of-the-art blind image deblurring algorithms in both image quality and computation time.
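The exact SVD factorization of a blur kernel matrix can be illustrated on a small 1-D circulant stand-in (the paper works with the full kernel matrix; the kernel below is an assumed example):

```python
import numpy as np

def blur_matrix_1d(kernel, n):
    """Dense 1-D convolution (circulant) matrix for a blur kernel:
    a small stand-in for the blur kernel matrix being decomposed."""
    k = len(kernel)
    H = np.zeros((n, n))
    for i in range(n):
        for j, w in enumerate(kernel):
            H[i, (i + j - k // 2) % n] = w
    return H

# Exact decomposition H = U diag(s) V^T: the factorization that lets a
# matrix-variable method work with well-structured factors instead of
# the raw (often ill-conditioned) blur operator.
kernel = np.array([0.25, 0.5, 0.25])   # assumed example kernel
H = blur_matrix_1d(kernel, 16)
U, s, Vt = np.linalg.svd(H)
```

The singular values `s` expose the conditioning of the blur operator directly, which is what makes SVD-based splitting attractive for deblurring.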
|
22
|
Akpinar U, Sahin E, Meem M, Menon R, Gotchev A. Learning Wavefront Coding for Extended Depth of Field Imaging. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:3307-3320. [PMID: 33625984 DOI: 10.1109/tip.2021.3060166] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Depth of field is an important property of imaging systems that strongly affects the quality of the acquired spatial information. Extended depth of field (EDoF) imaging is a challenging ill-posed problem that has been extensively addressed in the literature. We propose a computational imaging approach for EDoF in which wavefront coding is performed by a diffractive optical element (DOE) and deblurring by a convolutional neural network. Thanks to end-to-end differentiable modeling of optical image formation and computational post-processing, we jointly optimize the optical design, i.e., the DOE, and the deblurring network through standard gradient-descent methods. Based on the properties of the underlying refractive lens and the desired EDoF range, we provide an analytical expression for the search space of the DOE, which is instrumental to the convergence of the end-to-end network. We achieve EDoF imaging performance superior to the state of the art, demonstrating results with minimal artifacts in various scenarios, including deep 3D scenes and broadband imaging.
|
23
|
Gu C, Lu X, He Y, Zhang C. Blur Removal Via Blurred-Noisy Image Pair. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2020; 30:345-359. [PMID: 33186109 DOI: 10.1109/tip.2020.3036745] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Complex blur, such as a mixture of space-variant and space-invariant blur that is hard to model mathematically, widely exists in real images. In this article, we propose a novel image deblurring method that does not need to estimate blur kernels. We utilize a pair of images that can easily be acquired in low-light situations: (1) a blurred image taken with low shutter speed and low ISO noise, and (2) a noisy image captured with high shutter speed and high ISO noise. Slicing the blurred image into patches, we extend the Gaussian mixture model (GMM) to model the underlying intensity distribution of each patch using the corresponding patches in the noisy image. Patch correspondences are computed by analyzing the optical flow between the two images, and the Expectation-Maximization (EM) algorithm is used to estimate the GMM parameters. To preserve sharp features, we add a bilateral term to the objective function in the M-step, and we finally add a detail layer to the deblurred image for refinement. Extensive experiments on both synthetic and real-world data demonstrate that our method outperforms state-of-the-art techniques in robustness, visual quality, and quantitative metrics.
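A toy 1-D EM fit of a GMM to patch intensities, omitting the paper's bilateral M-step term, optical-flow correspondences, and detail layer; initialization by quantiles is an assumption for the sketch:

```python
import numpy as np

def em_gmm_1d(x, K=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture over pixel intensities.
    Stand-in for the per-patch GMMs fitted from the noisy image."""
    # Deterministic quantile initialization (illustrative choice)
    mu = np.quantile(x, np.linspace(0.1, 0.9, K))
    var = np.full(K, x.var() + 1e-6)
    pi = np.full(K, 1.0 / K)
    for _ in range(iters):
        # E-step: responsibilities via log-domain Gaussian densities
        d = (x[:, None] - mu[None, :]) ** 2
        logp = -0.5 * (d / var + np.log(2 * np.pi * var)) + np.log(pi)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form updates of weights, means, variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

In the paper's setting, the mixture for each blurred patch is fitted to intensities from the corresponding noisy-image patches, so the estimated means inherit the sharper structure of the noisy exposure.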
|
24
|
Wei XX, Zhang L, Huang H. High-quality blind defocus deblurring of multispectral images with optics and gradient prior. OPTICS EXPRESS 2020; 28:10683-10704. [PMID: 32225647 DOI: 10.1364/oe.390158] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/10/2020] [Accepted: 03/19/2020] [Indexed: 06/10/2023]
Abstract
This paper presents a blind defocus deblurring method that produces high-quality deblurred multispectral images. The high quality is achieved by two means: (i) more accurate kernel estimation based on an optics prior that simulates simple-lens imaging, and (ii) gradient-based inter-channel correlation, with a reference image generated by content-adaptively combining adjacent channels, for restoring the latent sharp image. As a result, our method is both effective and efficient at deblurring defocused multispectral images, restoring obscured details well. Experiments on several multispectral image datasets demonstrate its advantages over state-of-the-art deblurring methods.
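An illustrative reference image for channel t, combining adjacent channels weighted by gradient-magnitude correlation; the weighting rule here is an assumption, not the paper's exact content-adaptive scheme:

```python
import numpy as np

def grad_mag(img):
    """Forward-difference gradient magnitude."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.hypot(gx, gy)

def reference_image(channels, t):
    """Hypothetical sketch: combine the spectral neighbours of channel t
    with weights proportional to how well their gradient magnitudes
    correlate with channel t's gradients."""
    g_t = grad_mag(channels[t]).ravel()
    neighbours = [i for i in (t - 1, t + 1) if 0 <= i < len(channels)]
    w = np.array([max(np.corrcoef(g_t, grad_mag(channels[i]).ravel())[0, 1], 0.0)
                  for i in neighbours])
    w = w / w.sum() if w.sum() > 0 else np.full(len(neighbours), 1.0 / len(neighbours))
    return sum(wi * channels[i] for wi, i in zip(w, neighbours))
```

The idea being illustrated is only the inter-channel correlation: channels whose edges agree with the target channel contribute more to the reference used for restoration.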
|
25
|
|