1. Yang P, Liu M, Dong L, Kong L, Zhao Y, Hui M. Spatially varying defocus map estimation from a single image based on spatial aliasing sampling method. Optics Express 2024; 32:8959-8973. [PMID: 38571141] [DOI: 10.1364/oe.519059]
Abstract
In current optical systems, defocus blur is inevitable due to the constrained depth of field. However, it is difficult to accurately identify the defocus amount at each pixel position, as the point spread function varies spatially. In this paper, we introduce a histogram-invariant spatial aliasing sampling method for reconstructing all-in-focus images, which addresses the challenge of insufficient pixel-level annotated samples, and subsequently introduce a high-resolution network for estimating spatially varying defocus maps from a single image. The accuracy of the proposed method is evaluated on various synthetic and real data. The experimental results demonstrate that our proposed model significantly outperforms state-of-the-art methods for defocus map estimation.
2. Yuan T, Jiang W, Ye Y, Hai Y, Yi D. Confocal microscopy based on dual blur depth measurement. Journal of the Optical Society of America A 2023; 40:2002-2007. [PMID: 38038065] [DOI: 10.1364/josaa.499900]
Abstract
In this paper, we propose a confocal microscopy method based on dual blur depth measurement (DBCM). The first blur is defocus blur, and the second is an artificial convolutional blur. First, the DBCM blurs the defocused image with a known Gaussian kernel and computes the edge gradient ratio between the original and re-blurred images. Then, the axial positions of edges are measured against a calibrated measurement curve. Finally, depth information is inferred from the edges of the original image. Experiments show that the DBCM can achieve depth measurement from a single image. With a 10×/0.25 objective, the measured error for a 4.7397 µm step sample is 0.23 µm, a relative error of 4.8%.
3. Cao J, Chen Z, Jin M, Tian Y. An improved defocusing adaptive style transfer method based on a stroke pyramid. PLoS One 2023; 18:e0284742. [PMID: 37093872] [PMCID: PMC10124845] [DOI: 10.1371/journal.pone.0284742]
Abstract
Image style transfer aims to assign a specified artist's style to a real image. However, most existing methods cannot generate textures of various thicknesses due to the rich semantic information of the input image. The image loses some semantic information through style transfer with a uniform stroke size. To address the above problems, we propose an improved multi-stroke defocus adaptive style transfer framework based on a stroke pyramid, which mainly fuses various stroke sizes in the image spatial dimension to enhance the image content interpretability. We expand the receptive field of each branch and then fuse the features generated by the multiple branches based on defocus degree. Finally, we add an additional loss term to enhance the structural features of the generated image. The proposed model is trained using the Common Objects in Context (COCO) and Synthetic Depth of Field (SYNDOF) datasets, and the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used to evaluate the overall quality of the output image and its structural similarity with the content image, respectively. To validate the feasibility of the proposed algorithm, we compare the average PSNR and SSIM values of the output of the modified model and those of the original model. The experimental results show that the modified model improves the PSNR and SSIM values of the outputs by 1.43 and 0.12 on average, respectively. Compared with the single-stroke style transfer method, the framework proposed in this study improves the readability of the output images with more abundant visual expression.
Affiliation(s)
- Jianfang Cao: Department of Computer Science & Technology, Xinzhou Normal University, Xinzhou, China; School of Computer Science & Technology, Taiyuan University of Science and Technology, Taiyuan, China
- Zeyu Chen: Department of Computer Science & Technology, Xinzhou Normal University, Xinzhou, China; School of Computer Science & Technology, Taiyuan University of Science and Technology, Taiyuan, China
- Mengyan Jin: Department of Computer Science & Technology, Xinzhou Normal University, Xinzhou, China; School of Computer Science & Technology, Taiyuan University of Science and Technology, Taiyuan, China
- Yun Tian: Department of Computer Science & Technology, Xinzhou Normal University, Xinzhou, China
4. Half-Period Gray-Level Coding Strategy for Absolute Phase Retrieval. Photonics 2022. [DOI: 10.3390/photonics9070492]
Abstract
The N-ary gray-level (nGL) coding strategy is an effective method for absolute phase retrieval in the fringe projection technique. However, the conventional nGL method suffers from many unwrapping errors at the boundaries of codewords, and the number of codewords available in a single pattern is limited. This paper therefore proposes a new gray-level coding method based on half-period coding, which addresses both deficiencies. Specifically, we embed every period with a 2-bit codeword instead of a 1-bit codeword. Special correction and decoding methods are then proposed to correct the codewords and calculate the fringe orders, respectively. The proposed method can generate n² codewords with n gray levels in one pattern. Moreover, it is insensitive to moderate image blurring. Various experiments demonstrate the robustness and effectiveness of the proposed strategy.
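The codeword arithmetic behind half-period coding can be sketched as follows (a toy illustration with hypothetical function names; the paper's pattern generation, blur correction, and boundary handling are omitted). Splitting each period into two halves, each carrying one of n gray levels, yields a 2-digit base-n codeword, so one pattern encodes n² distinct fringe orders instead of n:

```python
def encode(fringe_order, n):
    """Map a fringe order in [0, n*n) to the two gray levels of one period."""
    first_half, second_half = divmod(fringe_order, n)
    return first_half, second_half

def decode(codeword, n):
    """Recover the fringe order from a period's pair of half-period gray levels."""
    first_half, second_half = codeword
    return first_half * n + second_half

n = 4  # gray levels
codewords = [encode(k, n) for k in range(n * n)]
assert len(set(codewords)) == n * n                        # n^2 distinct codewords
assert all(decode(encode(k, n), n) == k for k in range(n * n))
print(codewords[:5])  # [(0, 0), (0, 1), (0, 2), (0, 3), (1, 0)]
```

With n = 4 this gives 16 unambiguous fringe orders from a single pattern, which is the capacity gain the abstract describes.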
5. Hierarchical edge-aware network for defocus blur detection. Complex & Intelligent Systems 2022. [DOI: 10.1007/s40747-022-00711-y]
Abstract
Defocus blur detection (DBD) aims to separate the blurred and unblurred regions of a given image. Due to its potential practical applications, this task has attracted much attention. Most existing DBD models achieve competitive performance by aggregating multi-level features extracted from fully convolutional networks. However, they still face several challenges, such as coarse object boundaries of the defocus blur regions, background clutter, and the detection of low-contrast focal regions. In this paper, we develop a hierarchical edge-aware network to address these problems; to the best of our knowledge, it is the first attempt at an end-to-end network with edge awareness for DBD. We design an edge feature extraction network to capture boundary information, and a hierarchical interior perception network to generate local and global context information, which helps detect low-contrast focal regions. Moreover, a hierarchical edge-aware fusion network is proposed to hierarchically fuse edge information with semantic features. Benefiting from the rich edge information, the fused features yield more accurate boundaries. Finally, we propose a progressive feature refinement network to refine the output features. Experimental results on two widely used DBD datasets demonstrate that the proposed model outperforms state-of-the-art approaches.
6. Defocus blur detection via transformer encoder and edge guidance. Applied Intelligence 2022. [DOI: 10.1007/s10489-022-03303-y]
7. Tang C, Liu X, Zheng X, Li W, Xiong J, Wang L, Zomaya AY, Longo A. DeFusionNET: Defocus Blur Detection via Recurrently Fusing and Refining Discriminative Multi-Scale Deep Features. IEEE Transactions on Pattern Analysis and Machine Intelligence 2022; 44:955-968. [PMID: 32759080] [DOI: 10.1109/tpami.2020.3014629]
Abstract
Although great success has been achieved in image defocus blur detection, several challenges remain unsolved, e.g., interference from background clutter, scale sensitivity, and missing boundary details of blur regions. To deal with these issues, we propose a deep neural network that recurrently fuses and refines multi-scale deep features (DeFusionNet) for defocus blur detection. We first fuse the features from different layers of an FCN into shallow features and semantic features, respectively. The fused shallow features are then propagated to deep layers to refine the details of detected defocus blur regions, while the fused semantic features are propagated to shallow layers to assist in better locating blur regions. The fusion and refinement are carried out recurrently. To narrow the gap between low-level and high-level features, we embed a feature adaptation module before feature propagation to exploit complementary information and reduce the contradictory responses of different feature layers. Since different feature channels have different degrees of discriminative power for detecting blur regions, we design a channel attention module to select discriminative features for feature refinement. Finally, the outputs of each layer at the last recurrent step are fused to obtain the final result. We collect a new dataset consisting of various challenging images and their pixel-wise annotations to promote further study. Extensive experiments on two commonly used datasets and our newly collected one demonstrate both the efficacy and efficiency of DeFusionNet.
8. Honarvar Shakibaei Asli B, Zhao Y, Erkoyuncu JA. Motion blur invariant for estimating motion parameters of medical ultrasound images. Sci Rep 2021; 11:14312. [PMID: 34253807] [PMCID: PMC8275601] [DOI: 10.1038/s41598-021-93636-4]
Abstract
High-quality medical ultrasound imaging is hindered by motion blur, while medical image analysis requires motionless and accurate data acquired by sonographers. The main idea of this paper is to establish motion blur invariants in both the frequency and moment domains to estimate the motion parameters of ultrasound images. We propose a discrete model of the point spread function of motion blur convolution, based on the Dirac delta function, to simplify the analysis of motion invariants in the frequency and moment domains. This model paves the way for estimating the motion angle and length in terms of the proposed invariant features. In this research, the performance of the proposed schemes is compared with other state-of-the-art image deblurring methods. The experimental study is performed using fetal phantom images, clinical fetal ultrasound images, and breast scans. Moreover, to validate the accuracy of the proposed experimental framework, we apply two image quality assessment methods, no-reference and full-reference, to show the robustness of the proposed algorithms compared with well-known approaches.
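The discrete line-kernel view of a motion-blur PSF can be sketched as a sum of shifted unit impulses along the motion direction (a simplified illustration: integer rounding onto the pixel grid with no sub-pixel interpolation, which the paper's Dirac delta model treats more carefully):

```python
import numpy as np

def motion_psf(length, angle_deg, size):
    """Discrete motion-blur PSF: `length` unit impulses placed along
    direction `angle_deg`, centred in a size x size kernel, unit sum."""
    psf = np.zeros((size, size))
    centre = size // 2
    theta = np.deg2rad(angle_deg)
    for i in range(length):
        d = i - (length - 1) / 2.0           # offset along the motion path
        row = centre + int(round(d * np.sin(theta)))
        col = centre + int(round(d * np.cos(theta)))
        psf[row, col] = 1.0
    return psf / psf.sum()

kernel = motion_psf(length=7, angle_deg=0.0, size=15)
# Horizontal motion: all mass lies in the central row.
assert abs(kernel.sum() - 1.0) < 1e-12
assert np.count_nonzero(kernel[15 // 2]) == 7
```

Convolving an image with such a kernel simulates the blur whose angle and length the invariant features are designed to recover.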
Affiliation(s)
- Barmak Honarvar Shakibaei Asli: Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK; Institute of Information Theory and Automation, Czech Academy of Sciences, Pod vodárenskou věží 4, 18208, Prague 8, Czech Republic
- Yifan Zhao: Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
- John Ahmet Erkoyuncu: Centre for Life-Cycle Engineering and Management, School of Aerospace, Transport and Manufacturing, Cranfield University, Cranfield, Bedfordshire, MK43 0AL, UK
9. Zia A, Zhou J, Gao Y. Exploring Chromatic Aberration and Defocus Blur for Relative Depth Estimation From Monocular Hyperspectral Image. IEEE Transactions on Image Processing 2021; 30:4357-4370. [PMID: 33848246] [DOI: 10.1109/tip.2021.3071682]
Abstract
This article investigates spectral chromatic and spatial defocus aberrations in a monocular hyperspectral image (HSI) and proposes methods for using these cues for relative depth estimation. The main aim of this work is to develop a framework that explores intrinsic and extrinsic reflectance properties of HSI useful for depth estimation. Depth estimation from a monocular image is a challenging task, and the low resolution and noise of hyperspectral data add a further level of difficulty. Our contribution to depth estimation in HSI is threefold. First, we propose that the change in focus across the band images of an HSI, caused by chromatic aberration and band-wise defocus blur, can be integrated for depth estimation; novel methods are developed to estimate sparse depth maps based on different integration models. Second, by adopting manifold learning, an effective objective function is developed to combine all sparse depth maps into a final optimized sparse depth map. Last, a new dense depth map generation approach is proposed, which extrapolates sparse depth cues using material-based properties on the graph Laplacian. Experimental results show that our methods successfully exploit HSI properties to generate depth cues. We also compare our method with state-of-the-art RGB image-based approaches, showing that our methods produce better sparse and dense depth maps than the benchmark methods.
10. Xiao X, Yang F, Sadovnik A. MSDU-Net: A Multi-Scale Dilated U-Net for Blur Detection. Sensors (Basel) 2021; 21:1873. [PMID: 33800173] [PMCID: PMC7962445] [DOI: 10.3390/s21051873]
Abstract
Blur detection, which aims to separate the blurred and clear regions of an image, is widely used in many important computer vision tasks such as object detection, semantic segmentation, and face recognition, and has attracted increasing attention from researchers and industry in recent years. To improve the quality of the separation, many researchers have spent enormous effort on extracting features from various scales of images. However, how to extract blur features and fuse them synchronously remains a major challenge. In this paper, we regard blur detection as an image segmentation problem. Inspired by the success of the U-Net architecture for image segmentation, we propose a multi-scale dilated convolutional neural network called MSDU-net. In this model, we design a group of multi-scale feature extractors with dilated convolutions to extract texture information at different scales simultaneously. The U-shaped architecture of the MSDU-net fuses the different-scale texture features with the generated semantic features to support the segmentation task. We conduct extensive experiments on two classic public benchmark datasets and show that the MSDU-net outperforms other state-of-the-art blur detection approaches.
Affiliation(s)
- Xiao Xiao: School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Fan Yang: School of Telecommunications Engineering, Xidian University, Xi’an 710071, China
- Amir Sadovnik: Department of Electrical Engineering & Computer Science, The University of Tennessee, Knoxville, TN 37996, USA
11. Hu Y, Liu Z, Yang D, Quan C. Online fringe pitch selection for defocusing a binary square pattern projection phase-shifting method. Optics Express 2020; 28:30710-30725. [PMID: 33115066] [DOI: 10.1364/oe.409046]
Abstract
A three-dimensional (3D) shape measurement system using defocused binary fringe projection can perform high-speed and flexible measurements. In this technology, determining the fringe pitch that matches the current projection's defocus amount is of great significance for accurate measurement. In this paper, we propose an online binary fringe pitch selection framework. First, by analyzing the fringe images captured by the camera, the defocus amount of the projection is obtained. Next, based on an analysis of the harmonic error and camera noise, we establish a mathematical model of the normalized phase error. The fringe pitch that minimizes this normalized phase error is then selected as the optimal fringe pitch for subsequent measurements, leading to more accurate and robust results. Compared with current methods, ours does not require offline defocus-distance calibration, yet achieves the same effect as the offline calibration method while being more flexible and efficient. Our experiments validate the effectiveness and practicability of the proposed method.
12. Zhao W, Zhao F, Wang D, Lu H. Defocus Blur Detection via Multi-Stream Bottom-Top-Bottom Network. IEEE Transactions on Pattern Analysis and Machine Intelligence 2020; 42:1884-1897. [PMID: 30908190] [DOI: 10.1109/tpami.2019.2906588]
Abstract
Defocus blur detection (DBD) aims to estimate the probability of each pixel being in-focus or out-of-focus. This task has received considerable attention due to its remarkable potential applications. Accurate differentiation of homogeneous regions, detection of low-contrast focal regions, and suppression of background clutter are the main challenges associated with DBD. To address these issues, we propose a multi-stream bottom-top-bottom fully convolutional network (BTBNet), which is the first attempt to develop an end-to-end deep network for the DBD problem. First, we develop a fully convolutional BTBNet to gradually integrate nearby feature levels from bottom to top and top to bottom. Then, considering that the degree of defocus blur is sensitive to scale, we propose multi-stream BTBNets that handle input images at different scales to improve DBD performance. Finally, a cascaded DBD map residual learning architecture is designed to gradually restore finer structures from the small scale to the large scale. To promote further study and evaluation of DBD models, we construct a new database of 1100 challenging images with pixel-wise defocus blur annotations. Experimental results on the existing datasets and our new one demonstrate that the proposed method achieves significantly better performance than other state-of-the-art algorithms.
13. Zeng K, Wang Y, Mao J, Liu J, Peng W, Chen N. A Local Metric for Defocus Blur Detection Based on CNN Feature Learning. IEEE Transactions on Image Processing 2019; 28:2107-2115. [PMID: 30452362] [DOI: 10.1109/tip.2018.2881830]
Abstract
Defocus blur detection is an important and challenging task in computer vision and digital imaging. Previous work on defocus blur detection has put much effort into hand-designing local sharpness metric maps. This paper presents a simple yet effective method to obtain the local metric map for defocus blur detection automatically, based on feature learning with multiple convolutional neural networks (ConvNets). The ConvNets learn the most locally relevant features at the super-pixel level of the image in a supervised manner. By extracting convolution kernels from the trained network structures and processing them with principal component analysis, we can automatically obtain the local sharpness metric by reshaping the principal component vector. Meanwhile, an effective iterative updating mechanism is proposed to refine the defocus blur detection result from coarse to fine by exploiting the intrinsic peculiarity of the hyperbolic tangent function. The experimental results demonstrate that our proposed method consistently performs better than previous state-of-the-art methods.
14. Analysis of Blur Measure Operators for Single Image Blur Segmentation. Applied Sciences 2018. [DOI: 10.3390/app8050807]
15. Karaali A, Jung CR. Edge-Based Defocus Blur Estimation With Adaptive Scale Selection. IEEE Transactions on Image Processing 2018; 27:1126-1137. [PMID: 29220316] [DOI: 10.1109/tip.2017.2771563]
Abstract
Objects that do not lie at the focal distance of a digital camera generate defocused regions in the captured image. This paper presents a new edge-based method for spatially varying defocus blur estimation using a single image based on reblurred gradient magnitudes. The proposed approach initially computes a scale-consistent edge map of the input image and selects a local reblurring scale aiming to cope with noise, edge mis-localization, and interfering edges. An initial blur estimate is computed at the detected scale-consistent edge points and a novel connected edge filter is proposed to smooth the sparse blur map based on pixel connectivity within detected edge contours. Finally, a fast guided filter is used to propagate the sparse blur map through the whole image. Experimental results show that the proposed approach presents a very good compromise between estimation error and running time when compared with the state-of-the-art methods. We also explore our blur estimation method in the context of image deblurring, and show that metrics typically used to evaluate blur estimation may not correlate as expected with the visual quality of the deblurred image.
16. Yu X, Zhao X, Sui Y, Zhang L. Handling noise in single image defocus map estimation by using directional filters. Optics Letters 2014; 39:6281-6284. [PMID: 25361334] [DOI: 10.1364/ol.39.006281]
Abstract
State-of-the-art defocus map estimation methods are sensitive to image noise: even a small amount of noise can degrade the estimated defocus map dramatically. However, directly applying image denoising methods often changes edge profiles, leading to inaccurate defocus estimation. In this Letter, we propose a new method for estimating a defocus map from a noisy image. We observe that after applying a directional low-pass filter to an input image, noise is greatly reduced while the edges orthogonal to the filter direction are well preserved. Based on this observation, we apply a series of directional filters at different orientations and estimate the blur amount at the edges orthogonal to the filter direction in each filtered image. To obtain a full defocus map, we propagate the blur amounts estimated at edges to the entire image with an edge-aware interpolation method. Experimental results on synthetic and real data demonstrate that our method estimates defocus maps better than state-of-the-art approaches.
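The key observation, that smoothing along an edge removes noise without disturbing the profile across the edge, can be checked with a small numpy sketch (a toy vertical-edge example with arbitrary noise level and kernel width, not the authors' code):

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at about 4 sigma."""
    radius = int(4 * sigma) + 1
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[:, 32:] = 1.0                        # vertical step edge
noisy = clean + 0.1 * rng.standard_normal(clean.shape)

# Directional low-pass: smooth each column (along the edge direction),
# leaving the horizontal profile across the edge untouched.
k = gaussian_kernel(2.0)
filtered = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, noisy)

interior = slice(12, 52)                   # skip convolution boundary rows
noise_before = noisy[interior, :16].std()
noise_after = filtered[interior, :16].std()
assert noise_after < noise_before          # noise is suppressed...
edge_jump = filtered[interior, 40].mean() - filtered[interior, 24].mean()
assert edge_jump > 0.9                     # ...while the edge contrast survives
```

Repeating this with the 1-D kernel oriented at several angles, and estimating blur only at the edges orthogonal to each filter, mirrors the multi-orientation scheme the abstract describes.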
17. Image deconvolution by means of frequency blur invariant concept. The Scientific World Journal 2014; 2014:951842. [PMID: 25202743] [PMCID: PMC4147381] [DOI: 10.1155/2014/951842]
Abstract
Different blur invariant descriptors have been proposed so far, either in the spatial domain or based on properties available in the moment domain. In this paper, a frequency-domain framework is proposed to develop blur invariant features that are used to deconvolve an image degraded by Gaussian blur. The descriptors are obtained by establishing an equivalent relationship between the Fourier transforms of the blurred and original images, each normalized by its value at a fixed frequency. An advantage of the proposed invariant descriptors is that both the point spread function (PSF) and the original image can be estimated. The performance of the frequency invariants is demonstrated through experiments, and image deconvolution is presented as an additional application to verify the proposed blur invariant features.