1
Hong H, Zuo Z, Shi Y, Hua X, Xiong L, Zhang Y, Zhang T. Adaptive anisotropic pixel-by-pixel correction method for a space-variant degraded image. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2023; 40:1686-1697. [PMID: 37707005] [DOI: 10.1364/josaa.490150]
Abstract
Large field-of-view optical imaging systems often face challenges in the presence of space-variant degradation, which makes target detection and recognition difficult or even unsuccessful. To address this issue, this paper proposes an adaptive anisotropic pixel-by-pixel space-variant correction method. First, the regions for estimating local space-variant point spread functions (PSFs) are selected based on the Haar wavelet degradation-degree distribution, and an initial PSF matrix is obtained by inverse-distance-weighted spatial interpolation. Second, a pixel-by-pixel space-variant correction model is established based on the PSF matrix. Third, adaptive sparse Haar wavelet regularization terms, built on an adaptive anisotropic iterative reweighting strategy, together with non-negativity regularization terms, are imposed as constraints in the pixel-by-pixel space-variant correction model. Finally, since the correction process is refined to each pixel, a split-Bregman multivariate separation algorithm is employed to estimate the final PSF matrix and the gray value of each pixel. In this way, "whole-image correction" and "block correction" are avoided, "pixel-by-pixel correction" is realized, and the final corrected images are obtained. Experimental results show that, compared with current advanced correction methods, the proposed approach better preserves image details and texture information in space-variant wide-field correction of degraded images.
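As a concrete illustration of one ingredient mentioned above, the sketch below shows a generic inverse-distance-weighted interpolation of locally estimated PSFs onto arbitrary pixel positions. It is a minimal sketch under assumed array shapes and parameter names (centers, psfs, power), not the authors' implementation, which additionally selects the estimation regions from the Haar wavelet degradation-degree map.

```python
import numpy as np

def idw_psf_field(centers, psfs, query_xy, power=2.0, eps=1e-8):
    """Inverse-distance-weighted interpolation of locally estimated PSFs.

    centers : (M, 2) array of (x, y) positions where local PSFs were estimated
    psfs    : (M, h, w) array of the corresponding local PSFs
    query_xy: (N, 2) array of pixel positions that need an interpolated PSF
    Returns an (N, h, w) array of interpolated, renormalized PSFs.
    """
    d = np.linalg.norm(query_xy[:, None, :] - centers[None, :, :], axis=-1)   # (N, M) distances
    w = 1.0 / (d**power + eps)                                                # inverse-distance weights
    w /= w.sum(axis=1, keepdims=True)
    field = np.einsum('nm,mhw->nhw', w, psfs)                                 # weighted blend of PSFs
    return field / field.sum(axis=(1, 2), keepdims=True)                      # keep each PSF normalized
```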
2
Xu C, Zhang C, Ma M, Zhang J. Blind image deconvolution via an adaptive weighted TV regularization. Journal of Intelligent & Fuzzy Systems 2023. [DOI: 10.3233/jifs-223828]
Abstract
Blind image deconvolution has attracted growing attention in image processing and computer vision. Total variation (TV) regularization can effectively preserve image edges; however, because it lacks self-adaptability, it does not perform well when restoring images with complex structures. In this paper, we propose a new blind image deconvolution model using an adaptively weighted TV regularization, which better handles local features of the image. Numerically, we design an effective alternating direction method of multipliers (ADMM) to solve this non-smooth model. Experimental results illustrate the superiority of the proposed method over other related blind deconvolution methods.
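The sketch below illustrates one common way such an adaptive weight can be built, down-weighting the TV penalty near strong gradients so edges are smoothed less than flat regions. The function name, the weight formula, and the parameter alpha are assumptions for illustration, not the authors' exact weighting or their ADMM solver.

```python
import numpy as np

def adaptive_tv_weights(u, alpha=10.0, eps=1e-8):
    """Edge-aware weights w(x) = 1 / (1 + alpha*|grad u(x)|): weak smoothing at edges, strong in flat areas."""
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences, replicated last column
    gy = np.diff(u, axis=0, append=u[-1:, :])
    gmag = np.sqrt(gx**2 + gy**2 + eps)
    return 1.0 / (1.0 + alpha * gmag)

# Inside an ADMM or gradient scheme the regularizer becomes sum_x w(x)*|grad u(x)|,
# with the weights typically recomputed from the current estimate every few outer iterations.
```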
Affiliation(s)
- Chenguang Xu: Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Chao Zhang: Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Mingxi Ma: College of Science, Nanchang Institute of Technology, Nanchang, Jiangxi, China
- Jun Zhang: College of Science, Nanchang Institute of Technology, Nanchang, Jiangxi, China; Jiangxi Province Key Laboratory of Water Information Cooperative Sensing and Intelligent Processing, Nanchang Institute of Technology, Nanchang, Jiangxi, China
3
Fang S, Zhuang S, Li X, Li H. Automated release rate inversion and plume bias correction for atmospheric radionuclide leaks: A robust and general remediation to imperfect radionuclide transport modeling. Science of the Total Environment 2021; 754:142140. [PMID: 33254859] [DOI: 10.1016/j.scitotenv.2020.142140]
Abstract
The release rate is vital in assessing the international environmental risk of atmospheric radionuclide leaks, and it usually can only be obtained through inversion. However, such inversion is vulnerable to the inevitable plume biases in radionuclide transport modeling, leading to inaccurate estimates and risk assessment. This paper describes an automated method that estimates the release rate while comprehensively correcting plume biases, including both the plume range and transport pattern. The spatial correlation of predictions is used to simplify the difficult task of direct plume adjustment to that of tuning the predictions inside a correlation-adjusted plume. An ensemble-based algorithm is proposed to automatically calculate the spatial correlation. The proposed method is validated using two radionuclide transport models with mild and severe plume biases and data from two wind tunnel experiments, and its performance is compared with that of the standard approach and a recent state-of-the-art method. The results demonstrate that our method corrects the plume biases with high accuracy (Pearson's Correlation Coefficient = 1.0000, Normalized Mean Square Error ≤ 1.03 × 10⁻³) and reduces the estimation error by nearly two orders of magnitude at best. The proposed approach achieves near-optimal performance with fully automated parameterization, keeping the lowest error levels in our validation cases for various measurement sets.
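For orientation, the sketch below shows the plain Tikhonov-regularized least-squares inversion y ≈ Mq that such source-term estimation methods start from; the plume-bias correction and ensemble-based correlation step described in the abstract are not reproduced, and the matrix and parameter names are illustrative.

```python
import numpy as np

def tikhonov_release_rate(M, y, lam=1e-2):
    """Estimate the release-rate vector q from measurements y ~ M q:
       q = argmin ||M q - y||^2 + lam*||q||^2, clipped to be non-negative."""
    A = M.T @ M + lam * np.eye(M.shape[1])
    q = np.linalg.solve(A, M.T @ y)
    return np.maximum(q, 0.0)   # physical release rates cannot be negative
```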
Affiliation(s)
- Sheng Fang: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Shuhan Zhuang: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Xinpeng Li: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China; School of Nuclear Science and Engineering, North China Electric Power University, Beijing 102206, China
- Hong Li: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
4
Jumbo OE, Asfour S, Sayed AM, Abdel-Mottaleb M. Correcting Higher Order Aberrations Using Image Processing. IEEE Transactions on Image Processing 2021; 30:2276-2287. [PMID: 33471764] [DOI: 10.1109/tip.2021.3051499]
Abstract
Higher Order Aberrations (HOAs) are complex refractive errors in the human eye that cannot be corrected by regular lens systems. Researchers have developed numerous approaches to analyze the effect of these refractive errors; the most popular among these approaches use Zernike polynomial approximation to describe the shape of the wavefront of light exiting the pupil after it has been altered by the refractive errors. We use this wavefront shape to create a linear imaging system that simulates how the eye perceives source images at the retina. With phase information from this system, we create a second linear imaging system to modify source images so that they would be perceived by the retina without distortion. By modifying source images, the visual process cascades two optical systems before the light reaches the retina, a technique that counteracts the effect of the refractive errors. While our method effectively compensates for distortions induced by HOAs, it also introduces blurring and loss of contrast; a problem that we address with Total Variation Regularization. With this technique, we optimize source images so that they are perceived at the retina as close as possible to the original source image. To measure the effectiveness of our methods, we compute the Euclidean error between the source images and the images perceived at the retina. When comparing our results with existing corrective methods that use deconvolution and total variation regularization, we achieve an average of 50% reduction in error with lower computational costs.
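A minimal sketch of the standard Fourier-optics step implied here: a pupil-plane wavefront (e.g., assembled from Zernike terms) is turned into an incoherent PSF, with which a "perceived" image can be simulated by convolution. Sampling, wavelength scaling, and the function names are assumptions for illustration, not the authors' code.

```python
import numpy as np

def psf_from_wavefront(phase, pupil_mask):
    """Incoherent PSF = |FT(pupil * exp(i*phase))|^2, normalized to unit sum.
       phase and pupil_mask are 2-D arrays sampled on the pupil plane (same shape)."""
    field = pupil_mask * np.exp(1j * phase)
    amp = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    psf = np.abs(amp)**2
    return psf / psf.sum()

def perceive(image, psf):
    """Simulate the retinal image as a convolution with the PSF (periodic boundaries).
       The PSF is assumed to be centered and resampled onto the same grid as the image."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))))
```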
5
Li L, Lu W, Tan S. Variational PET/CT Tumor Co-segmentation Integrated with PET Restoration. IEEE Transactions on Radiation and Plasma Medical Sciences 2020; 4:37-49. [PMID: 32939423] [DOI: 10.1109/trpms.2019.2911597]
Abstract
PET and CT are widely used imaging modalities in radiation oncology. PET imaging has high contrast but blurry tumor edges due to its limited spatial resolution, while CT imaging has high resolution but low contrast between tumors and soft normal tissues. Tumor segmentation from either a single PET or CT image is difficult. It is known that co-segmentation methods utilizing the complementary information between PET and CT can improve segmentation accuracy. This information can be either consistent or inconsistent at the image level, and correctly localizing tumor edges in the presence of inconsistent information is a major challenge for co-segmentation methods. In this study, we proposed a novel variational method for tumor co-segmentation in PET/CT, with a fusion strategy specifically designed to handle the information inconsistency between PET and CT in an adaptive way: the method can automatically decide which modality should be trusted more when PET and CT disagree on the tumor boundary. The proposed method was constructed based on the Γ-convergence approximation of the Mumford-Shah (MS) segmentation model. A PET restoration process was integrated into the co-segmentation process, which further reduces the uncertainty in tumor segmentation introduced by blurred tumor edges in PET. The performance of the proposed method was validated on a test dataset of fifty non-small cell lung cancer patients. Experimental results demonstrated that the proposed method had high accuracy for PET/CT co-segmentation and PET restoration, and could also accurately estimate the blur kernel of the PET scanner. For complex images in which the tumors exhibit fluorodeoxyglucose (FDG) uptake inhomogeneity or even invade adjacent soft normal tissues, the proposed method can still segment the tumors accurately. It achieved an average dice similarity index (DSI) of 0.85 ± 0.06, volume error (VE) of 0.09 ± 0.08, and classification error (CE) of 0.31 ± 0.13.
Affiliation(s)
- Laquan Li: Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan: Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
7
Kang Y, Mukherjee PS, Qiu P. Efficient Blind Image Deblurring Using Nonparametric Regression and Local Pixel Clustering. Technometrics 2018. [DOI: 10.1080/00401706.2017.1415975]
Affiliation(s)
- Yicheng Kang: Department of Mathematical Sciences, Bentley University, Waltham, MA
- Peihua Qiu: Department of Biostatistics, University of Florida, Gainesville, FL
8
Li X, Li H, Liu Y, Xiong W, Fang S. Joint release rate estimation and measurement-by-measurement model correction for atmospheric radionuclide emission in nuclear accidents: An application to wind tunnel experiments. Journal of Hazardous Materials 2018; 345:48-62. [PMID: 29128726] [DOI: 10.1016/j.jhazmat.2017.09.051]
Abstract
The release rate of atmospheric radionuclide emissions is a critical factor in the emergency response to nuclear accidents. However, there are unavoidable biases in radionuclide transport models, leading to inaccurate estimates. In this study, a method that simultaneously corrects these biases and estimates the release rate is developed. Our approach provides a more complete measurement-by-measurement correction of the biases with a coefficient matrix that considers both deterministic and stochastic deviations. This matrix and the release rate are jointly solved by the alternating minimization algorithm. The proposed method is generic because it does not rely on specific features of transport models or scenarios. It is validated against wind tunnel experiments that simulate accidental releases in a heterogeneous and densely built nuclear power plant site. The sensitivities to the position, number, and quality of measurements and extendibility of the method are also investigated. The results demonstrate that this method effectively corrects the model biases, and therefore outperforms Tikhonov's method in both release rate estimation and model prediction. The proposed approach is robust to uncertainties and extendible with various center estimators, thus providing a flexible framework for robust source inversion in real accidents, even if large uncertainties exist in multiple factors.
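A heavily simplified sketch of the kind of alternating update the abstract describes, with a per-measurement correction factor c and a release-rate vector q fitted to y ≈ diag(c)·M·q; the paper's actual coefficient model, regularization, and center estimators are not reproduced, and all function and parameter names are illustrative.

```python
import numpy as np

def alternating_correction(M, y, lam_q=1e-2, lam_c=1.0, iters=50):
    """Alternately solve for release rates q and per-measurement corrections c in y ~ diag(c) M q."""
    n, m = M.shape
    c = np.ones(n)
    q = np.zeros(m)
    for _ in range(iters):
        A = c[:, None] * M                                   # currently corrected model matrix
        q = np.linalg.solve(A.T @ A + lam_q * np.eye(m), A.T @ y)
        q = np.maximum(q, 0.0)                               # non-negative release rates
        p = M @ q                                            # uncorrected predictions
        c = (p * y + lam_c) / (p * p + lam_c)                # per-measurement ridge fit, pulled toward 1
    return q, c
```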
Affiliation(s)
- Xinpeng Li: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Hong Li: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Yun Liu: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Wei Xiong: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
- Sheng Fang: Institute of Nuclear and New Energy Technology, Collaborative Innovation Centre of Advanced Nuclear Energy Technology, Key Laboratory of Advanced Reactor Engineering and Safety of Ministry of Education, Tsinghua University, Beijing 100084, China
9
Li L, Wang J, Lu W, Tan S. Simultaneous Tumor Segmentation, Image Restoration, and Blur Kernel Estimation in PET Using Multiple Regularizations. Computer Vision and Image Understanding 2017; 155:173-194. [PMID: 28603407] [PMCID: PMC5463621] [DOI: 10.1016/j.cviu.2016.10.002]
Abstract
Accurate tumor segmentation from PET images is crucial in many radiation oncology applications. Among others, the partial volume effect (PVE) is recognized as one of the most important factors degrading imaging quality and segmentation accuracy in PET. Taking into account that image restoration and tumor segmentation are tightly coupled and can promote each other, we proposed a variational method to solve both problems simultaneously in this study. The proposed method integrated total variation (TV) semi-blind deconvolution and Mumford-Shah segmentation with multiple regularizations. Unlike many existing energy minimization methods using either TV or L2 regularization, the proposed method employed TV regularization over tumor edges to preserve edge information, and L2 regularization inside tumor regions to preserve the smooth change of the metabolic uptake in a PET image. The blur kernel was modeled as anisotropic Gaussian to address the resolution difference in the transverse and axial directions commonly seen in a clinical PET scanner. The energy functional was rephrased using the Γ-convergence approximation and was iteratively optimized using the alternating minimization (AM) algorithm. The performance of the proposed method was validated on a physical phantom and two clinical datasets with non-Hodgkin's lymphoma and esophageal cancer, respectively. Experimental results demonstrated that the proposed method had high performance for simultaneous image restoration, tumor segmentation, and scanner blur kernel estimation. In particular, the recovery coefficients (RC) of the restored images in the phantom study were close to 1, indicating efficient recovery from the blurring; for segmentation, the proposed method achieved average dice similarity indexes (DSIs) of 0.79 and 0.80 for the two clinical datasets, respectively; and the relative errors of the estimated blur kernel widths were less than 19% in the transverse direction and 7% in the axial direction.
Affiliation(s)
- Laquan Li: Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Jian Wang: Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
- Wei Lu: Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, Maryland 21201, USA; Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York 10065, USA
- Shan Tan: Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China
10
Xia Y, Leung H, Kamel MS. A discrete-time learning algorithm for image restoration using a novel L2-norm noise constrained estimation. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.06.111]
11
Perrone D, Favaro P. A Clearer Picture of Total Variation Blind Deconvolution. IEEE Transactions on Pattern Analysis and Machine Intelligence 2016; 38:1041-1055. [PMID: 26372205] [DOI: 10.1109/tpami.2015.2477819]
Abstract
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort on understanding the basic mechanisms to solve blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the top performing algorithms.
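To make the object of this analysis concrete, here is a toy alternating gradient scheme for the TV blind deconvolution energy 0.5·||k*u − f||² + λ·TV(u), with the kernel projected onto the simplex (non-negative, sum one) after each step. It assumes periodic boundaries, images scaled to [0, 1], and illustrative step sizes; it is not the authors' algorithm, whose analysis concerns exact alternating minimization and delayed kernel normalization.

```python
import numpy as np

def pad_kernel(k, shape):
    """Zero-pad kernel k to the image shape and place its center at index (0, 0) (periodic convention)."""
    K = np.zeros(shape)
    kh, kw = k.shape
    K[:kh, :kw] = k
    return np.roll(K, (-(kh // 2), -(kw // 2)), axis=(0, 1))

def tv_grad(u, eps=1e-3):
    """Gradient of the smoothed isotropic TV term with periodic forward differences."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps**2)
    px, py = ux / mag, uy / mag
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def blind_tv_am(f, ksize=9, lam=2e-4, iters=200, step_u=0.5, step_k=1e-2):
    """Toy alternating gradient scheme for min_{u,k} 0.5*||k*u - f||^2 + lam*TV(u)."""
    u = f.copy()
    k = np.ones((ksize, ksize)) / ksize**2                 # start from a flat kernel
    F = np.fft.fft2
    iF = lambda X: np.real(np.fft.ifft2(X))
    for _ in range(iters):
        Kp = F(pad_kernel(k, f.shape))
        r = iF(Kp * F(u)) - f                              # residual k*u - f
        u = u - step_u * (iF(np.conj(Kp) * F(r)) + lam * tv_grad(u))
        Kp = F(pad_kernel(k, f.shape))
        r = iF(Kp * F(u)) - f
        gk_full = iF(np.conj(F(u)) * F(r))                 # gradient w.r.t. the padded kernel
        gk = np.roll(gk_full, (ksize // 2, ksize // 2), axis=(0, 1))[:ksize, :ksize]
        k = k - step_k * gk / (np.abs(gk).max() + 1e-12)   # normalized step; sizes are illustrative
        k = np.maximum(k, 0)
        k /= k.sum()                                       # project onto the simplex (nonneg, sum = 1)
    return u, k
```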
12
Faramarzi E, Rajan D, Fernandes FCA, Christensen MP. Blind Super Resolution of Real-Life Video Sequences. IEEE Transactions on Image Processing 2016; 25:1544-1555. [PMID: 26849862] [DOI: 10.1109/tip.2016.2523344]
Abstract
Super resolution (SR) for real-life video sequences is a challenging problem due to the complex nature of the motion fields. In this paper, a novel blind SR method is proposed to improve the spatial resolution of video sequences while the overall point spread function of the imaging system, the motion fields, and the noise statistics are unknown. To estimate the blur(s), a nonuniform interpolation SR method is first utilized to upsample the frames, and the blur(s) are then estimated through a multi-scale process. The blur estimation process is initially performed on a few emphasized edges and gradually on more edges as the iterations continue. Also, for faster convergence, the blur is estimated in the filter domain rather than the pixel domain. The high-resolution frames are estimated using a cost function with fidelity and regularization terms of the Huber-Markov random field type to preserve edges and fine details. The fidelity term is adaptively weighted at each iteration using a masking operation to suppress artifacts due to inaccurate motions. Very promising results are obtained for real-life videos containing detailed structures, complex motions, fast-moving objects, deformable regions, or severe brightness changes. The proposed method outperforms the state of the art in all experiments performed, in both subjective and objective evaluations. The results are available online at http://lyle.smu.edu/~rajand/Video_SR/.
13
Liu H, Zhang Z, Liu S, Yan L, Liu T, Zhang T. Joint baseline-correction and denoising for Raman spectra. Applied Spectroscopy 2015; 69:1013-1022. [PMID: 26688879] [DOI: 10.1366/14-07760]
Abstract
Laser instruments often suffer from baseline drift and random noise, which greatly degrade spectral quality. In this article, we propose a variational model that combines baseline correction and denoising. First, to guide the baseline estimation, morphological operations are adopted to extract the characteristics of the degraded spectrum. Second, to suppress noise in both the spectrum and the baseline, Tikhonov regularization is introduced. Moreover, we describe an efficient optimization scheme that alternates between latent spectrum estimation and baseline correction until convergence. The major novelty of the proposed algorithm is that it estimates a smooth spectrum and removes the baseline simultaneously. Results of a comparison with state-of-the-art methods demonstrate that the proposed method outperforms them in both qualitative and quantitative assessments.
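The sketch below shows the two generic building blocks named in the abstract: a morphological opening as a rough baseline estimate and a Tikhonov-type second-difference smoother. Window sizes and the regularization weight are illustrative assumptions, and the paper's joint alternating model is only indicated in the closing comment.

```python
import numpy as np
from scipy.ndimage import grey_opening

def rough_baseline(spectrum, window=101):
    """Morphological opening removes narrow peaks and leaves a rough baseline estimate."""
    return grey_opening(spectrum, size=window)

def tikhonov_smooth(signal, lam=100.0):
    """Smooth a 1-D signal by solving (I + lam * D'D) s = signal, with D the second-difference operator."""
    n = len(signal)
    D = np.diff(np.eye(n), n=2, axis=0)            # (n-2, n) second-difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, signal)

# A crude joint scheme alternates: estimate the baseline from the current residual spectrum,
# smooth it, subtract it, denoise the remainder, and repeat until both estimates stabilize.
```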
Affiliation(s)
- Hai Liu: Central China Normal University, National Engineering Research Center for E-Learning, Wuhan, Hubei 430079, China
14
van Gennip Y, Athavale P, Gilles J, Choksi R. A Regularization Approach to Blind Deblurring and Denoising of QR Barcodes. IEEE Transactions on Image Processing 2015; 24:2864-2873. [PMID: 25974935] [DOI: 10.1109/tip.2015.2432675]
Abstract
QR bar codes are prototypical images for which part of the image is a priori known (required patterns). Open source bar code readers, such as ZBar, are readily available. We exploit both these facts to provide and assess purely regularization-based methods for blind deblurring of QR bar codes in the presence of noise.
15
A New Study of Blind Deconvolution with Implicit Incorporation of Nonnegativity Constraints. 2015. [DOI: 10.1155/2015/860263]
Abstract
The inverse problem of image restoration to remove noise and blur in an observed image was extensively studied in the last two decades. For the case of a known blurring kernel (or a known blurring type such as out of focus or Gaussian blur), many effective models and efficient solvers exist. However when the underlying blur is unknown, there have been fewer developments for modelling the so-called blind deblurring since the early works of You and Kaveh (1996) and Chan and Wong (1998). A major challenge is how to impose the extra constraints to ensure quality of restoration. This paper proposes a new transform based method to impose the positivity constraints automatically and then two numerical solution algorithms. Test results demonstrate the effectiveness and robustness of the proposed method in restoring blurred images.
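One standard transform that imposes positivity implicitly is the substitution u = z², after which any gradient-based solver works in z via the chain rule. The sketch below is a minimal illustration of that idea under this assumed substitution; it is not necessarily the transform used in the paper.

```python
import numpy as np

def grad_in_z(z, grad_E_of_u):
    """If u = z**2, then dE/dz = 2 * z * dE/du evaluated at u = z**2 (chain rule)."""
    u = z**2
    return 2.0 * z * grad_E_of_u(u)

# Gradient descent on z keeps u = z**2 >= 0 automatically, with no explicit projection step.
```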
16
Hintermüller M, Wu T. Bilevel optimization for calibrating point spread functions in blind deconvolution. Inverse Problems and Imaging 2015. [DOI: 10.3934/ipi.2015.9.1139]
17
Hanocka R, Kiryati N. Progressive Blind Deconvolution. Computer Analysis of Images and Patterns 2015. [DOI: 10.1007/978-3-319-23117-4_27]
18
Blind Restoration of Remote Sensing Images by a Combination of Automatic Knife-Edge Detection and Alternating Minimization. Remote Sensing 2014. [DOI: 10.3390/rs6087491]
19
Faramarzi E, Rajan D, Christensen MP. Unified blind method for multi-image super-resolution and single/multi-image blur deconvolution. IEEE Transactions on Image Processing 2013; 22:2101-2114. [PMID: 23314775] [DOI: 10.1109/tip.2013.2237915]
Abstract
This paper presents, for the first time, a unified blind method for multi-image super-resolution (MISR or SR), single-image blur deconvolution (SIBD), and multi-image blur deconvolution (MIBD) of low-resolution (LR) images degraded by linear space-invariant (LSI) blur, aliasing, and additive white Gaussian noise (AWGN). The proposed approach is based on alternating minimization (AM) of a new cost function with respect to the unknown high-resolution (HR) image and blurs. The regularization term for the HR image is based upon the Huber-Markov random field (HMRF) model, which is a type of variational integral that exploits the piecewise smooth nature of the HR image. The blur estimation process is supported by an edge-emphasizing smoothing operation, which improves the quality of blur estimates by enhancing strong soft edges toward step edges, while filtering out weak structures. The parameters are updated gradually so that the number of salient edges used for blur estimation increases at each iteration. For better performance, the blur estimation is done in the filter domain rather than the pixel domain, i.e., using the gradients of the LR and HR images. The regularization term for the blur is Gaussian (L2 norm), which allows for fast noniterative optimization in the frequency domain. We accelerate the processing time of SR reconstruction by separating the upsampling and registration processes from the optimization procedure. Simulation results on both synthetic and real-life images (from a novel computational imager) confirm the robustness and effectiveness of the proposed method.
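As an illustration of why a Gaussian (L2) blur prior permits a fast non-iterative update, the sketch below gives the closed-form Wiener-type solution of the blur sub-problem for a fixed image estimate under periodic convolution; the paper works with image gradients and an SR observation model, which this sketch omits, and the parameter names are illustrative.

```python
import numpy as np

def l2_blur_update(u, f, lam=1e-2):
    """argmin_k ||k*u - f||^2 + lam*||k||^2 under periodic convolution:
       K_hat = conj(U_hat) * F_hat / (|U_hat|^2 + lam), returned in the spatial domain."""
    U, F = np.fft.fft2(u), np.fft.fft2(f)
    K = np.conj(U) * F / (np.abs(U)**2 + lam)
    k = np.real(np.fft.ifft2(K))
    k = np.maximum(np.fft.fftshift(k), 0)     # center the kernel and clip small negative values
    return k / k.sum()
```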
20
Zhou J, Wu X, Zhang L. ℓ2 restoration of ℓ∞-decoded images via soft-decision estimation. IEEE Transactions on Image Processing 2012; 21:4797-4807. [PMID: 22692907] [DOI: 10.1109/tip.2012.2202672]
Abstract
ℓ∞-constrained image coding is a technique to achieve a substantially lower bit rate than strictly (mathematically) lossless image coding, while still imposing a tight error bound at each pixel. However, this technique becomes inferior in the ℓ2 distortion metric if the bit rate decreases further. In this paper, we propose a new soft decoding approach to reduce the ℓ2 distortion of ℓ∞-decoded images and retain the advantages of both minmax and least-square approximations. The soft decoding is performed in a framework of image restoration that exploits the tight error bounds afforded by ℓ∞-constrained coding and employs a context modeler of quantization errors. Experimental results demonstrate that ℓ∞-constrained hard-decoded images can be restored to gain more than 2 dB in peak signal-to-noise ratio (PSNR), while still retaining tight error bounds on every single pixel. The new soft decoding technique can even outperform JPEG 2000 (a state-of-the-art encoder-optimized image codec) for bit rates higher than 1 bpp, a critical rate region for applications of near-lossless image compression. All the coding gains are made without increasing the encoder complexity, as the heavy computations to gain coding efficiency are delegated to the decoder.
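The key constraint this soft decoding exploits can be enforced by a simple projection: any restored image is clamped back into the per-pixel band guaranteed by the ℓ∞ codec. A minimal sketch, with tau standing for the codec's assumed error bound:

```python
import numpy as np

def project_to_linf_band(restored, hard_decoded, tau):
    """Clamp each restored pixel into [hard_decoded - tau, hard_decoded + tau],
       the band guaranteed by an l_inf-constrained codec with error bound tau."""
    return np.clip(restored, hard_decoded - tau, hard_decoded + tau)
```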
Affiliation(s)
- Jiantao Zhou: Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON, Canada
21
Yan L, Liu H, Zhong S, Fang H. Semi-blind spectral deconvolution with adaptive Tikhonov regularization. Applied Spectroscopy 2012; 66:1334-1346. [PMID: 23146190] [DOI: 10.1366/11-06256]
Abstract
Deconvolution has become one of the most used methods for improving spectral resolution. Deconvolution is an ill-posed problem, especially when the point spread function (PSF) is unknown. Non-blind deconvolution methods use a predefined PSF, but in practice the PSF is not known exactly. Blind deconvolution methods estimate the PSF and spectrum simultaneously from the observed spectra, which become even more difficult in the presence of strong noise. In this paper, we present a semi-blind deconvolution method to improve the spectral resolution that does not assume a known PSF but models it as a parametric function in combination with the a priori knowledge about the characteristics of the instrumental response. First, we construct the energy functional, including Tikhonov regularization terms for both the spectrum and the parametric PSF. Moreover, an adaptive weighting term is devised in terms of the magnitude of the first derivative of spectral data to adjust the Tikhonov regularization for the spectrum. Then we minimize the energy functional to obtain the spectrum and the parameters of the PSF. We also discuss how to select the regularization parameters. Comparative results with other deconvolution methods on simulated degraded spectra, as well as on experimental infrared spectra, are presented.
Affiliation(s)
- Luxin Yan: National Key Laboratory of Science and Technology on Multispectral Information Processing, Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, Hubei, China
22
Liu H, Zhang T, Yan L, Fang H, Chang Y. A MAP-based algorithm for spectroscopic semi-blind deconvolution. Analyst 2012; 137:3862-3873. [PMID: 22768389] [DOI: 10.1039/c2an16213j]
Abstract
Spectroscopic data often suffer from the common problems of band overlapping and random noise. In this paper, we show that the issue of overlapping peaks can be formulated as a maximum a posteriori (MAP) problem and solved by minimizing an objective functional that includes a likelihood term and two prior terms. In the MAP framework, the likelihood probability density function (PDF) is constructed based on a spectral observation model, a robust Huber-Markov model is used as the spectral prior PDF, and the kernel prior is described by a parametric Gaussian function. Moreover, we describe an efficient optimization scheme that alternates between latent spectrum recovery and blur kernel estimation until convergence. The major novelty of the proposed algorithm is that it can estimate the kernel slit width and the latent spectrum simultaneously. Comparative results with other deconvolution methods suggest that the proposed method can recover spectral structural details as well as suppress noise effectively.
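For reference, the sketch below shows the Huber penalty that a Huber-Markov prior applies to differences of neighboring samples (quadratic near zero, linear in the tails); the delta parameter and the use of first differences are illustrative assumptions, and the full MAP objective in the paper also contains the likelihood and the parametric Gaussian kernel prior.

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber penalty: 0.5*t^2 for |t| <= delta, delta*(|t| - 0.5*delta) otherwise."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def huber_prior_energy(spectrum, delta=1.0):
    """Huber-Markov style prior energy on first differences of a 1-D spectrum."""
    return huber(np.diff(spectrum), delta).sum()
```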
Affiliation(s)
- Hai Liu: Science and Technology on Multi-spectral Information Processing Laboratory, Institute for Pattern Recognition and Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, Hubei 430074, China
23
Schoenemann T, Cremers D. A coding-cost framework for super-resolution motion layer decomposition. IEEE Transactions on Image Processing 2012; 21:1097-1110. [PMID: 21947524] [DOI: 10.1109/tip.2011.2169271]
Abstract
We consider the problem of decomposing a video sequence into a superposition of (a given number of) moving layers. For this problem, we propose an energy minimization approach based on the coding cost. Our contributions affect both the model (what is minimized) and the algorithmic side (how it is minimized). The novelty of the coding-cost model is the inclusion of a refined model of the image formation process, known as super resolution. This accounts for camera blur and area averaging arising in a physically plausible image formation process. It allows us to extract sharp high-resolution layers from the video sequence. The algorithmic framework is based on an alternating minimization scheme and includes the following innovations. 1) Via a video labeling step, we optimize the layer domains; this allows us to regularize the shapes of the layers and to handle occlusions very elegantly. 2) We present an efficient parallel algorithm for extracting super-resolved layers based on TV filtering.
24
Cai JF, Ji H, Liu C, Shen Z. Framelet-based blind motion deblurring from a single image. IEEE Transactions on Image Processing 2012; 21:562-572. [PMID: 21843995] [DOI: 10.1109/tip.2011.2164413]
Abstract
How to recover a clear image from a single motion-blurred image has long been a challenging open problem in digital imaging. In this paper, we focus on how to recover a motion-blurred image due to camera shake. A regularization-based approach is proposed to remove motion blurring from the image by regularizing the sparsity of both the original image and the motion-blur kernel under tight wavelet frame systems. Furthermore, an adapted version of the split Bregman method is proposed to efficiently solve the resulting minimization problem. The experiments on both synthesized images and real images show that our algorithm can effectively remove complex motion blurring from natural images without requiring any prior information of the motion-blur kernel.
Affiliation(s)
- Jian-Feng Cai: Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA
26
Liao H, Ng MK. Blind deconvolution using generalized cross-validation approach to regularization parameter estimation. IEEE Transactions on Image Processing 2011; 20:670-680. [PMID: 20833603] [DOI: 10.1109/tip.2010.2073474]
Abstract
In this paper, we propose and present an algorithm for total variation (TV)-based blind deconvolution. Both the unknown image and blur can be estimated within an alternating minimization framework. With the generalized cross-validation (GCV) method, the regularization parameters associated with the unknown image and blur can be updated in alternating minimization steps. Experimental results confirm that the performance of the proposed algorithm is better than variational Bayesian blind deconvolution algorithms with Student's-t priors or a total variation prior.
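The sketch below shows a GCV score for the simpler non-blind Tikhonov sub-problem with a fixed, periodic (circulant) blur, where the influence matrix diagonalizes under the FFT; the paper applies GCV inside an alternating TV-based blind scheme, which is not reproduced here, and the function and argument names are illustrative.

```python
import numpy as np

def gcv_score(k_fft, y, lam):
    """GCV(lam) = n*||(I-H)y||^2 / tr(I-H)^2 for x = argmin ||k*x - y||^2 + lam*||x||^2,
       where H has eigenvalues |K|^2/(|K|^2 + lam) under a periodic (circulant) blur model.
       k_fft is the FFT of the blur kernel zero-padded to the image size."""
    Y = np.fft.fft2(y)
    h = np.abs(k_fft)**2 / (np.abs(k_fft)**2 + lam)        # eigenvalues of the hat matrix H
    resid = np.sum(np.abs((1 - h) * Y)**2) / y.size        # ||(I - H) y||^2 via Parseval
    denom = (np.sum(1 - h) / y.size)**2                    # (tr(I - H) / n)^2
    return resid / (y.size * denom)

# Typical use: evaluate gcv_score over a log-spaced grid of lam values and keep the minimizer.
```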
Affiliation(s)
- Haiyong Liao: Centre for Mathematical Imaging and Vision and Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong
27
Wen YW, Liu C, Yip AM. Fast splitting algorithm for multiframe total variation blind video deconvolution. Applied Optics 2010; 49:2761-2768. [PMID: 20490236] [DOI: 10.1364/ao.49.002761]
Abstract
We consider the recovery of degraded videos without complete knowledge about the degradation. A spatially shift-invariant but temporally shift-varying video formation model is used. This leads to a simple multiframe degradation model that relates each original video frame with multiple observed frames and point spread functions (PSFs). We propose a variational method that simultaneously reconstructs each video frame and the associated PSFs from the corresponding observed frames. Total variation (TV) regularization is used on both the video frames and the PSFs to further reduce the ill-posedness and to better preserve edges. In order to make TV minimization practical for video sequences, we propose an efficient splitting method that generalizes some recent fast single-image TV minimization methods to the multiframe case. Both synthetic and real videos are used to show the performance of the proposed method.
Affiliation(s)
- You-Wei Wen: Department of Mathematics, South China Agricultural University, Guangzhou, China
28
Almeida MSC, Almeida LB. Blind and semi-blind deblurring of natural images. IEEE Transactions on Image Processing 2010; 19:36-52. [PMID: 19717362] [DOI: 10.1109/tip.2009.2031231]
Abstract
A method for blind image deblurring is presented. The method only makes weak assumptions about the blurring filter and is able to undo a wide variety of blurring degradations. To overcome the ill-posedness of the blind image deblurring problem, the method includes a learning technique which initially focuses on the main edges of the image and gradually takes details into account. A new image prior, which includes a new edge detector, is used. The method is able to handle unconstrained blurs, but also allows the use of constraints or of prior information on the blurring filter, as well as the use of filters defined in a parametric manner. Furthermore, it works in both single-frame and multiframe scenarios. The use of constrained blur models appropriate to the problem at hand, and/or of multiframe scenarios, generally improves the deblurring results. Tests performed on monochrome and color images, with various synthetic and real-life degradations, without and with noise, in single-frame and multiframe scenarios, showed good results, both in subjective terms and in terms of the increase in signal-to-noise ratio (ISNR) measure. In comparisons with other state-of-the-art methods, our method yields better results and is shown to be applicable to a much wider range of blurs.
Affiliation(s)
- Mariana S C Almeida: Instituto de Telecomunicações, Instituto Superior Técnico, 1049-001 Lisboa, Portugal
29
Babacan SD, Molina R, Katsaggelos AK. Variational Bayesian blind deconvolution using a total variation prior. IEEE Transactions on Image Processing 2009; 18:12-26. [PMID: 19095515] [DOI: 10.1109/tip.2008.2007354]
Abstract
In this paper, we present novel algorithms for total variation (TV) based blind deconvolution and parameter estimation utilizing a variational framework. Using a hierarchical Bayesian model, the unknown image, blur, and hyperparameters for the image, blur, and noise priors are estimated simultaneously. A variational inference approach is utilized so that approximations of the posterior distributions of the unknowns are obtained, thus providing a measure of the uncertainty of the estimates. Experimental results demonstrate that the proposed approaches provide higher restoration performance than non-TV-based methods without any assumptions about the unknown hyperparameters.
Affiliation(s)
- S Derin Babacan: Department of Electrical Engineering and Computer Science, Northwestern University, IL 60208-3118, USA
30
Welk M, Nagy JG. Variational Deconvolution of Multi-channel Images with Inequality Constraints. Pattern Recognition and Image Analysis 2007. [DOI: 10.1007/978-3-540-72847-4_50]
31
Li D, Mersereau RM, Simske S. Blind Image Deconvolution Through Support Vector Regression. IEEE Transactions on Neural Networks 2007; 18:931-935. [PMID: 17526360] [DOI: 10.1109/tnn.2007.891622]
Abstract
This letter introduces a new algorithm for the restoration of a noisy blurred image based on the support vector regression (SVR). Experiments show that the performance of the SVR is very robust in blind image deconvolution where the types of blurs, point spread function (PSF) support, and noise level are all unknown.
33
Molina R, Mateos J, Katsaggelos AK. Blind deconvolution using a variational approach to parameter, image, and blur estimation. IEEE Transactions on Image Processing 2006; 15:3715-3727. [PMID: 17153945] [DOI: 10.1109/tip.2006.881972]
Abstract
Following the hierarchical Bayesian framework for blind deconvolution problems, in this paper, we propose the use of simultaneous autoregressions as prior distributions for both the image and blur, and gamma distributions for the unknown parameters (hyperparameters) of the priors and the image formation noise. We show how the gamma distributions on the unknown hyperparameters can be used to prevent the proposed blind deconvolution method from converging to undesirable image and blur estimates and also how these distributions can be inferred in realistic situations. We apply variational methods to approximate the posterior probability of the unknown image, blur, and hyperparameters and propose two different approximations of the posterior distribution. One of these approximations coincides with a classical blind deconvolution method. The proposed algorithms are tested experimentally and compared with existing blind deconvolution methods.
Affiliation(s)
- Rafael Molina: Departamento de Ciencias de la Computación e I.A., Universidad de Granada, 18071 Granada, Spain
34
Zhulina YV. Multiframe blind deconvolution of heavily blurred astronomical images. Applied Optics 2006; 45:7342-7352. [PMID: 16983424] [DOI: 10.1364/ao.45.007342]
Abstract
A multichannel blind deconvolution algorithm is proposed that incorporates maximum-likelihood image restoration from several estimates of the differently blurred point-spread function (PSF) into the Ayers-Dainty iterative algorithm. The algorithm places no restrictions on the image and the PSFs except for the assumption that they are positive. It employs no cost functions, input parameters, a priori probability distributions, or analytically specified transfer functions. The iterative algorithm can therefore be applied in the presence of different kinds of distortion. The work presents results of digital modeling and of processing real telescope data from several satellites. A proof of convergence of the algorithm to positive estimates of the object and the PSFs is given. The convergence of the Ayers-Dainty algorithm with a single processed frame is not obvious in the general case, so it is useful to establish its convergence in the multiframe case. The dependence of convergence on the number of processed frames is discussed. Formulas for evaluating the quality of the algorithm's performance at each iteration, and a stopping rule based on this quality, are proposed. A method for building a monotonically converging subsequence from the image estimates obtained in the iterative process is also proposed.
35
Bar L, Sochen N, Kiryati N. Semi-blind image restoration via Mumford-Shah regularization. IEEE Transactions on Image Processing 2006; 15:483-493. [PMID: 16479818] [DOI: 10.1109/tip.2005.863120]
Abstract
Image restoration and segmentation are both classical problems that are known to be difficult and have attracted major research efforts. This paper shows that the two problems are tightly coupled and can be successfully solved together. Mutual support of the image restoration and segmentation processes within a joint variational framework is theoretically motivated and validated by successful experimental results. The proposed variational method integrates semi-blind image deconvolution (with a parametric blur kernel) and Mumford-Shah segmentation. The functional is formulated using the Γ-convergence approximation and is iteratively optimized via the alternating minimization method. While the major novelty of this work is in the unified treatment of the semi-blind restoration and segmentation problems, the important special case of known blur is also considered and promising results are obtained.
Affiliation(s)
- Leah Bar: School of Electrical Engineering, Tel Aviv University, Tel Aviv 69978, Israel
36
Slavine NV, Lewis MA, Richer E, Antich PP. Iterative reconstruction method for light emitting sources based on the diffusion equation. Medical Physics 2005; 33:61-68. [PMID: 16485410] [DOI: 10.1118/1.2138007]
Abstract
Bioluminescent imaging (BLI) of luciferase-expressing cells in live small animals is a powerful technique for investigating tumor growth, metastasis, and specific biological molecular events. Three-dimensional imaging would greatly enhance applications in biomedicine, since light-emitting cell populations could be unambiguously associated with specific organs or tissues. Any imaging approach must account for the main optical properties of biological tissue, because light emission from a distribution of sources at depth is strongly attenuated due to optical absorption and scattering in tissue. Our image reconstruction method for interior sources is based on the deblurring expectation maximization method and takes both of these effects into account. To determine the boundary of the object, we use a standard iterative maximum-likelihood reconstruction algorithm with an external source of diffuse light. Depth-dependent corrections were included in the reconstruction procedure to obtain a quantitative measure of light intensity by using the diffusion equation for light transport in semi-infinite turbid media with extrapolated boundary conditions.
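For context, the sketch below shows the classical deblurring-EM (Richardson-Lucy) multiplicative update that such reconstruction methods build on; the diffusion-equation depth corrections and boundary estimation described in the abstract are not included, and the PSF, initialization, and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(observed, psf, iters=50, eps=1e-12):
    """Deblurring EM / Richardson-Lucy: u <- u * ( psf_flipped * (observed / (psf * u)) ).
       Assumes non-negative data and a PSF normalized to unit sum."""
    u = np.clip(observed.astype(float), eps, None)   # positive initial estimate
    psf_T = psf[::-1, ::-1]                          # flipped PSF (adjoint of the blur)
    for _ in range(iters):
        blurred = fftconvolve(u, psf, mode='same')
        ratio = observed / (blurred + eps)
        u *= fftconvolve(ratio, psf_T, mode='same')
    return u
```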
Affiliation(s)
- Nikolai V Slavine: Advanced Radiological Sciences, Department of Radiology, University of Texas Southwestern Medical Center, 5323 Harry Hines Boulevard, Dallas, Texas 75390-9058, USA
37
Joshi MV, Chaudhuri S. Joint blind restoration and surface recovery in photometric stereo. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2005; 22:1066-1076. [PMID: 15984479] [DOI: 10.1364/josaa.22.001066]
Abstract
We address the problem of simultaneous estimation of scene structure and restoration of images from blurred photometric measurements. In photometric stereo, the structure of an object is determined by using a particular reflectance model (the image irradiance equation) without considering the blurring effect. We show that, given arbitrarily blurred observations of a static scene captured with a stationary camera under different illuminant directions, we can still obtain the structure, represented by the surface gradients and the albedo, and also perform blind image restoration. The surface gradients and the albedo are modeled as separate Markov random fields, and a suitable regularization scheme is used to estimate the different fields as well as the blur parameter. The results of the experiments are illustrated with real as well as synthetic images.
Affiliation(s)
- Manjunath V Joshi: Department of Electrical Engineering, Indian Institute of Technology-Bombay, Mumbai 400076, India
38
Chen L, Yap KH. A soft double regularization approach to parametric blind image deconvolution. IEEE Transactions on Image Processing 2005; 14:624-633. [PMID: 15887557] [DOI: 10.1109/tip.2005.846024]
Abstract
This paper proposes a blind image deconvolution scheme based on soft integration of parametric blur structures. Conventional blind image deconvolution methods encounter a difficult dilemma of either imposing stringent and inflexible preconditions on the problem formulation or experiencing poor restoration results due to lack of information. This paper attempts to address this issue by assessing the relevance of parametric blur information, and incorporating the knowledge into the parametric double regularization (PDR) scheme. The PDR method assumes that the actual blur satisfies up to a certain degree of parametric structure, as there are many well-known parametric blurs in practical applications. Further, it can be tailored flexibly to include other blur types if some prior parametric knowledge of the blur is available. A manifold soft parametric modeling technique is proposed to generate the blur manifolds, and estimate the fuzzy blur structure. The PDR scheme involves the development of the meaningful cost function, the estimation of blur support and structure, and the optimization of the cost function. Experimental results show that it is effective in restoring degraded images under different environments.
Affiliation(s)
- Li Chen: Media Technology Laboratory, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
39
Mugnier LM, Fusco T, Conan JM. MISTRAL: a myopic edge-preserving image restoration method, with application to astronomical adaptive-optics-corrected long-exposure images. Journal of the Optical Society of America A: Optics, Image Science, and Vision 2004; 21:1841-1854. [PMID: 15497412] [DOI: 10.1364/josaa.21.001841]
Abstract
Deconvolution is a necessary tool for the exploitation of a number of imaging instruments. We describe a deconvolution method developed in a Bayesian framework in the context of imaging through turbulence with adaptive optics. This method uses a noise model that accounts for both photonic and detector noises. It additionally contains a regularization term that is appropriate for objects that are a mix of sharp edges and smooth areas. Finally, it reckons with an imperfect knowledge of the point-spread function (PSF) by estimating the PSF jointly with the object under soft constraints rather than blindly (i.e., without constraints). These constraints are designed to embody our knowledge of the PSF. The implementation of this method is called MISTRAL. It is validated by simulations, and its effectiveness is illustrated by deconvolution results on experimental data taken on various adaptive optics systems and telescopes. Some of these deconvolutions have already been used to derive published astrophysical interpretations.
Affiliation(s)
- Laurent M Mugnier: Office National d'Etudes et de Recherches Aérospatiales, Optics Department, B.P. 72, F-92322 Châtillon cedex, France
40
Bar L, Sochen N, Kiryati N. Variational Pairing of Image Segmentation and Blind Restoration. LECTURE NOTES IN COMPUTER SCIENCE 2004. [DOI: 10.1007/978-3-540-24671-8_13] [Citation(s) in RCA: 27] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
|
41
|
Jiang M, Wang G, Skinner MW, Rubinstein JT, Vannier MW. Blind deblurring of spiral CT images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2003; 22:837-845. [PMID: 12906237 DOI: 10.1109/tmi.2003.815075] [Citation(s) in RCA: 20] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
To discriminate fine anatomical features in the inner ear, it is desirable that spiral computed tomography (CT) perform beyond its current resolution limits with the aid of digital image processing techniques. In this paper, we develop a blind deblurring approach to enhance image resolution retrospectively without complete knowledge of the underlying point spread function (PSF). An oblique CT image can be approximated as the convolution of an isotropic Gaussian PSF with the actual cross section. In practice, the parameter of this PSF is often unavailable, so estimating it is crucially important for blind image deblurring. Based on iterative deblurring theory, we formulate an edge-to-noise ratio (ENR) to characterize the change in image quality due to deblurring. Our blind deblurring algorithm estimates the parameter of the PSF by maximizing the ENR and then deblurs the image. In phantom studies, the blind deblurring algorithm reduces image blurring by about 24%, according to our blurring residual measure. The algorithm also works well in patient studies. After fully automatic blind deblurring, the conspicuity of the submillimeter features of the cochlea is substantially improved.
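A rough illustration of the parameter search: for each candidate Gaussian width, deblur and score the result with an edge-to-noise proxy, keeping the width that maximizes it. The ENR definition below is a stand-in (edge energy over a crude noise-amplification estimate), not the paper's formulation, and the Richardson-Lucy deblurrer assumes a non-negative float image.
```python
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def richardson_lucy(d, psf_sigma, n_iter=30):
    """Plain Richardson-Lucy with a Gaussian PSF (symmetric, so the flipped
    PSF is the same filter)."""
    x = np.full(d.shape, float(d.mean()))
    for _ in range(n_iter):
        est = gaussian_filter(x, psf_sigma) + 1e-12
        x *= gaussian_filter(d / est, psf_sigma)
    return x

def enr_proxy(deblurred, blurred):
    """Stand-in for the paper's edge-to-noise ratio."""
    edge = np.hypot(sobel(deblurred, 0), sobel(deblurred, 1))
    noise = np.std(deblurred - blurred) + 1e-12  # crude noise-amplification estimate
    return edge.mean() / noise

def estimate_sigma(blurred, sigmas=np.linspace(0.5, 3.0, 11)):
    """Grid search over the Gaussian width; keep the value maximizing the proxy."""
    scores = [enr_proxy(richardson_lucy(blurred, s), blurred) for s in sigmas]
    best = float(sigmas[int(np.argmax(scores))])
    return best, richardson_lucy(blurred, best)
```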
Collapse
Affiliation(s)
- Ming Jiang
- Department of Radiology, University of Iowa, Iowa City 52242, USA.
| | | | | | | | | |
Collapse
|
42
|
Li TH, Lii KS. A joint estimation approach for two-tone image deblurring by blind deconvolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2002; 11:847-858. [PMID: 18244679 DOI: 10.1109/tip.2002.801127] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
A new statistical method is proposed for deblurring two-tone images, i.e., images with two unknown grey levels, that are blurred by an unknown linear filter. The key idea of the proposed method is to adjust a deblurring filter until its output becomes two tone. Two optimization criteria are proposed for the adjustment of the deblurring filter. A three-step iterative algorithm (TSIA) is also proposed to minimize the criteria. It is proven mathematically that by minimizing either of the criteria, the original (nonblurred) image, along with the blur filter, will be recovered uniquely (only with possible scale/shift ambiguities) at high SNR. The recovery is guaranteed not only for i.i.d. images but also for correlated and nonstationary images. It does not require a priori knowledge of the statistical parameters or the tone values of the original image; neither does it require a priori knowledge of the phase or other special information (e.g., FIR, symmetry, nonnegativity, etc.) about the blur filter. Numerical experiments are carried out to test the method on synthetic and real images.
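The core idea, adjusting a deblurring filter until its output becomes two-tone, can be sketched as a generic alternating scheme (not the authors' TSIA or their optimization criteria): deconvolve with the current filter, snap the result to its two dominant grey levels, and refit the filter in the Fourier domain. The regularization constant eps and the iteration counts are illustrative.
```python
import numpy as np

def two_means(z, n_iter=20):
    """Tiny 1-D k-means (k = 2) on the pixel intensities."""
    t0, t1 = float(z.min()), float(z.max())
    for _ in range(n_iter):
        mask = np.abs(z - t0) < np.abs(z - t1)
        if mask.all() or not mask.any():
            break
        t0, t1 = z[mask].mean(), z[~mask].mean()
    return t0, t1

def two_tone_deblur(y, n_iter=50, eps=1e-3):
    """Alternate between (1) deconvolving with the current filter,
    (2) snapping the result to its two dominant grey levels, and
    (3) refitting the deconvolution filter by regularized least squares."""
    Y = np.fft.fft2(y)
    G = np.ones_like(Y)                      # start from the identity filter
    for _ in range(n_iter):
        z = np.real(np.fft.ifft2(G * Y))     # current deconvolved estimate
        t0, t1 = two_means(z)                # two-tone model of the output
        x = np.where(np.abs(z - t0) < np.abs(z - t1), t0, t1)
        X = np.fft.fft2(x)
        G = X * np.conj(Y) / (np.abs(Y) ** 2 + eps)
    return x, G
```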
Collapse
Affiliation(s)
- Ta-Hsin Li
- Department of Statistics and Applied Probability, University of California, Santa Barbara, CA 93106, USA.
| | | |
Collapse
|
43
|
Panchapakesan K, Sheppard DG, Marcellin MW, Hunt BR. Blur identification from vector quantizer encoder distortion. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2001; 10:465-470. [PMID: 18249635 DOI: 10.1109/83.908524] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Blur identification is a crucial first step in many image restoration techniques. An approach for identifying image blur using vector quantizer encoder distortion is proposed. The blur in an image is identified by choosing from a finite set of candidate blur functions. The method requires a set of training images produced by each of the blur candidates. Each of these sets is used to train a vector quantizer codebook. Given an image degraded by unknown blur, it is first encoded with each of these codebooks. The blur in the image is then estimated by choosing from among the candidates, the one corresponding to the codebook that provides the lowest encoder distortion. Simulations are performed at various bit rates and with different levels of noise. Results show that the method performs well even at a signal-to-noise ratio (SNR) as low as 10 dB.
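Because the procedure is quite concrete, a small sketch may help: train one codebook per candidate blur on patches of blurred training images, then encode the degraded image with each codebook and keep the candidate with the lowest distortion. Here scikit-learn's KMeans stands in for a VQ codebook trainer, and the patch size and codebook size are illustrative choices.
```python
import numpy as np
from scipy.signal import fftconvolve
from sklearn.cluster import KMeans

def patches(img, p=4):
    """Non-overlapping p x p patches, flattened to vectors."""
    h, w = (np.array(img.shape) // p) * p
    blocks = img[:h, :w].reshape(h // p, p, w // p, p).swapaxes(1, 2)
    return blocks.reshape(-1, p * p)

def train_codebooks(train_images, candidate_psfs, n_codewords=64, p=4):
    """One codebook (here: k-means centroids) per candidate blur."""
    books = []
    for psf in candidate_psfs:
        data = np.vstack([patches(fftconvolve(im, psf, mode="same"), p)
                          for im in train_images])
        books.append(KMeans(n_clusters=n_codewords, n_init=4,
                            random_state=0).fit(data))
    return books

def identify_blur(degraded, books, p=4):
    """Pick the candidate whose codebook encodes the image with least distortion."""
    vecs = patches(degraded, p)
    distortions = [np.mean(np.min(b.transform(vecs), axis=1) ** 2) for b in books]
    return int(np.argmin(distortions)), distortions
```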
Collapse
|
44
|
Lam EY, Goodman JW. Iterative statistical approach to blind image deconvolution. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA. A, OPTICS, IMAGE SCIENCE, AND VISION 2000; 17:1177-1184. [PMID: 10883969 DOI: 10.1364/josaa.17.001177] [Citation(s) in RCA: 22] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
Image deblurring has long been modeled as a deconvolution problem. In the literature, the point-spread function (PSF) is often assumed to be known exactly. However, in practical situations such as image acquisition in cameras, we may have incomplete knowledge of the PSF. This deblurring problem is referred to as blind deconvolution. We adopt a statistical view of the data and use a modified maximum a posteriori approach to identify the most probable object and blur given the observed image. To facilitate computation we use an iterative method, an extension of the traditional expectation-maximization method, instead of direct optimization. We derive separate update formulas for the estimates in each iteration, based on the specific nature of the a priori knowledge available about the object and the blur, to enhance the deconvolution results.
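A generic EM-flavoured alternating scheme of this kind can be sketched with multiplicative Richardson-Lucy-style updates for the object and the PSF under periodic boundary conditions. This is not the authors' modified MAP-EM update (no priors are used beyond non-negativity and a small PSF support), and a non-negative float image is assumed.
```python
import numpy as np

def blind_deconv_em(d, psf_radius=4, n_outer=25, n_inner=4, eps=1e-12):
    """Alternating multiplicative updates for the object and the PSF."""
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2
    ny, nx = d.shape
    # wrap-around support mask: PSF confined to a (2r+1)^2 box around the origin
    wy = np.minimum(np.arange(ny), ny - np.arange(ny))
    wx = np.minimum(np.arange(nx), nx - np.arange(nx))
    support = (wy[:, None] <= psf_radius) & (wx[None, :] <= psf_radius)

    x = np.full(d.shape, float(d.mean()))    # object estimate
    h = support / support.sum()              # flat PSF over its support
    for _ in range(n_outer):
        H = fft2(h)
        for _ in range(n_inner):             # object updates, PSF fixed
            ratio = d / (np.real(ifft2(fft2(x) * H)) + eps)
            x *= np.real(ifft2(fft2(ratio) * np.conj(H)))
        X = fft2(x)
        for _ in range(n_inner):             # PSF updates, object fixed
            ratio = d / (np.real(ifft2(X * fft2(h))) + eps)
            h *= np.real(ifft2(fft2(ratio) * np.conj(X)))
            h = np.where(support, np.clip(h, 0, None), 0.0)
            h /= h.sum() + eps               # keep the PSF normalized
    return x, np.fft.fftshift(h)             # PSF recentred for display
```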
Collapse
Affiliation(s)
- EY Lam
- Department of Electrical Engineering, Stanford University, California 94305, USA.
| | | |
Collapse
|
45
|
Abstract
Thanks to its ability to yield functionally based rather than anatomically based information, three-dimensional (3-D) single photon emission computed tomography (SPECT) imaging has become a great help in the diagnosis of cerebrovascular diseases. Nevertheless, due to the imaging process, 3-D SPECT images are very blurred and, consequently, their interpretation by the clinician is often difficult and subjective. In order to improve the resolution of these 3-D images and thereby facilitate their interpretation, we propose herein to extend a recent blind image deconvolution technique (the nonnegativity and support constraints recursive inverse filtering, NAS-RIF, deconvolution method) to improve both the spatial and the interslice resolution of SPECT volumes. This technique requires a preliminary step to find the support of the object to be restored. In this paper, we propose to solve this problem with an unsupervised 3-D Markovian segmentation technique. The method has been successfully tested on numerous real and simulated brain SPECT volumes, yielding very promising restoration results.
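The deconvolution half of this pipeline can be illustrated with a NAS-RIF-style criterion in 2-D (the paper works on 3-D volumes): the restored image is penalised for negative values inside the object support and for departing from the background level outside it, and a small FIR restoration filter is updated by gradient descent. The support mask would come from the segmentation step; the step size, filter radius, and iteration count below are illustrative, and the input is assumed roughly normalised to [0, 1].
```python
import numpy as np

def nas_rif_like(y, support, background=0.0, filt_radius=4,
                 n_iter=300, step=1e-3):
    """Gradient descent on a NAS-RIF-style criterion (2-D, periodic model)."""
    fft2, ifft2 = np.fft.fft2, np.fft.ifft2
    y = np.asarray(y, float)
    ny, nx = y.shape
    Y = fft2(y)
    # wrap-around mask confining the FIR restoration filter around the origin
    wy = np.minimum(np.arange(ny), ny - np.arange(ny))
    wx = np.minimum(np.arange(nx), nx - np.arange(nx))
    fmask = (wy[:, None] <= filt_radius) & (wx[None, :] <= filt_radius)
    g = np.zeros((ny, nx)); g[0, 0] = 1.0           # start from the identity filter
    for _ in range(n_iter):
        u = np.real(ifft2(fft2(g) * Y))             # current restored image
        e = np.where(support, 2.0 * np.minimum(u, 0.0),   # negativity inside support
                     2.0 * (u - background))              # background level outside
        grad = np.real(ifft2(fft2(e) * np.conj(Y)))
        g -= step * np.where(fmask, grad, 0.0)      # update only the filter taps
    return np.real(ifft2(fft2(g) * Y)), g
```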
Collapse
Affiliation(s)
- M Mignotte
- Institut National de Recherche en Informatique et Automatique, France.
| | | |
Collapse
|
46
|
Giannakis GB, Heath RR. Blind identification of multichannel FIR blurs and perfect image restoration. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2000; 9:1877-1896. [PMID: 18262924 DOI: 10.1109/83.877210] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Despite its practical importance in image processing and computer vision, blind blur identification and blind image restoration have so far been addressed under restrictive assumptions such as all-pole stationary image models blurred by zero- or minimum-phase point-spread functions. Relying upon diversity (availability of a sufficient number of multiple blurred images), we develop blind FIR blur identification and order determination schemes. Apart from a minimal persistence-of-excitation condition (also present in nonblind setups), the inaccessible input image is allowed to be deterministic or random and of unknown color or distribution. With the blurs additionally satisfying a certain co-primeness condition, we establish existence and uniqueness results which guarantee that single-input/multiple-output FIR blurred images can be restored blindly, and perfectly so in the absence of noise, using linear FIR filters. Results of simulations employing the blind order determination, blind blur identification, and blind image restoration algorithms are presented. When the SNR is high, direct image restoration is found to yield better results than indirect image restoration, which employs the estimated blurs. At low SNR, indirect image restoration performs well, while the direct restoration results vary with the delay but improve with larger equalizer orders.
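For intuition, the cross-relation idea behind this family of methods is easy to state in one dimension with two channels: since h2*y1 = h1*y2 in the absence of noise, the stacked blur vector spans the null space of a structured data matrix. The sketch below is only that simplified 1-D, two-channel, noiseless illustration; the paper addresses 2-D blurs, order determination, and noisy data.
```python
import numpy as np

def conv_matrix(y, L):
    """Matrix C such that C @ h == np.convolve(y, h) for len(h) == L."""
    C = np.zeros((len(y) + L - 1, L))
    for k in range(L):
        C[k:k + len(y), k] = y
    return C

def cross_relation_identify(y1, y2, L):
    """Noiseless two-channel FIR identification: h2*y1 - h1*y2 = 0, so
    [h1; h2] spans the null space of [C(y2), -C(y1)]."""
    A = np.hstack([conv_matrix(y2, L), -conv_matrix(y1, L)])
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                       # right singular vector of smallest singular value
    return h[:L], h[L:]              # estimates of h1 and h2 (up to a common scale)

# Tiny usage check with a random source and two coprime blurs.
rng = np.random.default_rng(1)
x = rng.standard_normal(200)
h1 = np.array([1.0, 0.5, 0.2]); h2 = np.array([0.8, -0.3, 0.1])
y1, y2 = np.convolve(x, h1), np.convolve(x, h2)
e1, e2 = cross_relation_identify(y1, y2, L=3)
```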
Collapse
Affiliation(s)
- G B Giannakis
- Department of Electrical and Computer Engineering, University of Minnesota, Minneapolis, MN 55455, USA.
| | | |
Collapse
|
47
|
Ng MK, Plemmons RJ, Qiao S. Regularization of RIF blind image deconvolution. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2000; 9:1130-1134. [PMID: 18255482 DOI: 10.1109/83.846254] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Blind image restoration is the process of estimating both the true image and the blur from the degraded image, using only partial information about degradation sources and the imaging system. Our main interest concerns optical image enhancement, where the degradation often involves a convolution process. We provide a method to incorporate truncated-eigenvalue and total variation regularization into a nonlinear recursive inverse filter (RIF) blind deconvolution scheme first proposed by Kundur, and by Kundur and Hatzinakos. Tests are reported on simulated and optical imaging problems.
Collapse
|
48
|
You YL, Kaveh M. Blind image restoration by anisotropic regularization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:396-407. [PMID: 18262882 DOI: 10.1109/83.748894] [Citation(s) in RCA: 19] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This paper presents anisotropic regularization techniques to exploit the piecewise smoothness of the image and the point spread function (PSF) in order to mitigate the severe lack of information encountered in blind restoration of shift-invariantly and shift-variantly blurred images. The new techniques, which are derived from anisotropic diffusion, adapt both the degree and direction of regularization to the spatial activities and orientations of the image and the PSF. This matches the piecewise smoothness of the image and the PSF, which may be characterized by sharp transitions in magnitude and by the anisotropic nature of these transitions. For shift-variantly blurred images whose underlying PSFs may differ from one pixel to another, we parameterize the PSF and then apply the anisotropic regularization techniques. This is demonstrated for linear motion blur and out-of-focus blur. Alternating minimization is used to reduce the computational load and algorithmic complexity.
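The flavour of the regularizer can be conveyed with a Perona-Malik-style diffusion step that smooths strongly in flat areas and weakly across strong edges; in an alternating blind-restoration loop, a few such steps would be interleaved with data-fidelity updates of the image and the PSF. This is an illustrative stand-in, not the authors' functional; periodic borders are used for brevity, and kappa assumes intensities in [0, 1].
```python
import numpy as np

def anisotropic_step(u, kappa=0.05, dt=0.15):
    """One explicit anisotropic-diffusion step: strong smoothing in flat areas,
    little smoothing across strong edges (edge-stopping conductivity)."""
    # differences to the four neighbours (periodic borders via np.roll)
    dn = np.roll(u, 1, 0) - u; ds = np.roll(u, -1, 0) - u
    de = np.roll(u, 1, 1) - u; dw = np.roll(u, -1, 1) - u
    g = lambda d: np.exp(-(d / kappa) ** 2)        # edge-stopping function
    return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)

def regularize(u, n_steps=20, **kw):
    """Apply a few diffusion steps; in an alternating scheme this would be
    interleaved with data-fidelity updates of the image and the PSF."""
    for _ in range(n_steps):
        u = anisotropic_step(u, **kw)
    return u
```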
Collapse
Affiliation(s)
- Y L You
- Digital Theater Systems, Inc., Agoura Hills, CA 91301-4523, USA.
| | | |
Collapse
|
49
|
Conan JM, Mugnier LM, Fusco T, Michau V, Rousset G. Myopic deconvolution of adaptive optics images by use of object and point-spread function power spectra. APPLIED OPTICS 1998; 37:4614-4622. [PMID: 18285917 DOI: 10.1364/ao.37.004614] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Adaptive optics systems provide a real-time compensation for atmospheric turbulence. However, the correction is often only partial, and a deconvolution is required for reaching the diffraction limit. The need for a regularized deconvolution is discussed, and such a deconvolution technique is presented. This technique incorporates a positivity constraint and some a priori knowledge of the object (an estimate of its local mean and a model for its power spectral density). This method is then extended to the case of an unknown point-spread function, still taking advantage of similar a priori information on the point-spread function. Deconvolution results are presented for both simulated and experimental data.
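With the PSF held fixed, the kind of regularized deconvolution described here reduces to a Wiener-like filter built from the object and noise power spectral densities; the myopic extension would alternate a similar PSD-constrained update of the PSF, which is not shown. The power-law object PSD below is an illustrative model, and the PSF is assumed smaller than the image.
```python
import numpy as np

def pad_psf(psf, shape):
    """Embed a centred PSF into an image-sized array with its peak at [0, 0]."""
    big = np.zeros(shape)
    big[:psf.shape[0], :psf.shape[1]] = psf
    # circularly shift so the PSF centre sits at the origin
    return np.roll(big, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))

def psd_wiener(image, psf, noise_psd, object_psd):
    """Wiener-like restoration: data spectrum reweighted by the PSF and the
    object/noise power-spectral-density priors (periodic boundary model)."""
    H = np.fft.fft2(pad_psf(psf, image.shape))
    W = np.conj(H) / (np.abs(H) ** 2 + noise_psd / object_psd)
    return np.real(np.fft.ifft2(W * np.fft.fft2(image)))

def power_law_psd(shape, p=2.5, eps=1e-3):
    """Illustrative object-PSD model decaying as a power of spatial frequency."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return 1.0 / (np.hypot(fy, fx) ** p + eps)
```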
Collapse
|
50
|
Alam MS, Bognar JG, Cain S, Yasuda BJ. Fast registration and reconstruction of aliased low-resolution frames by use of a modified maximum-likelihood approach. APPLIED OPTICS 1998; 37:1319-1328. [PMID: 18268719 DOI: 10.1364/ao.37.001319] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
During microscanning, a controlled vibrating mirror is typically used to produce subpixel shifts in a sequence of forward-looking infrared (FLIR) images. If the FLIR is mounted on a moving platform, such as an aircraft, uncontrolled random vibrations associated with the platform can be used to generate the shifts. Iterative techniques such as the expectation-maximization (EM) approach based on the maximum-likelihood algorithm can be used to generate high-resolution images from multiple randomly shifted aliased frames. In the maximum-likelihood approach the data are considered to be Poisson random variables, and an EM algorithm is developed that iteratively estimates an unaliased image compensated for known imager-system blur while simultaneously estimating the translational shifts. Although this algorithm yields high-resolution images from a sequence of randomly shifted frames, it requires significant computation time and cannot be implemented in real time on currently available high-performance processors: the image shifts are iteratively recalculated by evaluating a cost function that compares the shifted and interlaced data frames with the corresponding values in the algorithm's latest estimate of the high-resolution image. We present a registration algorithm that instead estimates the shifts in one step. The shift parameters provided by the new algorithm are accurate enough to eliminate the need for iterative recalculation of the translational shifts. Using this shift information, we apply a simplified version of the EM algorithm to estimate a high-resolution image from a given sequence of video frames. The proposed modified EM algorithm significantly reduces the computational burden compared with the original EM algorithm, making it more attractive for practical implementation. Both simulation and experimental results are presented to verify the effectiveness of the proposed technique.
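A crude stand-in for the one-step registration and interlacing stages (not the modified EM reconstruction itself): estimate each frame's shift on the fine grid from the phase-correlation peak of nearest-neighbour upsampled frames, then shift-and-add the frames onto that grid. Shift estimates are whole high-resolution pixels here, and all parameters are illustrative.
```python
import numpy as np

def hr_shift(ref, frame, factor=2):
    """Crude one-step shift estimate on the high-resolution grid: nearest-
    neighbour upsample both frames, then take the phase-correlation peak."""
    up = lambda f: np.kron(f, np.ones((factor, factor)))
    a, b = up(ref), up(frame)
    R = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    c = np.real(np.fft.ifft2(R / (np.abs(R) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(c), c.shape)
    dy = dy - c.shape[0] if dy > c.shape[0] // 2 else dy   # map wrap-around peaks
    dx = dx - c.shape[1] if dx > c.shape[1] // 2 else dx   # to signed shifts
    return int(dy), int(dx)                                # in high-resolution pixels

def interlace(frames, hr_shifts, factor=2):
    """Shift-and-add the low-resolution frames onto the fine grid; an EM-style
    pass would then deblur this interlaced estimate."""
    ly, lx = frames[0].shape
    acc = np.zeros((ly * factor, lx * factor)); cnt = np.zeros_like(acc)
    for f, (dy, dx) in zip(frames, hr_shifts):
        ys = (np.arange(ly) * factor + dy) % acc.shape[0]
        xs = (np.arange(lx) * factor + dx) % acc.shape[1]
        acc[np.ix_(ys, xs)] += f
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)
```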
Collapse
|