51. Li S, Qin B, Xiao J, Liu Q, Wang Y, Liang D. Multi-Channel and Multi-Model-Based Autoencoding Prior for Grayscale Image Restoration. IEEE Trans Image Process 2019;29:142-156. PMID: 31380761. DOI: 10.1109/tip.2019.2931240.
Abstract
Image restoration (IR) is a long-standing, challenging problem in low-level image processing, and learning good image priors is of utmost importance for pursuing visually pleasing results. In this paper, we develop a multi-channel, multi-model denoising autoencoder network as an image prior for solving IR problems. Specifically, a network trained on RGB-channel images is first used to construct a prior, and the learned prior is then incorporated into single-channel grayscale IR tasks. To achieve this, we employ the auxiliary variable technique to integrate the higher-dimensional network-driven prior information into the iterative restoration procedure. In addition, following the weighted aggregation idea, a multi-model strategy is put forward to enhance network stability and help avoid getting trapped in local optima. Extensive experiments on image deblurring and deblocking tasks show that the proposed algorithm is efficient, robust, and yields state-of-the-art restoration quality on grayscale images.
52. Huang H, Nie G, Zheng Y, Fu Y. Image restoration from patch-based compressed sensing measurement. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.02.036.
53. Liu X, Cheung G, Ji X, Zhao D, Gao W. Graph-Based Joint Dequantization and Contrast Enhancement of Poorly Lit JPEG Images. IEEE Trans Image Process 2019;28:1205-1219. PMID: 30281452. DOI: 10.1109/tip.2018.2872871.
Abstract
JPEG images captured in poor lighting conditions suffer from both low luminance contrast and coarse quantization artifacts due to lossy compression. Performing dequantization and contrast enhancement in separate back-to-back steps would amplify the residual compression artifacts, resulting in low visual quality. Leveraging recent developments in graph signal processing (GSP), we propose to jointly dequantize and contrast-enhance such images in a single graph-signal restoration framework. Specifically, we separate each observed pixel patch into illumination and reflectance via Retinex theory, and define a generalized smoothness prior and a signed graph smoothness prior according to their respective signal characteristics. Given only a transform-coded image patch, we compute robust edge weights for each graph via low-pass filtering in the dual graph domain. We compute the illumination and reflectance components for each patch alternately, adopting accelerated proximal gradient (APG) algorithms in the transform domain, with backtracking line search for further speedup. Experimental results show that our generated images noticeably outperform state-of-the-art schemes in subjective quality evaluation.
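To make the graph-smoothness machinery concrete, here is a minimal sketch (an illustration under simplified assumptions, not the authors' APG solver): build a 4-connected patch graph with intensity-dependent edge weights, then restore a noisy patch with the quadratic prior x^T L x, whose closed-form solution is x = (I + λL)^{-1} y.

```python
# Minimal graph-signal restoration sketch: solve min_x ||x - y||^2 + lam * x'Lx.
# The grid graph, Gaussian edge weights, and lam are illustrative assumptions.
import numpy as np

def grid_graph_laplacian(patch, sigma=0.1):
    """Combinatorial Laplacian of a 4-connected grid graph whose edge
    weights decay with the intensity difference between neighbors."""
    h, w = patch.shape
    n = h * w
    W = np.zeros((n, n))
    idx = lambda i, j: i * w + j
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    wt = np.exp(-(patch[i, j] - patch[ni, nj]) ** 2
                                / (2 * sigma ** 2))
                    W[idx(i, j), idx(ni, nj)] = W[idx(ni, nj), idx(i, j)] = wt
    return np.diag(W.sum(axis=1)) - W

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 8), (8, 1))            # smooth toy patch
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
L = grid_graph_laplacian(noisy)
lam = 2.0
x = np.linalg.solve(np.eye(L.shape[0]) + lam * L, noisy.ravel()).reshape(8, 8)
print("noisy MSE:", np.mean((noisy - clean) ** 2),
      "restored MSE:", np.mean((x - clean) ** 2))
```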
54. Motion Estimation-Assisted Denoising for an Efficient Combination with an HEVC Encoder. Sensors 2019;19:895. PMID: 30795517. PMCID: PMC6412397. DOI: 10.3390/s19040895.
Abstract
Noise, which is commonly generated in low-light environments or by low-performance cameras, is a major cause of degraded compression efficiency. In previous studies that attempted to combine a denoising algorithm and a video encoder, denoising was applied independently of the codec, as pre-processing or post-processing. However, denoising should be tightly coupled with encoding, because noise greatly affects compression efficiency. This coupling also presents a major opportunity to reduce computational complexity, because the encoding process and denoising algorithms have many similarities. In this paper, a simple, add-on denoising scheme is proposed that combines high-efficiency video coding (HEVC) with the block matching three-dimensional collaborative filtering (BM3D) algorithm. BM3D is known for its excellent denoising performance, but its use is limited by its high computational complexity. This paper employs the motion estimation in HEVC to replace the block matching of BM3D, so that most of the time-consuming functions are shared. To overcome the challenging algorithmic differences, the hierarchical structure of HEVC is uniquely utilized. As a result, the computational complexity is drastically reduced while competitive coding efficiency and denoising quality are maintained.
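The coupling works because BM3D's patch grouping and the encoder's motion estimation are both block searches. The toy full-search matcher below, written under assumptions about block size, search radius, and cost (SAD), illustrates the shared operation; HEVC's actual motion estimation is far more elaborate.

```python
# Toy full-search block matching over a SAD cost; block size and search
# radius are illustrative. HEVC's motion estimation and BM3D's grouping
# both reduce to searches of this kind, which is what the paper shares.
import numpy as np

def best_match(frame, ref_block, top_left, search_radius=8):
    bh, bw = ref_block.shape
    y0, x0 = top_left
    best_mv, best_sad = None, np.inf
    for dy in range(-search_radius, search_radius + 1):
        for dx in range(-search_radius, search_radius + 1):
            y, x = y0 + dy, x0 + dx
            if 0 <= y <= frame.shape[0] - bh and 0 <= x <= frame.shape[1] - bw:
                sad = np.abs(frame[y:y + bh, x:x + bw] - ref_block).sum()
                if sad < best_sad:
                    best_mv, best_sad = (dy, dx), sad
    return best_mv, best_sad

rng = np.random.default_rng(1)
frame = rng.random((64, 64))
block = frame[20:28, 30:38].copy()
print(best_match(frame, block, (20, 30)))    # -> ((0, 0), 0.0)
```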
55. Kumar A, Ahmad MO, Swamy MNS. Tchebichef and Adaptive Steerable Based Total Variation Model for Image Denoising. IEEE Trans Image Process 2019;28:2921-2935. PMID: 30668499. DOI: 10.1109/tip.2019.2892663.
Abstract
Structural information, in particular the edges present in an image, is the part most readily noticed by the human eye. Therefore, it is important to denoise this information effectively for better visualization. Recently, research has been carried out to separate structural information into plain and edge patches and denoise them separately. However, the geometrical orientation of the edges is not considered, leading to sub-optimal denoising results. This has motivated us to introduce in this paper an adaptive steerable total variation regularizer (ASTV) based on geometric moments. The proposed ASTV regularizer is capable of denoising edges according to their geometrical orientation, thus boosting denoising performance. Further, earlier works exploited the sparsity of natural images in the DCT and wavelet domains, which helps improve denoising performance. Based on this observation, we introduce the sparsity of an image in an orthogonal moment domain, in particular the Tchebichef moment domain. We then propose a new sparse regularizer that combines the Tchebichef-moment and ASTV-based regularizers. The overall denoising framework is optimized using a split Bregman-based multivariable minimization technique. Experimental results demonstrate the competitiveness of the proposed method with existing ones in terms of both objective and subjective image quality.
56. Compressive sensing image recovery using dictionary learning and shape-adaptive DCT thresholding. Magn Reson Imaging 2019;55:60-71. DOI: 10.1016/j.mri.2018.09.014.
57. Young SI, Naman AT, Taubman D. COGL: Coefficient Graph Laplacians for Optimized JPEG Image Decoding. IEEE Trans Image Process 2019;28:343-355. PMID: 30176592. DOI: 10.1109/tip.2018.2867943.
Abstract
We address the problem of decoding joint photographic experts group (JPEG)-encoded images with fewer visual artifacts. We view the decoding task as an ill-posed inverse problem and find a regularized solution using a convex, graph Laplacian-regularized model. Since the resulting problem is non-smooth and entails non-local regularization, we use fast high-dimensional Gaussian filtering techniques with the proximal gradient descent method to solve our convex problem efficiently. Our patch-based "coefficient graph" is better suited than traditional pixel-based ones for regularizing smooth non-stationary signals such as natural images, and relates directly to classic non-local means denoising of images. We also extend our graph along the temporal dimension to handle the decoding of M-JPEG-encoded video. Despite the minimalistic nature of our convex problem, it produces decoded images of similar quality to other, more complex, state-of-the-art methods while being up to five times faster. We also expound on the relationship between our method and the classic ANCE method, reinterpreting ANCE from a graph-based regularization perspective.
58. Sur F. A Non-Local Dual-Domain Approach to Cartoon and Texture Decomposition. IEEE Trans Image Process 2018;28:1882-1894. PMID: 30452365. DOI: 10.1109/tip.2018.2881906.
Abstract
This paper addresses the problem of cartoon and texture decomposition. Since microtextures are characterized by their power spectrum, we propose to extract cartoon and texture components from the information provided by the power spectrum of image patches. The contribution of texture to the spectrum of a patch is detected as statistically significant spectral components with respect to a null hypothesis modeling the power spectrum of a non-textured patch. The null-hypothesis model is built upon a coarse cartoon representation obtained by a basic yet fast filtering algorithm from the literature. Hence the term "dual domain": the coarse decomposition is obtained in the spatial domain and is an input of the proposed spectral approach. The statistical model is also built upon the power spectrum of patches with similar textures across the image; the proposed approach therefore falls within the family of non-local methods. Experimental results are shown in various application areas, including canvas pattern removal in fine-art paintings and periodic noise removal in remote sensing imaging.
59. Clustering-based natural image denoising using dictionary learning approach in wavelet domain. Soft Comput 2018. DOI: 10.1007/s00500-018-3438-9.
60. Zhang Y, Sun L, Yan C, Ji X, Dai Q. Adaptive Residual Networks for High-Quality Image Restoration. IEEE Trans Image Process 2018;27:3150-3163. PMID: 29641397. DOI: 10.1109/tip.2018.2812081.
Abstract
Image restoration methods based on convolutional neural networks have shown great success in the literature. However, since most networks are not deep enough, there is still room for performance improvement. On the other hand, though some models are deep and introduce shortcuts for easy training, they ignore the importance of the location and scaling of the different inputs within the shortcuts. As a result, existing networks can only handle one specific image restoration application. To address these problems, we propose a novel adaptive residual network (ARN) for high-quality image restoration. Our ARN is a deep residual network composed of convolutional layers, parametric rectified linear unit layers, and adaptive shortcuts. We assign different scaling parameters to the different inputs of the shortcuts, where the scaling factors are treated as parameters of the ARN and trained adaptively for each application. Due to this special construction, the ARN can solve many image restoration problems with superior performance. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single image super-resolution, and JPEG image deblocking. Experimental results show that our model greatly outperforms numerous state-of-the-art restoration methods in terms of both peak signal-to-noise ratio and structural similarity index metrics, e.g., it achieves a 0.2-0.3 dB average gain over the second-best method across a wide range of situations.
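The adaptive shortcut can be illustrated in a few lines: where a plain residual block computes y = x + F(x), an ARN-style block scales the two shortcut inputs with learned coefficients, y = a·x + b·F(x). The sketch below is a schematic numpy forward pass; dense (rather than convolutional) weights, the sizes, and the PReLU slope are illustrative assumptions.

```python
# Schematic forward pass of an "adaptive shortcut": a plain residual block
# computes y = x + F(x); the ARN-style block scales both shortcut inputs
# with learned coefficients, y = a*x + b*F(x).
import numpy as np

def prelu(x, slope=0.1):
    return np.where(x > 0, x, slope * x)

def adaptive_residual_block(x, W1, W2, a, b):
    f = W2 @ prelu(W1 @ x)      # two-layer transform F(x)
    return a * x + b * f        # adaptively scaled shortcut

rng = np.random.default_rng(2)
x = rng.standard_normal(16)
W1 = 0.1 * rng.standard_normal((16, 16))
W2 = 0.1 * rng.standard_normal((16, 16))
print(adaptive_residual_block(x, W1, W2, a=0.9, b=0.5)[:4])
```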
61. Wu Q, Li H, Meng F, Ngan KN. A Perceptually Weighted Rank Correlation Indicator for Objective Image Quality Assessment. IEEE Trans Image Process 2018;27:2499-2513. PMID: 29994353. DOI: 10.1109/tip.2018.2799331.
Abstract
In the field of objective image quality assessment (IQA), Spearman's ρ and Kendall's τ, which straightforwardly assign uniform weights to all quality levels and assume that each pair of images is sortable, are the two most popular rank correlation indicators. These indicators can successfully measure the average accuracy of an IQA metric for ranking multiple processed images. However, two important perceptual properties are ignored. First, the sorting accuracy (SA) of high-quality images is usually more important than that of poor-quality images in many real-world applications, where only top-ranked images are pushed to the users. Second, due to the subjective uncertainty in making judgments, two perceptually similar images are usually barely sortable, and their ranks do not contribute to the evaluation of an IQA metric. To more accurately compare different IQA algorithms, in this paper, we explore a perceptually weighted rank correlation indicator, which rewards the capability of correctly ranking high-quality images and suppresses the attention towards insensitive rank mistakes. Specifically, we focus on activating a 'valid' pairwise comparison of images whose quality difference exceeds a given sensory threshold (ST). Meanwhile, each image pair is assigned a unique weight that is determined by both the quality level and rank deviation. By modifying the perception threshold, we can illustrate the sorting accuracy with a sophisticated SA-ST curve rather than a single rank correlation coefficient. The proposed indicator offers new insight into interpreting visual perception behavior. Furthermore, the applicability of our indicator is validated for recommending robust IQA metrics for both degraded and enhanced image data.
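A hedged sketch of the indicator's mechanics follows: pairs whose subjective quality difference is below the sensory threshold are discarded as unsortable, and the remaining pairwise agreements are aggregated with quality-dependent weights. The specific weight used here (the larger quality score of the pair) is an illustrative stand-in for the paper's exact weighting.

```python
# Threshold-gated, quality-weighted rank agreement between subjective
# scores and an IQA metric's outputs.
import numpy as np

def weighted_rank_correlation(subjective, objective, st=0.5):
    s, o = np.asarray(subjective, float), np.asarray(objective, float)
    num = den = 0.0
    for i in range(len(s)):
        for j in range(i + 1, len(s)):
            if abs(s[i] - s[j]) <= st:       # perceptually unsortable pair
                continue
            weight = max(s[i], s[j])         # emphasize high-quality images
            agree = np.sign(s[i] - s[j]) == np.sign(o[i] - o[j])
            num += weight if agree else -weight
            den += weight
    return num / den if den else 0.0

mos = [4.8, 4.5, 3.0, 1.2]                   # subjective quality scores
metric = [0.95, 0.96, 0.70, 0.20]            # an IQA metric's outputs
print(weighted_rank_correlation(mos, metric, st=0.4))
```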
63. Anwar S, Porikli F, Huynh CP. Category-Specific Object Image Denoising. IEEE Trans Image Process 2017;26:5506-5518. PMID: 28767371. DOI: 10.1109/tip.2017.2733739.
Abstract
We present a novel image denoising algorithm that uses an external, category-specific image database. In contrast to existing restoration algorithms that search for patches either in a generic database or in the noisy image itself, our method first selects, from a database of images of the same class, clean images similar to the noisy image. Then, within the spatial locality of each noisy patch, it assembles a set of "support patches" from the selected images. These noise-free support samples resemble the noisy patch and correspond principally to the identical part of the depicted object. In addition, we employ a content-adaptive distribution model for each patch, deriving the parameters of the distribution from the support patches. We formulate the noise removal task as an optimization problem in the transform domain. Our objective function is composed of a Gaussian fidelity term that imposes category-specific information and a low-rank term that encourages similarity between the noisy and support patches in a robust manner. The denoising process is driven by an iterative selection of support patches and optimization of the objective function. Our extensive experiments on five different object categories confirm the benefit of incorporating category-specific information into noise removal and demonstrate the superior performance of our method over state-of-the-art alternatives.
64. Koyuncu H, Ceylan R. Elimination of white Gaussian noise in arterial phase CT images to bring adrenal tumours into the forefront. Comput Med Imaging Graph 2017;65:46-57. PMID: 28599916. DOI: 10.1016/j.compmedimag.2017.05.004.
Abstract
Dynamic Contrast-Enhanced Computed Tomography (DCE-CT) is applied to observe adrenal tumours in detail by utilising a contrast agent, which generally brings the tumour into the forefront. However, DCE-CT images are generally affected by noise arising from the trade-off between radiation dose and image quality, which constitutes a challenge for accurate tumour segmentation. In CT images, most of the noise resembles Gaussian noise. In this study, arterial phase CT images containing adrenal tumours are used, and Gaussian noise removal is performed with fourteen different techniques reported in the literature to identify the best denoising process. The Block Matching and 3D Filtering (BM3D) algorithm achieves reliable Peak Signal-to-Noise Ratios (PSNR) and resolves the shortcomings of similar techniques when addressing different noise levels. Furthermore, BM3D obtains the best mean PSNR values among the top five techniques, and outperforms the other techniques in Total Statistical Success (TSS), CPU time, and computational cost. Consequently, it prepares clearer arterial phase CT images for the next step, the segmentation of adrenal tumours.
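Since the ranking of the fourteen denoisers hinges on PSNR, the standard definition is worth making explicit; the snippet below computes it for an 8-bit intensity range.

```python
# Standard PSNR definition for an 8-bit intensity range, the figure of
# merit used to rank the denoising techniques.
import numpy as np

def psnr(reference, test, peak=255.0):
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(3)
ref = rng.integers(0, 256, (64, 64)).astype(float)
noisy = np.clip(ref + rng.normal(0, 10, ref.shape), 0, 255)
print(f"PSNR of noisy image: {psnr(ref, noisy):.2f} dB")
```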
65. Chen Y, Pock T. Trainable Nonlinear Reaction Diffusion: A Flexible Framework for Fast and Effective Image Restoration. IEEE Trans Pattern Anal Mach Intell 2017;39:1256-1272. PMID: 27529868. DOI: 10.1109/tpami.2016.2596743.
Abstract
Image restoration is a long-standing problem in low-level computer vision with many interesting applications. We describe a flexible learning framework based on the concept of nonlinear reaction diffusion models for various image restoration problems. By embodying recent improvements in nonlinear diffusion models, we propose a dynamic nonlinear reaction diffusion model with time-dependent parameters (i.e., linear filters and influence functions). In contrast to previous nonlinear diffusion models, all the parameters, including the filters and the influence functions, are simultaneously learned from training data through a loss-based approach. We call this approach TNRD (Trainable Nonlinear Reaction Diffusion). The TNRD approach is applicable to a variety of image restoration tasks by incorporating an appropriate reaction force. We demonstrate its capabilities with three representative applications: Gaussian image denoising, single image super-resolution, and JPEG deblocking. Experiments show that our trained nonlinear diffusion models benefit greatly from the training of the parameters and lead to the best reported performance on common test datasets for the tested applications. Our trained models preserve the structural simplicity of diffusion models and take only a small number of diffusion steps, and are thus highly efficient. Moreover, they are well-suited to parallel computation on GPUs, which makes the inference procedure extremely fast.
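A single diffusion step can be sketched compactly. In the trained model both the filters and the influence functions are learned per stage; the derivative filters, the rational influence function, and the step sizes below are illustrative stand-ins, so this shows only the shape of the update, not the trained model.

```python
# One schematic TNRD-style diffusion step:
#   x <- x - step * ( lam*(x - y) + sum_k kbar_k * phi_k(k_k * x) ),
# where * is 2-D convolution and kbar_k is the 180-degree rotation of k_k.
import numpy as np
from scipy.signal import convolve2d

def influence(z, s=0.2):
    return z / (1.0 + (z / s) ** 2)     # smooth, edge-preserving stand-in

def tnrd_step(x, y, filters, lam=0.3, step=0.5):
    grad = lam * (x - y)                # reaction (data-fidelity) force
    for k in filters:
        r = convolve2d(x, k, mode="same", boundary="symm")
        grad += convolve2d(influence(r), k[::-1, ::-1],
                           mode="same", boundary="symm")
    return x - step * grad

rng = np.random.default_rng(4)
y = np.tile(np.linspace(0, 1, 32), (32, 1)) + 0.1 * rng.standard_normal((32, 32))
filters = [np.array([[0, 0, 0], [-1, 1, 0], [0, 0, 0]], float),
           np.array([[0, -1, 0], [0, 1, 0], [0, 0, 0]], float)]
x = y.copy()
for _ in range(10):                     # a small number of diffusion steps
    x = tnrd_step(x, y, filters)
```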
66. Guo X, Li Y, Ling H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans Image Process 2017;26:982-993. PMID: 28113318. DOI: 10.1109/tip.2016.2639450.
Abstract
When one captures images in low-light conditions, the images often suffer from low visibility. Besides degrading the visual aesthetics of images, this poor quality may also significantly degrade the performance of many computer vision and multimedia algorithms that are primarily designed for high-quality inputs. In this paper, we propose a simple yet effective low-light image enhancement (LIME) method. More concretely, the illumination of each pixel is first estimated individually by finding the maximum value in the R, G, and B channels. We then refine this initial illumination map by imposing a structure prior on it to obtain the final illumination map. Given the well-constructed illumination map, the enhancement can be achieved accordingly. Experiments on a number of challenging low-light images are presented to reveal the efficacy of LIME and show its superiority over several state-of-the-art methods in terms of enhancement quality and efficiency.
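A stripped-down version of the pipeline is easy to write down: take the per-pixel maximum over the R, G, B channels as the initial illumination map, refine it, and divide it out per the Retinex relation I = R ∘ T. In the sketch below, plain Gaussian smoothing stands in for the paper's structure-prior refinement, and the gamma value is an illustrative choice.

```python
# Stripped-down LIME-style enhancement; the structure-prior refinement is
# replaced by Gaussian smoothing as an assumption.
import numpy as np
from scipy.ndimage import gaussian_filter

def lime_enhance(img, eps=1e-3, sigma=2.0, gamma=0.8):
    """img: float RGB image with values in [0, 1]."""
    t = img.max(axis=2)                            # initial illumination map:
                                                   # per-pixel max over R, G, B
    t = gaussian_filter(t, sigma)                  # stand-in refinement
    t = np.clip(t, eps, 1.0) ** gamma              # adjusted illumination map
    return np.clip(img / t[..., None], 0.0, 1.0)   # Retinex: R = I / T

rng = np.random.default_rng(5)
dark = 0.2 * rng.random((32, 32, 3))               # synthetic low-light image
print(dark.mean(), lime_enhance(dark).mean())
```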
69. Seršić D, Sović Kržić A, Menoni CS. Relative intersection of confidence intervals rule for sharper restoration of soft x-ray images. Appl Opt 2016;55:8932-8937. PMID: 27828295. DOI: 10.1364/ao.55.008932.
Abstract
We present a novel method for the restoration of images of nanostructures obtained with a microscope that uses a 46.9 nm soft x-ray laser for illumination. To suppress the noise and preserve image sharpness, we develop a method based on pixel-adaptive zero-order modeling of the observed object. Neighboring areas of each pixel are selected using the relative intersection of confidence intervals rule and used for restoration. Due to the non-uniform distribution of noise in the images, we use robust spatial noise modeling. The method provides restored images that are sharper than those of competing approaches. Sharpness is measured using local phase coherence in the complex wavelet transform domain and confirms the visible improvement achieved by the novel method.
70. Non-local MRI denoising using random sampling. Magn Reson Imaging 2016;34:990-999. DOI: 10.1016/j.mri.2016.04.008.
71. Peng J, Zhou J, Wu X. Dual-domain denoising in three dimensional magnetic resonance imaging. Exp Ther Med 2016;12:653-660. PMID: 27446257. PMCID: PMC4950751. DOI: 10.3892/etm.2016.3345.
Abstract
Denoising is a crucial preprocessing procedure for three-dimensional magnetic resonance imaging (3D MRI). Existing denoising methods are predominantly implemented in a single domain, ignoring information in other domains, and are becoming increasingly complex, making analysis and implementation challenging. The present study aimed to develop a dual-domain image denoising (DDID) algorithm for 3D MRI that encapsulates information from both the spatial and transform domains. The DDID method was used to distinguish signal from noise in the spatial and frequency domains, after which robust and accurate noise estimation was introduced for iterative filtering, which is simple and computationally efficient. The proposed method was compared quantitatively and qualitatively with existing methods on synthetic and in vivo MRI datasets. The results suggest that the novel DDID algorithm performs well and provides results competitive with existing MRI denoising filters.
72. Dar Y, Bruckstein AM, Elad M, Giryes R. Postprocessing of Compressed Images via Sequential Denoising. IEEE Trans Image Process 2016;25:3044-3058. PMID: 27214878. DOI: 10.1109/tip.2016.2558825.
Abstract
In this paper, we propose a novel postprocessing technique for compression-artifact reduction. Our approach poses this task as an inverse problem, with a regularization that leverages existing state-of-the-art image denoising algorithms. We rely on the recently proposed Plug-and-Play Prior framework, which solves general inverse problems via the alternating direction method of multipliers (ADMM), leading to a sequence of Gaussian denoising steps. A key feature of our scheme is a linearization of the compression-decompression process that yields a formulation amenable to optimization. In addition, we supply a thorough analysis of this linear approximation for several basic compression procedures. The proposed method is suitable for diverse compression techniques that rely on transform coding. In particular, we demonstrate impressive gains in image quality for several leading compression methods: JPEG, JPEG 2000, and HEVC.
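The plug-and-play recursion itself is short. The sketch below runs ADMM for a quadratic data term with a Gaussian smoother standing in for a state-of-the-art denoiser; the paper's linearization of the compression-decompression operator is omitted, so this is only the skeleton of the scheme.

```python
# Skeleton of Plug-and-Play ADMM for min_x 0.5*||Hx - y||^2 + R(x).
# The Gaussian smoother, step sizes, and blur operator are assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

def pnp_admm(y, H, Ht, rho=1.0, iters=30, denoise_sigma=1.0):
    x, v, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(iters):
        for _ in range(5):              # x-update: a few gradient steps on
            grad = Ht(H(x) - y) + rho * (x - v + u)   # the quadratic subproblem
            x = x - 0.1 * grad
        v = gaussian_filter(x + u, denoise_sigma)     # plug-in denoising step
        u = u + x - v                                 # dual update
    return x

blur = lambda img: gaussian_filter(img, 1.5)          # H and H^T (self-adjoint)
rng = np.random.default_rng(6)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))
y = blur(clean) + 0.02 * rng.standard_normal(clean.shape)
x_hat = pnp_admm(y, blur, blur)
print(np.mean((y - clean) ** 2), np.mean((x_hat - clean) ** 2))
```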
73. Zhang J, Xiong R, Zhao C, Zhang Y, Ma S, Gao W. CONCOLOR: Constrained Non-Convex Low-Rank Model for Image Deblocking. IEEE Trans Image Process 2016;25:1246-1259. PMID: 26761774. DOI: 10.1109/tip.2016.2515985.
Abstract
Due to the independent and coarse quantization of transform coefficients in each block, block-based transform coding usually introduces visually annoying blocking artifacts at low bitrates, which greatly hinders further bit reduction. To alleviate the conflict between bit reduction and quality preservation, deblocking as a post-processing strategy is an attractive and promising solution that requires no change to existing codecs. In this paper, in order to reduce blocking artifacts and obtain high-quality images, image deblocking is formulated as an optimization problem within a maximum a posteriori framework, and a novel algorithm using a constrained non-convex low-rank model is proposed. The ℓp (0 < p < 1) penalty function is applied to the singular values of a matrix to characterize the low-rank prior, rather than the nuclear norm, while the quantization constraint is explicitly transformed into the feasible solution space to constrain the non-convex low-rank optimization. Moreover, a new quantization noise model is developed, and an alternating minimization strategy with adaptive parameter adjustment is devised to solve the proposed optimization problem. This parameter-free property makes the whole algorithm more attractive and practical. Experiments demonstrate that the proposed image deblocking algorithm outperforms current state-of-the-art methods in both objective and perceptual quality.
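The non-convex low-rank proximal step can be approximated with one reweighting pass: each singular value is soft-thresholded with a weight derived from the ℓp penalty, so small (noise-dominated) singular values are shrunk harder. The sketch below shows this step alone, with illustrative p and threshold values; the paper's quantization constraint and alternating solver are not reproduced.

```python
# One reweighting pass of lp-penalized singular-value thresholding.
import numpy as np

def lp_singular_value_threshold(M, tau=1.0, p=0.5, eps=1e-6):
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    w = p * (s + eps) ** (p - 1.0)          # small singular values get large
    s_new = np.maximum(s - tau * w, 0.0)    # weights and are shrunk harder
    return U @ np.diag(s_new) @ Vt

rng = np.random.default_rng(7)
group = np.outer(rng.random(16), rng.random(16))       # rank-1 "patch group"
noisy = group + 0.05 * rng.standard_normal((16, 16))
rec = lp_singular_value_threshold(noisy)
print(np.linalg.matrix_rank(noisy), "->", np.linalg.matrix_rank(rec))
```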
74. Li L, Zhou Y, Lin W, Wu J, Zhang X, Chen B. No-reference quality assessment of deblocked images. Neurocomputing 2016. DOI: 10.1016/j.neucom.2015.11.063.
75. Image Deblocking Scheme for JPEG Compressed Images Using an Adaptive-Weighted Bilateral Filter. J Inf Process Syst 2016. DOI: 10.3745/jips.02.0046.
76. Zhang X, Xiong R, Lin W, Ma S, Liu J, Gao W. Video Compression Artifact Reduction via Spatio-Temporal Multi-Hypothesis Prediction. IEEE Trans Image Process 2015;24:6048-6061. PMID: 26441447. DOI: 10.1109/tip.2015.2485780.
Abstract
Annoying compression artifacts exist in most lossy-coded videos at low bit rates, caused by coarse quantization of transform coefficients or by motion compensation from distorted frames. In this paper, we propose a compression artifact reduction approach that utilizes both spatial and temporal correlation to form multi-hypothesis predictions from spatio-temporally similar blocks. For each transform block, three predictions and their reliabilities are estimated. The first prediction is constructed by inversely quantizing the transform coefficients directly, and its reliability is determined by the variance of the quantization noise. The second prediction is derived by representing each transform block with a temporal auto-regressive (TAR) model along its motion trajectory, and its reliability is estimated from the local prediction errors of the TAR model. The last prediction infers the original coefficients from similar blocks in non-local regions, and its reliability is estimated from the distribution of coefficients in these similar blocks. Finally, all the predictions are adaptively fused according to their reliabilities to restore high-quality videos. Experimental results show that the proposed method can efficiently reduce most compression artifacts and improve both the subjective and objective quality of block transform coded videos.
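Per coefficient, the fusion rule reduces to an inverse-variance weighted average of the three hypotheses. The sketch below shows that rule with made-up reliability numbers; the abstract above describes how the actual variances are estimated.

```python
# Reliability-weighted fusion for a single transform coefficient: hypotheses
# h_k with error variances v_k are fused by inverse-variance weighting.
import numpy as np

def fuse_hypotheses(h, v):
    w = 1.0 / np.asarray(v, float)          # reliability = inverse variance
    return float((w * np.asarray(h, float)).sum() / w.sum())

dequantized = 12.0    # hypothesis 1: inverse-quantized coefficient
temporal_ar = 10.5    # hypothesis 2: temporal auto-regressive prediction
nonlocal_est = 11.2   # hypothesis 3: estimate from non-local similar blocks
print(fuse_hypotheses([dequantized, temporal_ar, nonlocal_est],
                      [4.0, 1.0, 2.0]))
```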
77. Hosotani F, Inuzuka Y, Hasegawa M, Hirobayashi S, Misawa T. Image Denoising With Edge-Preserving and Segmentation Based on Mask NHA. IEEE Trans Image Process 2015;24:6025-6033. PMID: 26513792. DOI: 10.1109/tip.2015.2494461.
Abstract
In this paper, we propose a zero-mean white Gaussian noise removal method using high-resolution frequency analysis. It is difficult to separate the original image component from the noise component when using the discrete Fourier transform or discrete cosine transform for analysis, because sidelobes occur in the results. The 2D non-harmonic analysis (2D NHA) is a high-resolution frequency analysis technique that improves noise removal accuracy owing to its sidelobe reduction feature. However, spectra generated by NHA are distorted because the image signal is non-stationary. In this paper, we analyze each region with a homogeneous texture in the noisy image. The non-uniform regions produced by the segmentation are analyzed with an extended 2D NHA method called Mask NHA. An experiment on a simulated image shows that Mask NHA denoising attains a higher peak signal-to-noise ratio (PSNR) than state-of-the-art methods if a suitable segmentation result can be obtained from the input image, even though parameter optimization was incomplete. This result indicates the upper limit on the PSNR attainable by our Mask NHA denoising method; its performance is expected to approach this limit as the segmentation method improves.
78. Kang W, Yu S, Seo D, Jeong J, Paik J. Push-Broom-Type Very High-Resolution Satellite Sensor Data Correction Using Combined Wavelet-Fourier and Multiscale Non-Local Means Filtering. Sensors 2015;15:22826-22853. PMID: 26378532. PMCID: PMC4610582. DOI: 10.3390/s150922826.
Abstract
In very high-resolution (VHR) push-broom-type satellite sensor data, destriping and denoising have become chronic problems that have attracted major research effort in the remote sensing field. Since the estimation of the original image from a noisy input is an ill-posed problem, a simple noise removal algorithm cannot preserve the radiometric integrity of satellite data. To solve these problems, we present a novel method to correct VHR data acquired by a push-broom-type sensor by combining wavelet-Fourier and multiscale non-local means (NLM) filters. After the wavelet-Fourier filter separates the stripe noise from the mixed noise in the wavelet low- and selected high-frequency sub-bands, random noise is removed using the multiscale NLM filter in both the low- and high-frequency sub-bands without loss of image detail. The performance of the proposed method is compared with various existing methods on a set of push-broom-type sensor data acquired by the Korean Multi-Purpose Satellite 3 (KOMPSAT-3) with severe stripe and random noise, and the proposed method shows significantly improved enhancement results over existing state-of-the-art methods in terms of both qualitative and quantitative assessments.
79. Kwon Y, Kim KI, Tompkin J, Kim JH, Theobalt C. Efficient Learning of Image Super-Resolution and Compression Artifact Removal with Semi-Local Gaussian Processes. IEEE Trans Pattern Anal Mach Intell 2015;37:1792-1805. PMID: 26353127. DOI: 10.1109/tpami.2015.2389797.
Abstract
Improving the quality of degraded images is a key problem in image processing, but the breadth of the problem leads to domain-specific approaches for tasks such as super-resolution and compression artifact removal. Recent approaches have shown that a general approach is possible by learning application-specific models from examples; however, learning models sophisticated enough to generate high-quality images is computationally expensive, and so specific per-application or per-dataset models are impractical. To solve this problem, we present an efficient semi-local approximation scheme to large-scale Gaussian processes. This allows efficient learning of task-specific image enhancements from example images without reducing quality. As such, our algorithm can be easily customized to specific applications and datasets, and we show the efficiency and effectiveness of our approach across five domains: single-image super-resolution for scene, human face, and text images, and artifact removal in JPEG- and JPEG 2000-encoded images.
80. Son CH, Lee K, Choo H. Inverse color to black-and-white halftone conversion via dictionary learning and color mapping. Inf Sci 2015. DOI: 10.1016/j.ins.2014.12.002.
81. Chierchia G, Pustelnik N, Pesquet-Popescu B, Pesquet JC. A nonlocal structure tensor-based approach for multicomponent image recovery problems. IEEE Trans Image Process 2014;23:5531-5544. PMID: 25347882. DOI: 10.1109/tip.2014.2364141.
Abstract
Nonlocal total variation (NLTV) has emerged as a useful tool in variational methods for image recovery problems. In this paper, we extend NLTV-based regularization to multicomponent images by taking advantage of the structure tensor (ST) resulting from the gradient of a multicomponent image. The proposed approach allows us to penalize the nonlocal variations jointly, across the different components, through various ℓ1,p matrix norms with p ≥ 1. To facilitate the choice of the hyperparameters, we adopt a constrained convex optimization approach in which we minimize the data fidelity term subject to a constraint involving the ST-NLTV regularization. The resulting convex optimization problem is solved with a novel epigraphical projection method. This formulation can be efficiently implemented thanks to the flexibility offered by recent primal-dual proximal algorithms. Experiments are carried out on color, multispectral, and hyperspectral images. The results demonstrate the benefit of introducing a nonlocal ST regularization and show that the proposed approach leads to significant improvements in terms of convergence speed over current state-of-the-art methods, such as the alternating direction method of multipliers.
82. Jung SW. Adaptive post-filtering of JPEG compressed images considering compressed domain lossless data hiding. Inf Sci 2014. DOI: 10.1016/j.ins.2014.05.035.
83. Bhandari AK, Soni V, Kumar A, Singh GK. Cuckoo search algorithm based satellite image contrast and brightness enhancement using DWT-SVD. ISA Trans 2014;53:1286-1296. PMID: 24893835. DOI: 10.1016/j.isatra.2014.04.007.
Abstract
This paper presents a new contrast enhancement approach based on the Cuckoo Search (CS) algorithm and DWT-SVD for quality improvement of low-contrast satellite images. The input image is decomposed into four frequency subbands through the Discrete Wavelet Transform (DWT); the CS algorithm is used to optimize each subband, after which the singular value matrix of the low-low thresholded subband image is obtained; and finally the enhanced image is reconstructed by applying the IDWT. The singular value matrix contains the intensity information of the image, and any modification of the singular values changes the intensity of the given image. The experimental results show the superiority of the proposed method in terms of PSNR, MSE, mean, and standard deviation over conventional and state-of-the-art techniques.
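A simplified version of the enhancement chain is sketched below using the PyWavelets package: decompose, rescale the singular values of the LL subband, and reconstruct. The fixed gain ξ is an assumption standing in for the Cuckoo Search optimization of each subband.

```python
# Simplified DWT-SVD enhancement; a fixed gain xi on the LL subband's
# singular values replaces the Cuckoo Search optimization described above.
import numpy as np
import pywt

def dwt_svd_enhance(img, xi=1.3):
    ll, (lh, hl, hh) = pywt.dwt2(img, "haar")
    u, s, vt = np.linalg.svd(ll, full_matrices=False)
    ll_enh = u @ np.diag(xi * s) @ vt      # modifying singular values changes
                                           # the intensity distribution
    return np.clip(pywt.idwt2((ll_enh, (lh, hl, hh)), "haar"), 0.0, 1.0)

rng = np.random.default_rng(8)
flat = 0.4 + 0.1 * rng.random((64, 64))    # synthetic low-contrast image
print(flat.std(), dwt_svd_enhance(flat).std())
```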
84. Seršić D, Sović A, Menoni CS. Restoration of soft x-ray laser images of nanostructures. Opt Express 2014;22:13846-13859. PMID: 24921576. DOI: 10.1364/oe.22.013846.
Abstract
We present advanced techniques for the restoration of images obtained by soft x-ray (SXR) laser microscopy. We show two methods: one based on adaptive thresholding, and another that uses local Wiener filtering in the wavelet domain to achieve high noise gains. These wavelet-based denoising techniques are improved using spatial noise modeling; the accurate noise model is built from two consecutive images of the object and the respective background images. To our knowledge, both proposed approaches outperform competing methods. The analysis is robust enough to enable image acquisition with significantly lower exposure times, which is critical for samples that are sensitive to radiation damage, as is the case for biological samples imaged by SXR microscopy.
85. Son CH, Choo H. Local learned dictionaries optimized to edge orientation for inverse halftoning. IEEE Trans Image Process 2014;23:2542-2556. PMID: 24800685. DOI: 10.1109/tip.2014.2319732.
Abstract
A method is proposed for fully restoring the local image structures of an unknown continuous-tone patch from an input halftoned patch with homogeneously distributed dot patterns, based on a locally learned dictionary pair obtained via feature clustering. First, many training sets consisting of paired halftone and continuous-tone patches are collected, and histogram-of-oriented-gradient (HOG) feature vectors describing the edge orientations are calculated from every continuous-tone patch to group the training sets. Next, a dictionary learning algorithm is conducted separately on the categorized training sets to obtain halftone and continuous-tone dictionary pairs optimized for edge-oriented patch representation. Finally, an adaptive smoothing filter is applied to the input halftone patch to predict the HOG feature vector of the unknown continuous-tone patch, and one of the previously learned dictionary pairs is selected based on the Euclidean distance between the HOG mean feature vectors of the grouped training sets and the predicted HOG vector. In addition to the local dictionary pairs, a patch fusion technique is used to reduce artifacts such as color noise and overemphasized edges in smooth regions. Experimental results show that the use of the paired dictionary selected by local edge orientation, together with the patch fusion technique, not only reduces artifacts in smooth regions but also provides well-expressed fine details and outlines, especially in areas of texture, lines, and regular patterns.
86. Wu X, Liu S, Wu M, Sun H, Zhou J, Gong Q, Ding Z. Nonlocal denoising using anisotropic structure tensor for 3D MRI. Med Phys 2014;40:101904. PMID: 24089906. DOI: 10.1118/1.4820370.
Abstract
PURPOSE: Noise in magnetic resonance imaging (MRI) data is widely recognized to be harmful to image processing and subsequent quantitative analysis. To ameliorate the effects of image noise, the authors present a structure-tensor based nonlocal means (NLM) denoising technique that can effectively reduce noise in MRI data and improve tissue characterization.
METHODS: The proposed 3D NLM algorithm uses a structure tensor to characterize information around tissue boundaries. The similarity weight of a pixel (or patch), which determines its contribution to the denoising process, is determined by the intensity and the structure tensor simultaneously. Similarity of structure tensors is computed using an affine-invariant Riemannian metric, which compares tensor properties more comprehensively and avoids subsequent orientation inaccuracy. The proposed method is further extended for denoising high-dimensional MRI data, such as diffusion-weighted MRI, and to handle Rician noise corruption so that the denoising effect is further enhanced.
RESULTS: The proposed method was evaluated on both simulated datasets and multiple modalities of real 3D MRI datasets. Comparisons with related state-of-the-art algorithms demonstrated that this method improves denoising performance qualitatively and quantitatively.
CONCLUSIONS: High-order structure information of 3D MRI was characterized by a 3D structure tensor and compared for NLM denoising in a Riemannian space. Experiments with simulated and real human MRI data demonstrate the great potential of the proposed technique for routine clinical use.
87. Fu Z, Chan SC, Di X, Biswal B, Zhang Z. Adaptive covariance estimation of non-stationary processes and its application to infer dynamic connectivity from fMRI. IEEE Trans Biomed Circuits Syst 2014;8:228-239. PMID: 24760946. PMCID: PMC10716865. DOI: 10.1109/tbcas.2014.2306732.
Abstract
Time-varying covariance is an important metric to measure the statistical dependence between non-stationary biological processes. Time-varying covariance is conventionally estimated from short-time data segments within a window having a certain bandwidth, but it is difficult to choose an appropriate bandwidth to estimate covariance with different degrees of non-stationarity. This paper introduces a local polynomial regression (LPR) method to estimate time-varying covariance and performs an asymptotic analysis of the LPR covariance estimator to show that both the estimation bias and variance are functions of the bandwidth and there exists an optimal bandwidth to minimize the mean square error (MSE) locally. A data-driven variable bandwidth selection method, namely the intersection of confidence intervals (ICI), is adopted in LPR for adaptively determining the local optimal bandwidth that minimizes the MSE. Experimental results on simulated signals show that the LPR-ICI method can achieve robust and reliable performance in estimating time-varying covariance with different degrees of variations and under different noise scenarios, making it a powerful tool to study the dynamic relationship between non-stationary biomedical signals. Further, we apply the LPR-ICI method to estimate time-varying covariance of functional magnetic resonance imaging (fMRI) signals in a visual task for the inference of dynamic functional brain connectivity. The results show that the LPR-ICI method can effectively capture the transient connectivity patterns from fMRI.
89. A Contrast Enhancement Framework with JPEG Artifacts Suppression. Computer Vision – ECCV 2014, 2014. DOI: 10.1007/978-3-319-10605-2_12.
91. Zhang X, Xiong R, Fan X, Ma S, Gao W. Compression artifact reduction by overlapped-block transform coefficient estimation with block similarity. IEEE Trans Image Process 2013;22:4613-4626. PMID: 23893722. DOI: 10.1109/tip.2013.2274386.
Abstract
Block transform coded images usually suffer from annoying artifacts at low bit rates, caused by the coarse quantization of transform coefficients. In this paper, we propose a new method to reduce compression artifacts by overlapped-block transform coefficient estimation from non-local blocks. In the proposed method, the discrete cosine transform coefficients of each block are estimated by adaptively fusing two predictions according to their reliabilities. One prediction comprises the quantized coefficient values decoded from the compressed bitstream, whose reliability is determined by the quantization steps. The other prediction is the weighted average of the coefficients in non-local blocks, whose reliability depends on the variance of the coefficients in these blocks. The weights, which reflect how effectively the coefficients in non-local blocks predict the original coefficients, are determined by block similarity in the transform domain. To solve the optimization problem, the overlapped blocks are divided into several subsets; each subset contains non-overlapping blocks covering the whole image and is optimized independently. The overall optimization is thereby reduced to a set of sub-optimization problems, which can be solved easily. Finally, we provide a strategy for parameter selection based on the compression level. Experimental results show that the proposed method can remarkably reduce compression artifacts and significantly improve both the subjective and objective quality of block transform coded images.
92. Akçakaya M, Basha TA, Pflugi S, Foppa M, Kissinger KV, Hauser TH, Nezafat R. Localized spatio-temporal constraints for accelerated CMR perfusion. Magn Reson Med 2013;72:629-639. PMID: 24123058. DOI: 10.1002/mrm.24963.
Abstract
PURPOSE: To develop and evaluate an image reconstruction technique for cardiac MRI (CMR) perfusion that uses localized spatio-temporal constraints.
METHODS: CMR perfusion plays an important role in detecting myocardial ischemia in patients with coronary artery disease. Breath-hold k-t-based image acceleration techniques are typically used in CMR perfusion for superior spatial/temporal resolution and improved coverage. In this study, we propose a novel compressed sensing-based image reconstruction technique for CMR perfusion, with applicability to free-breathing examinations. This technique uses local spatio-temporal constraints by regularizing image patches across a small number of dynamics. The technique was compared with conventional dynamic-by-dynamic reconstruction, sparsity regularization using a temporal principal-component (pc) basis, and zero-filled data in multislice two-dimensional (2D) and three-dimensional (3D) CMR perfusion. Qualitative image scores (1 = poor, 4 = excellent) were used to evaluate the technique in 3D perfusion in 10 patients and five healthy subjects. In four healthy subjects, the proposed technique was also compared with a breath-hold multislice 2D acquisition with parallel imaging in terms of signal intensity curves.
RESULTS: The proposed technique produced images that were superior in terms of spatial and temporal blurring compared with the other techniques, even in free-breathing datasets. The image scores indicated a significant improvement compared with the other techniques in 3D perfusion (x-pc regularization, 2.8 ± 0.5 versus 2.3 ± 0.5; dynamic-by-dynamic, 1.7 ± 0.5; zero-filled, 1.1 ± 0.2). Signal intensity curves indicate similar uptake dynamics between the proposed method with 3D acquisition and the breath-hold multislice 2D acquisition with parallel imaging.
CONCLUSION: The proposed reconstruction uses sparsity regularization based on localized information in both the spatial and temporal domains for highly accelerated CMR perfusion, with potential use in free-breathing 3D acquisitions.
93. Bao L, Robini M, Liu W, Zhu Y. Structure-adaptive sparse denoising for diffusion-tensor MRI. Med Image Anal 2013;17:442-457. DOI: 10.1016/j.media.2013.01.006.
94. Xue F, Luisier F, Blu T. Multi-Wiener SURE-LET deconvolution. IEEE Trans Image Process 2013;22:1954-1968. PMID: 23335668. DOI: 10.1109/tip.2013.2240004.
Abstract
In this paper, we propose a novel deconvolution algorithm based on the minimization of a regularized Stein's unbiased risk estimate (SURE), which is a good estimate of the mean squared error. We linearly parametrize the deconvolution process by using multiple Wiener filters as elementary functions, followed by undecimated Haar-wavelet thresholding. Due to the quadratic nature of SURE and the linear parametrization, the deconvolution problem finally boils down to solving a linear system of equations, which is very fast and exact. The linear coefficients, i.e., the solution of the linear system of equations, constitute the best approximation of the optimal processing on the Wiener-Haar-threshold basis that we consider. In addition, the proposed multi-Wiener SURE-LET approach is applicable for both periodic and symmetric boundary conditions, and can thus be used in various practical scenarios. The very competitive (both in computation time and quality) results show that the proposed algorithm, which can be interpreted as a kind of nonlinear Wiener processing, can be used as a basic tool for building more sophisticated deconvolution algorithms.
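An elementary frequency-domain Wiener filter of the kind the method linearly combines is shown below; the SURE-driven learning of the combination weights and the Haar-wavelet thresholding stage are not reproduced, and the scalar regularizer stands in for the noise-to-signal ratio.

```python
# Single frequency-domain Wiener deconvolution filter (one elementary
# function of the multi-Wiener family; reg is an assumed scalar stand-in
# for the noise-to-signal ratio).
import numpy as np

def wiener_deconvolve(y, h, reg=1e-2):
    """y: blurred noisy image; h: blur kernel zero-padded to y's shape."""
    Y, H = np.fft.fft2(y), np.fft.fft2(h)
    return np.real(np.fft.ifft2(np.conj(H) * Y / (np.abs(H) ** 2 + reg)))

rng = np.random.default_rng(9)
x = np.tile(np.linspace(0, 1, 64), (64, 1))
h = np.zeros_like(x); h[:3, :3] = 1.0 / 9.0            # 3x3 box blur
y = np.real(np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(h)))
y += 0.01 * rng.standard_normal(y.shape)
x_hat = wiener_deconvolve(y, h)
print(np.mean((y - x) ** 2), np.mean((x_hat - x) ** 2))
```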
95. Pyatykh S, Hesser J, Zheng L. Image noise level estimation by principal component analysis. IEEE Trans Image Process 2013;22:687-699. PMID: 23033431. DOI: 10.1109/tip.2012.2221728.
Abstract
The problem of blind noise level estimation arises in many image processing applications, such as denoising, compression, and segmentation. In this paper, we propose a new noise level estimation method on the basis of principal component analysis of image blocks. We show that the noise variance can be estimated as the smallest eigenvalue of the image block covariance matrix. Compared with 13 existing methods, the proposed approach shows a good compromise between speed and accuracy. It is at least 15 times faster than methods with similar accuracy, and it is at least two times more accurate than other methods. Our method does not assume the existence of homogeneous areas in the input image and, hence, can successfully process images containing only textures.
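The key observation is directly implementable: rearrange the image into small blocks, form the block covariance matrix, and read the noise variance off its smallest eigenvalue. A minimal sketch follows; the fixed block size and the plain smallest-eigenvalue estimate are simplifications of the paper's iterative block-selection procedure.

# Sketch: estimate additive Gaussian noise std as the square root of the
# smallest eigenvalue of the covariance matrix of image blocks.
import numpy as np

def estimate_noise_std(img, block=5):
    H, W = img.shape
    blocks = np.stack([img[y:y + block, x:x + block].ravel()
                       for y in range(0, H - block + 1, block)
                       for x in range(0, W - block + 1, block)])
    cov = np.cov(blocks, rowvar=False)                # (block^2, block^2) covariance
    smallest = np.linalg.eigvalsh(cov)[0]             # eigvalsh returns ascending order
    return float(np.sqrt(max(smallest, 0.0)))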
Affiliation(s)
- Stanislav Pyatykh
- University Medical Center Mannheim, Heidelberg University, Mannheim, Germany.
|
96
|
Maggioni M, Boracchi G, Foi A, Egiazarian K. Video denoising, deblocking, and enhancement through separable 4-D nonlocal spatiotemporal transforms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2012; 21:3952-3966. [PMID: 22614644 DOI: 10.1109/tip.2012.2199324] [Citation(s) in RCA: 66] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
We propose a powerful video filtering algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher dimensional transform-domain representation of the observations is leveraged to enforce sparsity, and thus regularize the data: 3-D spatiotemporal volumes are constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are then grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e., self-similarity) along the fourth dimension of the group. Collaborative filtering is then realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, the collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original positions in the video. The proposed filtering procedure addresses several video processing applications, such as denoising, deblocking, and enhancement of both grayscale and color data. Experimental results prove the effectiveness of our method in terms of both subjective and objective visual quality, and show that it outperforms the state of the art in video denoising.
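The collaborative-filtering core, transforming a 4-D group of mutually similar spatiotemporal volumes and shrinking its coefficients, can be sketched in a few lines. The sketch below assumes the grouping has already been done and uses a separable 4-D DCT with hard thresholding; motion-compensated block matching, any Wiener-stage refinement, and adaptive aggregation are omitted, and the threshold constant is an assumption.

# Sketch: collaborative filtering of a pre-formed 4-D group of similar
# spatiotemporal volumes via separable 4-D DCT and hard thresholding.
import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter(volumes, sigma, k=2.7):
    """volumes: (N, T, P, P) array of N mutually similar spatiotemporal blocks."""
    group = dctn(volumes, norm='ortho')          # decorrelating separable 4-D transform
    group[np.abs(group) < k * sigma] = 0.0       # hard thresholding (shrinkage)
    return idctn(group, norm='ortho')            # one estimate per stacked volume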
Affiliation(s)
- Matteo Maggioni
- Department of Signal Processing, Tampere University of Technology, Tampere 33720, Finland.
|
97
|
Kim S, Lee E, Hayes MH, Paik J. Multifocusing and depth estimation using a color shift model-based computational camera. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2012; 21:4152-4166. [PMID: 22695352 DOI: 10.1109/tip.2012.2202671] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/01/2023]
Abstract
This paper presents a novel approach to depth estimation using a multiple color-filter aperture (MCA) camera and its application to multifocusing. An image acquired by the MCA camera contains spatially varying misalignment among the RGB color channels, where the direction and length of the misalignment are a function of the distance of an object from the plane of focus. Therefore, if the misalignment is estimated from the MCA output image, multifocusing and depth estimation become possible using a set of image processing algorithms. We first segment the image into multiple clusters of approximately uniform misalignment using a color-based region classification method, and then find a rectangular region that encloses each cluster. For each of the rectangular regions in the RGB color channels, color shifting vectors are estimated using a phase correlation method. After the three clusters are aligned in the direction opposite to the estimated color shifting vectors, the aligned clusters are fused to produce an approximately in-focus image. Because of the finite size of the color-filter apertures, the fused image still contains a certain amount of spatially varying out-of-focus blur, which is removed using a truncated constrained least-squares filter followed by a spatially adaptive artifact-removal filter. Experimental results show that the MCA-based multifocusing method significantly enhances the visual quality of an image containing multiple objects at different distances, and it can be fully or partially incorporated into multifocusing or extended depth-of-field systems. The MCA camera also enables single-camera depth estimation, where the displacement between the multiple apertures plays the role of the baseline in a stereo vision system. Experimental results show that the estimated depth is accurate enough for a variety of vision-based tasks, such as image understanding, description, and robot vision.
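One building block of this pipeline, estimating the shift between two color channels of a region by phase correlation, is easy to isolate. The sketch below is a generic integer-accuracy phase-correlation estimator, not the authors' code; the paper applies such an estimator per cluster and per channel pair.

# Sketch: integer-accuracy phase correlation between two same-size regions.
import numpy as np

def phase_correlation_shift(a, b, eps=1e-8):
    """Returns (dy, dx) such that b is approximately a translated by (dy, dx)."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    r = np.fft.ifft2(F / (np.abs(F) + eps)).real   # normalized cross-power spectrum
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    H, W = a.shape
    # wrap peak coordinates into signed shifts
    return (dy - H if dy > H // 2 else dy,
            dx - W if dx > W // 2 else dx)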
Affiliation(s)
- Sangjin Kim
- Department of Image, Chung-Ang University, Seoul 156-756, Korea.
|
98
|
Lu K, He N, Li L. Nonlocal means-based denoising for medical images. COMPUTATIONAL AND MATHEMATICAL METHODS IN MEDICINE 2012; 2012:438617. [PMID: 22454694 PMCID: PMC3291081 DOI: 10.1155/2012/438617] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 10/20/2011] [Accepted: 11/29/2011] [Indexed: 12/03/2022]
Abstract
Medical images often consist of low-contrast objects corrupted by random noise arising in the image acquisition process, so image denoising is one of the fundamental tasks required by medical image analysis. The nonlocal means (NL-means) method provides a powerful framework for denoising. In this work, we investigate an adaptive denoising scheme based on the patch-based NL-means algorithm for medical image denoising. In contrast with the traditional NL-means algorithm, the proposed adaptive scheme has three unique features. First, we use a restricted local neighbourhood, in which the true intensity of each noisy pixel is estimated from a set of selected neighbouring pixels. Second, the weights are calculated from the similarity between the patch to be denoised and the candidate patches. Finally, we apply a steering kernel to preserve image details. The proposed method has been compared with similar state-of-the-art methods on synthetic and real clinical medical images, showing improved performance in all cases analyzed.
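A minimal sketch of NL-means with a restricted search neighbourhood, the first of the three features above, is given below; the parameter values are illustrative (h assumes intensities in [0, 1]), and the steering-kernel refinement from the paper is not included.

# Sketch: patch-based NL-means restricted to a small search window.
import numpy as np

def nl_means(img, patch=3, search=7, h=0.1):
    img = np.asarray(img, dtype=float)
    p, s = patch // 2, search // 2
    pad = np.pad(img, p + s, mode='reflect')
    out = np.zeros_like(img)
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            yc, xc = y + p + s, x + p + s
            ref = pad[yc - p:yc + p + 1, xc - p:xc + p + 1]
            weights, vals = [], []
            for dy in range(-s, s + 1):              # restricted local neighbourhood
                for dx in range(-s, s + 1):
                    cand = pad[yc + dy - p:yc + dy + p + 1,
                               xc + dx - p:xc + dx + p + 1]
                    d2 = np.mean((ref - cand) ** 2)  # patch similarity
                    weights.append(np.exp(-d2 / h ** 2))
                    vals.append(pad[yc + dy, xc + dx])
            w = np.asarray(weights)
            out[y, x] = np.dot(w, vals) / w.sum()    # weighted average of pixels
    return out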
Affiliation(s)
- Ke Lu
- College of Computing & Communication Engineering, Graduate University of Chinese Academy of Sciences, Beijing 100049, China.
|
99
|
Li S, Fang L, Yin H. An Efficient Dictionary Learning Algorithm and Its Application to 3-D Medical Image Denoising. IEEE Trans Biomed Eng 2012; 59:417-27. [DOI: 10.1109/tbme.2011.2173935] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
100
|
Chen G, Bui TD, Krzyzak A. Denoising of Three-Dimensional Data Cube Using Bivariate Wavelet Shrinking. INT J PATTERN RECOGN 2011. [DOI: 10.1142/s0218001411008725] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
The denoising of a natural signal/image corrupted by Gaussian white noise is a classical problem in signal/image processing; the denoising of high-dimensional data, however, is still in its infancy. In this paper, we extend Sendur and Selesnick's bivariate wavelet thresholding from two-dimensional (2D) image denoising to three-dimensional (3D) data cube denoising. Our study shows that bivariate wavelet thresholding remains valid for 3D data cubes. Experimental results show that bivariate wavelet thresholding on the 3D data cube outperforms 2D bivariate wavelet thresholding applied to each spectral band separately, as well as VisuShrink and Chen and Zhu's 3-scale denoising.
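The bivariate shrinkage rule being extended can be stated in a few lines: each wavelet coefficient is shrunk jointly with its parent at the next coarser scale. The sketch below implements that rule alone, under the assumption that the 3-D wavelet decomposition and the noise/signal standard-deviation estimates are computed elsewhere.

# Sketch: Sendur-Selesnick bivariate shrinkage of a coefficient array w
# given the corresponding parent coefficients wp (upsampled to w's shape).
import numpy as np

def bivariate_shrink(w, wp, sigma_n, sigma):
    """sigma_n: noise std; sigma: local signal std (both positive scalars or arrays)."""
    r = np.sqrt(w ** 2 + wp ** 2)                             # joint magnitude
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    return np.where(r > 0, w * gain / np.maximum(r, 1e-12), 0.0)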
Affiliation(s)
- Guangyi Chen
- Department of Mathematics and Statistics, Concordia University, Montreal, Quebec H3G 1M8, Canada
- Tien D. Bui
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec H3G 1M8, Canada
- Adam Krzyzak
- Department of Computer Science and Software Engineering, Concordia University, Montreal, Quebec H3G 1M8, Canada
|