1
Fathi A, Naghsh-Nilchi AR. Efficient image denoising method based on a new adaptive wavelet packet thresholding function. IEEE Trans Image Process 2012;21:3981-3990. PMID: 22645265. DOI: 10.1109/tip.2012.2200491.
Abstract
This paper proposes a statistically optimal adaptive wavelet packet (WP) thresholding function for image denoising based on the generalized Gaussian distribution. It applies a computationally efficient multilevel WP decomposition to noisy images to obtain the best tree, or optimal wavelet basis, using Shannon entropy. It selects an adaptive, level- and subband-dependent threshold value by analyzing the statistical parameters of the subband coefficients. In the thresholding function, which is based on a maximum a posteriori estimate, modified versions of the dominant coefficients are estimated by optimal linear interpolation between each coefficient and the mean value of the corresponding subband. Experimental results on several test images under different noise intensities show that the proposed algorithm, called OLI-Shrink, yields a better peak signal-to-noise ratio and superior visual quality (measured by the universal image quality index) than standard denoising methods, especially at high noise intensity. It also outperforms some of the best state-of-the-art wavelet-based denoising techniques.
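As a rough illustration of the interpolation idea in the abstract, here is a minimal numpy sketch: coefficients below a threshold are zeroed, and dominant coefficients are pulled toward the subband mean by a linear interpolation weight. The threshold `t` and weight `alpha` are free parameters here, whereas the paper derives them optimally per level and subband; this is a reading of the abstract, not the paper's algorithm.

```python
import numpy as np

def oli_like_threshold(coeffs, t, alpha):
    """Sketch of an OLI-style rule: coefficients with magnitude <= t are
    zeroed; dominant coefficients are linearly interpolated between their
    own value and the subband mean (alpha is a hypothetical free weight;
    the paper derives it optimally)."""
    mean = coeffs.mean()
    out = np.zeros_like(coeffs, dtype=float)
    keep = np.abs(coeffs) > t
    out[keep] = (1.0 - alpha) * coeffs[keep] + alpha * mean
    return out
```

With `alpha = 0` this reduces to plain hard thresholding; larger `alpha` shrinks dominant coefficients toward the subband mean.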
Affiliation(s)
- Abdolhossein Fathi
- Department of Computer Engineering, University of Isfahan, Isfahan 81744, Iran.
2
Aulí-Llinàs F. Stationary probability model for bitplane image coding through local average of wavelet coefficients. IEEE Trans Image Process 2011;20:2153-2165. PMID: 21324777. DOI: 10.1109/tip.2011.2114892.
Abstract
This paper introduces a probability model for the symbols emitted by bitplane image coding engines, conceived from a precise characterization of the signal produced by a wavelet transform. The main insights behind the proposed model are the estimation of a wavelet coefficient's magnitude as the arithmetic mean of its neighbors' magnitudes (the so-called local average), and the assumption that emitted bits are under-complete representations of the underlying signal. The local-average-based probability model is introduced in the framework of JPEG2000. While the resulting system is not JPEG2000 compatible, it preserves all features of the standard. Practical benefits of the model are enhanced coding efficiency, more opportunities for parallelism, and improved spatial scalability.
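The "local average" lends itself to a short sketch. Assuming a 4-connected neighborhood with edge replication at the borders (the paper's exact neighborhood and weighting may differ), each coefficient's magnitude is predicted as the mean of its neighbors' magnitudes:

```python
import numpy as np

def local_average_magnitude(subband):
    """Predict each coefficient's magnitude as the arithmetic mean of the
    magnitudes of its 4-connected neighbors (a plain reading of the paper's
    'local average'; borders are handled by edge replication here)."""
    m = np.abs(subband).astype(float)
    p = np.pad(m, 1, mode='edge')
    # up, down, left, right neighbors of every position
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]) / 4.0
```

A bitplane coder could then condition its symbol probabilities on this predicted magnitude instead of on raw significance contexts.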
Affiliation(s)
- Francesc Aulí-Llinàs
- Department of Information and Communications Engineering, Universitat Autònoma de Barcelona, Bellaterra, Spain.
3
Xu J, Ilgin H, Liu Q, Kassam A, Sclabassi RJ, Chaparro LF, Sun M. Content-based video coding for remote monitoring of neurosurgery. Conf Proc IEEE Eng Med Biol Soc 2004;2004:3136-9. PMID: 17270944. DOI: 10.1109/iembs.2004.1403885.
Abstract
Transmitting high-quality neurophysiology video over the Internet is a challenging problem in telemedicine. We propose a novel content-selective video data processing and compression method based on the discrete wavelet transform (DWT). Our algorithm adapts to intraoperative monitoring video data and scales well with the network bandwidth available in real time. Compared with general-purpose video compression methods, our method offers higher quality within the critical field of neurosurgery.
Affiliation(s)
- Jian Xu
- Laboratory for Computational Neuroscience Department of Electrical Engineering University of Pittsburgh, Pittsburgh, PA 15261, USA
4
Gupta N, Swamy MNS, Plotkin E. Despeckling of medical ultrasound images using data and rate adaptive lossy compression. IEEE Trans Med Imaging 2005;24:743-54. PMID: 15957598. DOI: 10.1109/tmi.2005.847401.
Abstract
A novel technique for despeckling medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution, and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast-detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, a comparison with two two-stage schemes, in which the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of the signal-to-noise ratio and the visual quality.
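The zero-zone idea can be sketched with a generic dead-zone uniform quantizer: coefficients inside the zero-zone map to index 0 (widening that zone to the estimated noise level is what removes speckle), the rest are binned uniformly. The paper adapts `step` and `zero_zone` per subband from the noise and signal statistics; here they are free parameters.

```python
import numpy as np

def deadzone_quantize(coeffs, step, zero_zone):
    """Generic dead-zone quantizer sketch: |c| < zero_zone -> index 0;
    otherwise uniform bins of width `step` starting at the zone edge."""
    idx = np.zeros(coeffs.shape, dtype=int)
    outside = np.abs(coeffs) >= zero_zone
    idx[outside] = np.sign(coeffs[outside]).astype(int) * (
        1 + ((np.abs(coeffs[outside]) - zero_zone) // step).astype(int))
    return idx

def deadzone_dequantize(idx, step, zero_zone):
    """Reconstruct at the midpoint of each nonzero bin; zero-zone -> 0."""
    mag = zero_zone + (np.abs(idx) - 0.5) * step
    return np.where(idx == 0, 0.0, np.sign(idx) * mag)
```

Quantizing and dequantizing thus thresholds and compresses in one pass, which is the "simultaneous despeckling and quantization" the abstract describes.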
Affiliation(s)
- Nikhil Gupta
- Center for Signal Processing and Communications, Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada.
5
Zhang L, Zhang D. Characterization of palmprints by wavelet signatures via directional context modeling. IEEE Trans Syst Man Cybern B Cybern 2004;34:1335-47. PMID: 15484907. DOI: 10.1109/tsmcb.2004.824521.
Abstract
The palmprint is one of the most reliable physiological characteristics that can be used to distinguish between individuals. Current palmprint-based systems are more user-friendly and more cost-effective, and require fewer data signatures, than traditional fingerprint-based identification systems. The principal lines and wrinkles captured in a low-resolution palmprint image provide more than enough information to uniquely identify an individual. This paper presents a palmprint identification scheme that characterizes a palmprint using a set of statistical signatures. The palmprint is first transformed into the wavelet domain, and the directional context of each wavelet subband is defined and computed in order to collect the predominant coefficients of its principal lines and wrinkles. A set of statistical signatures, which includes gravity center, density, spatial dispersivity, and energy, is then defined to characterize the palmprint with the selected directional context values. A classification and identification scheme based on these signatures is subsequently developed. This scheme fully exploits the features of principal lines and prominent wrinkles and achieves satisfactory results. Compared with palmprint verification schemes based on matching line segments or interesting points, the proposed scheme uses a much smaller amount of data signatures. It also provides a convenient classification strategy and more accurate identification.
Affiliation(s)
- Lei Zhang
- Biometrics Research Centre, Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
7
Li X. On exploiting geometric constraint of image wavelet coefficients. IEEE Trans Image Process 2003;12:1378-1387. PMID: 18244695. DOI: 10.1109/tip.2003.818011.
Abstract
In this paper, we investigate the problem of how to exploit the geometric constraint of edges in wavelet-based image coding. The value of studying this problem is the potential coding gain brought by improved probabilistic models of wavelet high-band coefficients. Novel phase-shifting and prediction algorithms are derived in the wavelet space. It is demonstrated that after resolving the phase uncertainty, high-band wavelet coefficients can be better modeled by biased-mean probability models rather than the existing zero-mean ones. In lossy coding, the coding gain brought by the biased-mean model is quantitatively analyzed within the conventional DPCM coding framework. Experimental results show that the proposed phase-shifting and prediction scheme improves both the subjective and objective performance of wavelet-based image coders.
Affiliation(s)
- Xin Li
- Lane Dept. of Comput. Sci. and Electr. Eng., West Virginia Univ., Morgantown, WV 26506-6109, USA.
8
Hsieh MS, Tseng DC. Image subband coding using fuzzy inference and adaptive quantization. IEEE Trans Syst Man Cybern B Cybern 2003;33:509-513. PMID: 18238197. DOI: 10.1109/tsmcb.2003.811131.
Abstract
Wavelet image decomposition generates a hierarchical data structure to represent an image. Recently, a new class of image compression algorithms has been developed that exploits dependencies between the hierarchical wavelet coefficients using zerotrees. This paper presents a fuzzy inference filter for image entropy coding that chooses significant coefficients and zerotree roots in the higher-frequency wavelet subbands. Moreover, an adaptive quantization is proposed to improve the coding performance. Evaluated on standard test images, the proposed approaches are comparable or superior to most state-of-the-art coders. Based on the fuzzy energy judgment, they also achieve excellent performance in combined image compression and watermarking applications.
Affiliation(s)
- Ming-Shing Hsieh
- Inst. of Comput. Sci. & Inf. Eng., Nat. Central Univ., Chung-li, Taiwan
9
Hawwar Y, Reza A. Spatially adaptive multiplicative noise image denoising technique. IEEE Trans Image Process 2002;11:1397-1404. PMID: 18249708. DOI: 10.1109/tip.2002.804526.
Abstract
A new image denoising technique in the wavelet transform domain for multiplicative noise is presented. Unlike most existing techniques, this approach does not require prior modeling of either the image or the noise statistics. It uses the variance of the detail wavelet coefficients to decide whether to smooth or to preserve them. The approach takes advantage of the wavelet transform's property of generating three detail subimages, each providing specific information with a certain feature directivity. This makes it possible to combine the information provided by the different detail subimages to direct the filtering operation. The algorithm uses a hypothesis test based on the F-distribution to decide whether detail wavelet coefficients are due to image-related features or to noise. The effectiveness of the proposed technique is tested for orthogonal as well as biorthogonal mother wavelets in order to study the effect of the smoothing process under different wavelet types.
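The core of such a variance-based decision can be sketched as a simple ratio test: compare the sample variance of a window of detail coefficients against a noise-only reference. The paper uses a proper F-test; `ratio_threshold` below stands in for the F critical value and is a hypothetical parameter.

```python
import numpy as np

def looks_like_feature(window, noise_ref, ratio_threshold):
    """Variance-ratio sketch of the F-test decision: a ratio of sample
    variances well above 1 suggests image structure, near 1 suggests noise.
    ratio_threshold is a stand-in for the F-distribution critical value."""
    r = np.var(window, ddof=1) / np.var(noise_ref, ddof=1)
    return r > ratio_threshold
```

Coefficients in windows flagged as noise would be smoothed; the rest preserved.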
Affiliation(s)
- Yousef Hawwar
- Mobility Solutions Group, Lucent Technol., Whippany, NJ 07981, USA.
10
Rissanen J, Yu B. Coding and compression: a happy union of theory and practice. J Am Stat Assoc 2000. DOI: 10.1080/01621459.2000.10474290.
11
Chang SG, Yu B, Vetterli M. Adaptive wavelet thresholding for image denoising and compression. IEEE Trans Image Process 2000;9:1532-46. PMID: 18262991. DOI: 10.1109/83.862633.
Abstract
The first part of this paper proposes an adaptive, data-driven threshold for image denoising via wavelet soft-thresholding. The threshold is derived in a Bayesian framework, and the prior used on the wavelet coefficients is the generalized Gaussian distribution (GGD) widely used in image processing applications. The proposed threshold is simple and closed-form, and it is adaptive to each subband because it depends on data-driven estimates of the parameters. Experimental results show that the proposed method, called BayesShrink, is typically within 5% of the MSE of the best soft-thresholding benchmark with the image assumed known. It also outperforms SureShrink (Donoho and Johnstone 1994, 1995; Donoho 1995) most of the time. The second part of the paper attempts to further validate claims that lossy compression can be used for denoising. The BayesShrink threshold can aid in the parameter selection of a coder designed with the intention of denoising, thus achieving simultaneous denoising and compression. Specifically, the zero-zone in the quantization step of compression is analogous to the threshold value in the thresholding function. The remaining coder design parameters are chosen based on a criterion derived from Rissanen's minimum description length (MDL) principle. Experiments show that this compression method does indeed remove noise significantly, especially for large noise power. However, it introduces quantization noise and should be used only when bitrate is a concern in addition to denoising.
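The BayesShrink threshold has a well-known closed form, T = sigma_n^2 / sigma_x, with the noise level estimated by the robust median rule on the finest diagonal (HH) subband. A minimal sketch (the subband and HH inputs are plain numpy arrays; the wavelet decomposition itself is omitted, and the all-noise fallback here simply thresholds everything):

```python
import numpy as np

def bayes_shrink_threshold(subband, hh_finest):
    """BayesShrink: T = sigma_n**2 / sigma_x, with
    sigma_n = median(|HH|) / 0.6745 (robust noise estimate) and
    sigma_x = sqrt(max(var(Y) - sigma_n**2, 0))."""
    sigma_n = np.median(np.abs(hh_finest)) / 0.6745
    sigma_x = np.sqrt(max(np.var(subband) - sigma_n ** 2, 0.0))
    if sigma_x == 0.0:                    # subband indistinguishable from noise
        return np.max(np.abs(subband))    # threshold everything
    return sigma_n ** 2 / sigma_x

def soft(x, t):
    """Soft thresholding: shrink each coefficient toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
```

Applying `soft(subband, bayes_shrink_threshold(subband, hh))` per detail subband is the first-part procedure in outline.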
Affiliation(s)
- S G Chang
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA.
12
Chang SG, Yu B, Vetterli M. Spatially adaptive wavelet thresholding with context modeling for image denoising. IEEE Trans Image Process 2000;9:1522-31. PMID: 18262990. DOI: 10.1109/83.862630.
Abstract
The method of wavelet thresholding for removing noise, or denoising, has been researched extensively due to its effectiveness and simplicity. Much of the literature has focused on developing the best uniform threshold or best basis selection. However, not much has been done to make the threshold values adaptive to the spatially changing statistics of images. Such adaptivity can improve the wavelet thresholding performance because it allows additional local information of the image (such as the identification of smooth or edge regions) to be incorporated into the algorithm. This work proposes a spatially adaptive wavelet thresholding method based on context modeling, a common technique used in image compression to adapt the coder to changing image characteristics. Each wavelet coefficient is modeled as a random variable of a generalized Gaussian distribution with an unknown parameter. Context modeling is used to estimate the parameter for each coefficient, which is then used to adapt the thresholding strategy. This spatially adaptive thresholding is extended to the overcomplete wavelet expansion, which yields better results than the orthogonal transform. Experimental results show that spatially adaptive wavelet thresholding yields significantly superior image quality and lower MSE than the best uniform thresholding with the original image assumed known.
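A crude way to make the threshold spatially adaptive, in the spirit of the abstract though much simpler than its context-modeling estimator (which pools coefficients with similar contexts rather than a fixed window), is to estimate a per-coefficient signal variance from a local window and apply t = sigma_n^2 / sigma_x pointwise:

```python
import numpy as np

def pixelwise_thresholds(subband, sigma_n, win=1):
    """Sketch of spatially adaptive thresholding: estimate a signal standard
    deviation per coefficient from a (2*win+1)^2 window, then compute
    t[i, j] = sigma_n**2 / sigma_x[i, j]. An infinite threshold marks
    positions indistinguishable from noise (everything there is zeroed).
    The paper's estimator is context-based; this fixed window is a
    simplification for illustration."""
    h, w = subband.shape
    p = np.pad(subband, win, mode='edge')
    t = np.empty((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            block = p[i:i + 2 * win + 1, j:j + 2 * win + 1]
            var_x = max(np.mean(block ** 2) - sigma_n ** 2, 0.0)
            t[i, j] = sigma_n ** 2 / np.sqrt(var_x) if var_x > 0 else np.inf
    return t
```

Each coefficient is then soft-thresholded with its own t, so smooth regions (small local energy) get aggressive thresholds and edge regions get gentle ones.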
Affiliation(s)
- S G Chang
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, CA 94720, USA.
13
Chrysafis C, Ortega A. Line-based, reduced memory, wavelet image compression. IEEE Trans Image Process 2000;9:378-389. PMID: 18255410. DOI: 10.1109/83.826776.
Abstract
This paper addresses the problem of low memory wavelet image compression. While wavelet or subband coding of images has been shown to be superior to more traditional transform coding techniques, little attention has been paid until recently to the important issue of whether both the wavelet transforms and the subsequent coding can be implemented in low memory without significant loss in performance. We present a complete system to perform low memory wavelet image coding. Our approach is "line-based" in that the images are read line by line and only the minimum required number of lines is kept in memory. There are two main contributions of our work. First, we introduce a line-based approach for the implementation of the wavelet transform, which yields the same results as a "normal" implementation, but where, unlike prior work, we address memory issues arising from the need to synchronize encoder and decoder. Second, we propose a novel context-based encoder which requires no global information and stores only a local set of wavelet coefficients. This low memory coder achieves performance comparable to state of the art coders at a fraction of their memory utilization.
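The line-based idea can be made concrete with the simplest possible case: a vertical Haar step that consumes an image one row at a time and buffers exactly one row. A real line-based DWT, as in the paper, buffers a few rows per level of a separable filter bank and must also keep encoder and decoder synchronized; Haar needs just two rows, which keeps the memory argument visible.

```python
import numpy as np

def streaming_haar_rows(lines):
    """Line-based sketch: stream rows in, hold at most ONE row in memory,
    and emit (average, detail) row pairs of an orthonormal vertical Haar
    step. Memory use is independent of image height."""
    buffered = None
    for row in lines:
        row = np.asarray(row, dtype=float)
        if buffered is None:
            buffered = row                           # hold one row
        else:
            avg = (buffered + row) / np.sqrt(2.0)    # vertical low-pass
            det = (buffered - row) / np.sqrt(2.0)    # vertical high-pass
            buffered = None
            yield avg, det
```

Feeding the `avg` rows into another such stage gives the next decomposition level, again with a one-row buffer per stage.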
Affiliation(s)
- C Chrysafis
- Hewlett-Packard Laboratories, Palo Alto, CA 94304, USA.