1. Ates HF, Orchard MT. Spherical coding algorithm for wavelet image compression. IEEE Transactions on Image Processing 2009; 18:1015-1024. PMID: 19342336. DOI: 10.1109/tip.2009.2014502.
Abstract
The recent literature contains many high-performance wavelet coders that use different spatially adaptive coding techniques to exploit the spatial energy compaction property of the wavelet transform. Two crucial issues in adaptive methods are the level of flexibility and the coding efficiency achieved while modeling different image regions and allocating bitrate within the wavelet subbands. In this paper, we introduce the "spherical coder," which provides a new adaptive framework for handling these issues in a simple and effective manner. The coder uses local energy as a direct measure to differentiate between parts of the wavelet subband and to decide how to allocate the available bitrate. As local energy becomes available at finer resolutions, i.e., in smaller size windows, the coder automatically updates its decisions about how to spend the bitrate. We use a hierarchical set of variables to specify and code the local energy up to the highest resolution, i.e., the energy of individual wavelet coefficients. The overall scheme is nonredundant, meaning that the subband information is conveyed using this equivalent set of variables without the need for any side parameters. Despite its simplicity, the algorithm produces PSNR results that are competitive with the state-of-the-art coders in the literature.
Affiliation(s)
- Hasan F Ates
- Department of Electronics Engineering, Isik University, Sile, Istanbul, Turkey.

2. An J, Cai Z. Embedded trellis coded quantization for JPEG2000. IEEE Transactions on Image Processing 2008; 17:1570-1573. PMID: 18701395. DOI: 10.1109/tip.2008.2001157.
Abstract
A modified embedded trellis coded quantization (TCQ) for JPEG2000 is presented in this paper. The method for approximately inverting TCQ in the absence of the least significant bits is improved. Experimental results, presented using the optimal rate control algorithm and different embedded TCQ formulations, show that modified embedded TCQ yields significant performance improvement compared to the original one in JPEG2000.
Affiliation(s)
- Jicheng An
- College of Information Science and Engineering, Central South University, Changsha, Hunan, China.

3. Gaubatz MD, Hemami SS. Efficient entropy estimation based on doubly stochastic models for quantized wavelet image data. IEEE Transactions on Image Processing 2007; 16:967-981. PMID: 17405430. DOI: 10.1109/tip.2007.891784.
Abstract
Under a rate constraint, wavelet-based image coding involves strategic discarding of information such that the remaining data can be described with a given amount of rate. In a practical coding system, this task requires knowledge of the relationship between quantization step size and compressed rate for each group of wavelet coefficients, the R-Q curve. A common approach to this problem is to fit each subband with a scalar probability distribution and compute entropy estimates based on the model. This approach is not effective at rates below 1.0 bit per pixel because the distributions of quantized data do not reflect the dependencies in coefficient magnitudes. These dependencies can be addressed with doubly stochastic models, which have been previously proposed to characterize more localized behavior, though there are tradeoffs between storage, computation time, and accuracy. Using a doubly stochastic generalized Gaussian model, it is demonstrated that the relationship between step size and rate is accurately described by a low degree polynomial in the logarithm of the step size. Based on this observation, an entropy estimation scheme is presented which offers an excellent tradeoff between speed and accuracy; after a simple data-gathering step, estimates are computed instantaneously by evaluating a single polynomial for each group of wavelet coefficients quantized with the same step size. These estimates are on average within 3% of a desired target rate for several state-of-the-art coders.
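The estimation recipe this abstract describes — gather a few (step size, rate) probes per coefficient group, fit a low-degree polynomial in log step size, then answer every later R-Q query with one polynomial evaluation — can be sketched as follows. The probe values below are made-up illustrative numbers, not measurements from the paper:

```python
import numpy as np

# Hypothetical probe data for one subband: measured entropy in bits per
# coefficient at a few quantizer step sizes (the data-gathering step).
probe_steps = np.array([2.0, 4.0, 8.0, 16.0, 32.0])
probe_rates = np.array([3.1, 2.2, 1.4, 0.8, 0.35])

# The paper's observation: rate is well described by a low-degree polynomial
# in log(step size), so one fit replaces repeated entropy computations.
coeffs = np.polyfit(np.log(probe_steps), probe_rates, deg=2)

def estimate_rate(step_size):
    """Instantaneous R-Q estimate: a single polynomial evaluation."""
    return float(np.polyval(coeffs, np.log(step_size)))
```

In a real coder the probe rates would come from entropy measurements of the quantized subband; here the fit simply smooths the hypothetical probes.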
Affiliation(s)
- Matthew D Gaubatz
- Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA.

4. Lam PM, Leung CS, Wong TT. Noise-resistant fitting for spherical harmonics. IEEE Transactions on Visualization and Computer Graphics 2006; 12:254-265. PMID: 16509384. DOI: 10.1109/tvcg.2006.34.
Abstract
Spherical harmonic (SH) basis functions have been widely used for representing spherical functions in modeling various illumination properties. They can compactly represent low-frequency spherical functions. However, when the unconstrained least square method is used for estimating the SH coefficients of a hemispherical function, the magnitude of these SH coefficients could be very large. Hence, the rendering result is very sensitive to quantization noise (introduced by modern texture compression like S3TC, IEEE half float data type on GPU, or other lossy compression methods) in these SH coefficients. Our experiments show that, as the precision of SH coefficients is reduced, the rendered images may exhibit annoying visual artifacts. To reduce the noise sensitivity of the SH coefficients, this paper first discusses how the magnitude of SH coefficients affects the rendering result when there is quantization noise. Then, two fast fitting methods for estimating the noise-resistant SH coefficients are proposed. They can effectively control the magnitude of the estimated SH coefficients and, hence, suppress the rendering artifacts. Both statistical and visual results confirm our theory.
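The remedy the abstract describes — keeping the fitted coefficients small so quantization noise does less damage — can be illustrated with Tikhonov (ridge) regularization, a standard stand-in for the paper's fast fitting methods. The 1-D cosine basis on a half-interval below merely imitates SH fitting on a hemisphere; all numbers are illustrative:

```python
import numpy as np

# Toy stand-in for hemispherical SH fitting: approximate a function sampled
# on a half-interval with 15 cosine basis functions (illustrative, not SH).
m, n = 40, 15
theta = np.linspace(0.0, np.pi / 2, m)
B = np.stack([np.cos(k * theta) for k in range(n)], axis=1)  # basis matrix
f = np.exp(-3.0 * theta)                                     # target function

# Unconstrained least squares vs. a damped (Tikhonov-regularized) fit.
c_ls = np.linalg.lstsq(B, f, rcond=None)[0]
lam = 1e-3                                                   # damping weight
c_reg = np.linalg.solve(B.T @ B + lam * np.eye(n), B.T @ f)

# Regularization trades a small increase in fitting error for smaller
# coefficient magnitudes; smaller coefficients lose proportionally less to
# fixed-precision storage (half floats, compressed textures).
norm_ls, norm_reg = np.linalg.norm(c_ls), np.linalg.norm(c_reg)
resid_reg = np.linalg.norm(B @ c_reg - f)
```

The ridge solution's norm is never larger than the unconstrained one's, which is the magnitude-control property the paper exploits.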
Affiliation(s)
- Ping-Man Lam
- Department of Electronic Engineering, City University of Hong Kong, Kowloon.

5. Gupta N, Swamy MNS, Plotkin E. Despeckling of medical ultrasound images using data and rate adaptive lossy compression. IEEE Transactions on Medical Imaging 2005; 24:743-754. PMID: 15957598. DOI: 10.1109/tmi.2005.847401.
Abstract
A novel technique for despeckling the medical ultrasound images using lossy compression is presented. The logarithm of the input image is first transformed to the multiscale wavelet domain. It is then shown that the subband coefficients of the log-transformed ultrasound image can be successfully modeled using the generalized Laplacian distribution. Based on this modeling, a simple adaptation of the zero-zone and reconstruction levels of the uniform threshold quantizer is proposed in order to achieve simultaneous despeckling and quantization. This adaptation is based on: (1) an estimate of the corrupting speckle noise level in the image; (2) the estimated statistics of the noise-free subband coefficients; and (3) the required compression rate. The Laplacian distribution is considered as a special case of the generalized Laplacian distribution and its efficacy is demonstrated for the problem under consideration. Context-based classification is also applied to the noisy coefficients to enhance the performance of the subband coder. Simulation results using a contrast detail phantom image and several real ultrasound images are presented. To validate the performance of the proposed scheme, comparison with two two-stage schemes, wherein the speckled image is first filtered and then compressed using the state-of-the-art JPEG2000 encoder, is presented. Experimental results show that the proposed scheme works better, both in terms of the signal to noise ratio and the visual quality.
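A minimal sketch of the core idea, under simplifying assumptions: multiplicative speckle becomes additive after the log transform, and a uniform threshold (deadzone) quantizer applied to wavelet detail coefficients despeckles and quantizes in one step. A single Haar stage and hand-picked noise multipliers stand in for the paper's multiscale transform and adaptive zone selection:

```python
import math

def deadzone(x, step, zone):
    """Uniform threshold quantizer: coefficients smaller than `zone`
    (mostly speckle) are set to zero, the rest are quantized."""
    if abs(x) < zone:
        return 0.0
    q = math.floor((abs(x) - zone) / step) + 1
    return math.copysign(zone + (q - 0.5) * step, x)

# Piecewise-constant "image" hit by multiplicative speckle.
clean = [50.0] * 4 + [120.0] * 4
noise = [1.2, 0.8, 1.3, 0.75, 1.15, 0.9, 1.25, 0.85]  # made-up multipliers
log_obs = [math.log(c * w) for c, w in zip(clean, noise)]

# One Haar step on the log image: speckle energy lands in the details.
avgs = [(log_obs[i] + log_obs[i + 1]) / 2 for i in range(0, 8, 2)]
dets = [(log_obs[i] - log_obs[i + 1]) / 2 for i in range(0, 8, 2)]
dets_q = [deadzone(d, step=0.2, zone=0.3) for d in dets]

# Inverse Haar step plus exponentiation yields the despeckled samples.
recon = []
for a, d in zip(avgs, dets_q):
    recon += [math.exp(a + d), math.exp(a - d)]
```

With this zone width every noise-only detail coefficient falls in the zero zone, so the reconstructed samples cluster much more tightly around the true levels than the speckled input does.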
Affiliation(s)
- Nikhil Gupta
- Center for Signal Processing and Communications, Department of Electrical and Computer Engineering, Concordia University, Montreal, QC H3G 1M8, Canada.

6. Liu Z, Karam LJ. Mutual information-based analysis of JPEG2000 contexts. IEEE Transactions on Image Processing 2005; 14:411-422. PMID: 15825477. DOI: 10.1109/tip.2004.841199.
Abstract
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
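The paper's starting point — merging contexts loses mutual information unless their conditional distributions match — is easy to check numerically. A toy sketch with two binary contexts (the observation counts are invented for illustration):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(C; X) in bits from a list of (context, symbol) observations."""
    n = len(pairs)
    pc, px, pcx = Counter(), Counter(), Counter(pairs)
    for c, x in pairs:
        pc[c] += 1
        px[x] += 1
    return sum(k / n * math.log2((k / n) / ((pc[c] / n) * (px[x] / n)))
               for (c, x), k in pcx.items())

# Two contexts with clearly different conditional bit distributions.
data = [(0, 1)] * 80 + [(0, 0)] * 20 + [(1, 1)] * 30 + [(1, 0)] * 70
merged = [(0, x) for _, x in data]   # collapse both contexts into one

i_orig = mutual_information(data)
i_merged = mutual_information(merged)
```

Merging the two contexts here wipes out all the information the context carried about the encoded bit; merging two contexts with identical conditionals would leave the mutual information unchanged, which is the criterion behind the paper's Lloyd-style context classification.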
Affiliation(s)
- Zhen Liu
- Qualcomm, Inc., San Diego, CA 92121-1714, USA.

7. Abdelwahab MM, Mikhael WB. Multistage classification and recognition that employs vector quantization coding and criteria extracted from nonorthogonal and preprocessed signal representations. Applied Optics 2004; 43:416-424. PMID: 14735960. DOI: 10.1364/ao.43.000416.
Abstract
Classification decision tree algorithms have recently been used in pattern-recognition problems. In this paper, we propose a self-designing system that uses the classification tree algorithms and that is capable of recognizing a large number of signals. Preprocessing techniques are used to make the recognition process more effective. A combination of the original, as well as the preprocessed, signals is projected into different transform domains. Enormous sets of criteria that characterize the signals can be developed from the signal representations in these domains. At each node of the classification tree, an appropriately selected criterion is optimized with respect to desirable performance features such as complexity and noise immunity. The criterion is then employed in conjunction with a vector quantizer to divide the signals presented at a particular node in that stage into two approximately equal groups. When the process is complete, each signal is represented by a unique composite binary word index, which corresponds to the signal path through the tree, from the input to one of the terminal nodes of the tree. Experimental results verify the excellent classification accuracy of this system. High performance is maintained for both noisy and corrupt data.
Affiliation(s)
- Manal M Abdelwahab
- School of Electrical Engineering and Computer Science, University of Central Florida, Orlando, Florida 32816-2450, USA.

8. Wang Z, Lee Y, Leung CS, Wong TT, Zhu YS. An improved optimal bit allocation method for sub-band coding. Pattern Recognition Letters 2003. DOI: 10.1016/s0167-8655(03)00161-2.
9. Li X. On exploiting geometric constraint of image wavelet coefficients. IEEE Transactions on Image Processing 2003; 12:1378-1387. PMID: 18244695. DOI: 10.1109/tip.2003.818011.
Abstract
In this paper, we investigate the problem of how to exploit geometric constraint of edges in wavelet-based image coding. The value of studying this problem is the potential coding gain brought by improved probabilistic models of wavelet high-band coefficients. Novel phase shifting and prediction algorithms are derived in the wavelet space. It is demonstrated that after resolving the phase uncertainty, high-band wavelet coefficients can be better modeled by biased-mean probability models rather than the existing zero-mean ones. In lossy coding, the coding gain brought by the biased-mean model is quantitatively analyzed within the conventional DPCM coding framework. Experimental results show that the proposed phase shifting and prediction scheme improves both subjective and objective performance of wavelet-based image coders.
Affiliation(s)
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506-6109, USA.

10. Cao L, Chen CW. Content-based multiple bitstream image transmission over noisy channels. IEEE Transactions on Image Processing 2002; 11:1305-1313. PMID: 18249700. DOI: 10.1109/tip.2002.804525.
Abstract
In this paper, we propose a novel combined source and channel coding scheme for image transmission over noisy channels. The main feature of the proposed scheme is a systematic decomposition of image sources so that unequal error protection can be applied according to not only bit error sensitivity but also visual content importance. The wavelet transform is adopted to hierarchically decompose the image. The association between the wavelet coefficients and what they represent spatially in the original image is fully exploited so that wavelet blocks are classified based on their corresponding image content. The classification produces wavelet blocks in each class with similar content and statistics, therefore enables high performance source compression using the set partitioning in hierarchical trees (SPIHT) algorithm. To combat the channel noise, an unequal error protection strategy with rate-compatible punctured convolutional/cyclic redundancy check (RCPC/CRC) codes is implemented based on the bit contribution to both peak signal-to-noise ratio (PSNR) and visual quality. At the receiving end, a postprocessing method making use of the SPIHT decoding structure and the classification map is developed to restore the degradation due to the residual error after channel decoding. Experimental results show that the proposed scheme is indeed able to provide protection both for the bits that are more sensitive to errors and for the more important visual content under a noisy transmission environment. In particular, the reconstructed images illustrate consistently better visual quality than using the single-bitstream-based schemes.
Affiliation(s)
- Lei Cao
- Department of Electrical Engineering, University of Missouri-Columbia, Columbia, MO 65211, USA.

11. Moulin P, Mihçak MK. A framework for evaluating the data-hiding capacity of image sources. IEEE Transactions on Image Processing 2002; 11:1029-1042. PMID: 18249724. DOI: 10.1109/tip.2002.802512.
Abstract
An information-theoretic model for image watermarking and data hiding is presented in this paper. Previous theoretical results are used to characterize the fundamental capacity limits of image watermarking and data-hiding systems. Capacity is determined by the statistical model used for the host image, by the distortion constraints on the data hider and the attacker, and by the information available to the data hider, to the attacker, and to the decoder. We consider autoregressive, block-DCT, and wavelet statistical models for images and compute data-hiding capacity for compressed and uncompressed host-image sources. Closed-form expressions are obtained under sparse-model approximations. Models for geometric attacks and distortion measures that are invariant to such attacks are considered.
Affiliation(s)
- Pierre Moulin
- Beckman Institute, Coordinated Science Laboratory and Electrical and Computer Engineering Department, University of Illinois, Urbana, IL 61801, USA.

12. Berghorn W, Boskamp T, Lang M, Peitgen HO. Fast variable run-length coding for embedded progressive wavelet-based image compression. IEEE Transactions on Image Processing 2001; 10:1781-1790. PMID: 18255518. DOI: 10.1109/83.974563.
Abstract
Run-length coding has attracted much attention in wavelet-based image compression because of its simplicity and potentially low complexity. Its main drawback is inferior rate-distortion (RD) performance compared to the state-of-the-art coder SPIHT. In this paper, we concentrate on the embedded progressive run-length code of Tian and Wells (1996, 1998). We consider significance sequences drawn from the scan in the dominant pass. It turns out that self-similar curves for scanning the dominant pass increase the compression efficiency significantly. This is a consequence of the correlation of direct neighbors in the wavelet domain. This dependence can be better exploited by using groups of coefficients, similar to the SPIHT algorithm. This results in a new and very fast coding algorithm, which shows performance similar to the state-of-the-art coder SPIHT, but with lower complexity and small and fixed memory overhead.
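Not the Tian-Wells code itself, but a minimal illustration of why run-length coding suits the sparse significance sequences produced by a dominant-pass scan:

```python
def encode_runs(significance):
    """Zero-run-length encode a significance sequence: one run length per
    significant coefficient, plus the trailing insignificant run."""
    runs, run = [], 0
    for bit in significance:
        if bit:
            runs.append(run)
            run = 0
        else:
            run += 1
    runs.append(run)          # trailing zeros with no terminating one
    return runs

def decode_runs(runs):
    """Exact inverse of encode_runs."""
    bits = []
    for r in runs[:-1]:
        bits.extend([0] * r + [1])
    bits.extend([0] * runs[-1])
    return bits

# A sparse significance pass, as produced by scanning a wavelet subband:
sig = [0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0]
runs = encode_runs(sig)
```

The sparser the significance map (and a good self-similar scan order makes runs longer by keeping correlated neighbors together), the fewer run symbols are needed relative to the raw bit count.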
Affiliation(s)
- W Berghorn
- Center for Medical Diagnostic Systems and Visualization, Bremen, Germany.

13. Liu J, Moulin P. Information-theoretic analysis of interscale and intrascale dependencies between image wavelet coefficients. IEEE Transactions on Image Processing 2001; 10:1647-1658. PMID: 18255507. DOI: 10.1109/83.967393.
Abstract
This paper presents an information-theoretic analysis of statistical dependencies between image wavelet coefficients. The dependencies are measured using mutual information, which has a fundamental relationship to data compression, estimation, and classification performance. Mutual information is computed analytically for several statistical image models, and depends strongly on the choice of wavelet filters. In the absence of an explicit statistical model, a method is studied for reliably estimating mutual information from image data. The validity of the model-based and data-driven approaches is assessed on representative real-world photographic images. Our results are consistent with empirical observations that coding schemes exploiting either interscale or intrascale dependencies alone perform very well, whereas taking both into account does not significantly improve coding performance. A similar observation applies to other image processing applications.
Affiliation(s)
- J Liu
- Xerox Palo Alto Research Center, Palo Alto, CA 94304, USA.

14. Bilgin A, Zweig G, Marcellin MW. Three-dimensional image compression with integer wavelet transforms. Applied Optics 2000; 39:1799-1814. PMID: 18345077. DOI: 10.1364/ao.39.001799.
Abstract
A three-dimensional (3-D) image-compression algorithm based on integer wavelet transforms and zerotree coding is presented. The embedded coding of zerotrees of wavelet coefficients (EZW) algorithm is extended to three dimensions, and context-based adaptive arithmetic coding is used to improve its performance. The resultant algorithm, 3-D CB-EZW, efficiently encodes 3-D image data by the exploitation of the dependencies in all dimensions, while enabling lossy and lossless decompression from the same bit stream. Compared with the best available two-dimensional lossless compression techniques, the 3-D CB-EZW algorithm produced averages of 22%, 25%, and 20% decreases in compressed file sizes for computed tomography, magnetic resonance, and Airborne Visible Infrared Imaging Spectrometer images, respectively. The progressive performance of the algorithm is also compared with other lossy progressive-coding algorithms.
Affiliation(s)
- A Bilgin
- Department of Electrical and Computer Engineering, University of Arizona, Tucson, Arizona 85721, USA.

15. Servetto SD, Ramchandran K, Vaishampayan VA, Nahrstedt K. Multiple description wavelet based image coding. IEEE Transactions on Image Processing 2000; 9:813-826. PMID: 18255453. DOI: 10.1109/83.841528.
Abstract
We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet is lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, but some of the data may never reach the receiver. The proposed algorithms are based on a combination of multiple description scalar quantizers with techniques successfully applied to the construction of some of the most efficient subband coders. A given image is encoded into multiple independent packets of roughly equal length. When packets are lost, the quality of the approximation computed at the receiver depends only on the number of packets received, but does not depend on exactly which packets are actually received. When compared with previously reported results on the performance of robust image coders based on multiple descriptions, on standard test images, our coders attain similar PSNR values using typically about 50-60% of the bit rate required by these other state-of-the-art coders, while at the same time providing significantly more freedom in the mechanism for allocation of redundancy among descriptions.
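One classic toy construction of two balanced descriptions — a pair of staggered uniform quantizers, not the paper's actual coder — shows the key property: reconstruction quality depends only on how many descriptions arrive, not on which one.

```python
import math

DELTA = 1.0   # base quantizer step size (illustrative)

def encode_two_descriptions(x):
    """Two staggered uniform quantizers; either index alone gives a coarse
    cell, both together pin x down to a half-width cell."""
    qa = math.floor(x / DELTA)          # description A: cells [q, q+1)
    qb = math.floor(x / DELTA + 0.5)    # description B: cells [q-0.5, q+0.5)
    return qa, qb

def reconstruct(qa=None, qb=None):
    """Midpoint reconstruction from whatever descriptions were received."""
    if qa is not None and qb is not None:
        lo = max(qa * DELTA, (qb - 0.5) * DELTA)
        hi = min((qa + 1) * DELTA, (qb + 0.5) * DELTA)
        return (lo + hi) / 2            # intersection cell midpoint
    if qa is not None:
        return (qa + 0.5) * DELTA
    return qb * DELTA

xs = [0.07 * i for i in range(100)]     # test samples

def mse(recons):
    return sum((r - x) ** 2 for r, x in zip(recons, xs)) / len(xs)

both = [reconstruct(*encode_two_descriptions(x)) for x in xs]
only_a = [reconstruct(qa=encode_two_descriptions(x)[0]) for x in xs]
only_b = [reconstruct(qb=encode_two_descriptions(x)[1]) for x in xs]
```

Losing either description degrades quality gracefully and symmetrically; receiving both halves the effective cell width, so the joint distortion is roughly a quarter of the single-description distortion.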
Affiliation(s)
- S D Servetto
- Laboratoire de Communications Audiovisuelles, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.

16. Chrysafis C, Ortega A. Line-based, reduced memory, wavelet image compression. IEEE Transactions on Image Processing 2000; 9:378-389. PMID: 18255410. DOI: 10.1109/83.826776.
Abstract
This paper addresses the problem of low memory wavelet image compression. While wavelet or subband coding of images has been shown to be superior to more traditional transform coding techniques, little attention has been paid until recently to the important issue of whether both the wavelet transforms and the subsequent coding can be implemented in low memory without significant loss in performance. We present a complete system to perform low memory wavelet image coding. Our approach is "line-based" in that the images are read line by line and only the minimum required number of lines is kept in memory. There are two main contributions of our work. First, we introduce a line-based approach for the implementation of the wavelet transform, which yields the same results as a "normal" implementation, but where, unlike prior work, we address memory issues arising from the need to synchronize encoder and decoder. Second, we propose a novel context-based encoder which requires no global information and stores only a local set of wavelet coefficients. This low memory coder achieves performance comparable to state of the art coders at a fraction of their memory utilization.
Affiliation(s)
- C Chrysafis
- Hewlett-Packard Laboratories, Palo Alto, CA 94304, USA.

17. Munteanu A, Cornelis J, Van der Auwera G, Cristea P. Wavelet image compression--the quadtree coding approach. IEEE Transactions on Information Technology in Biomedicine 1999; 3:176-185. PMID: 10719481. DOI: 10.1109/4233.788579.
Abstract
Perfect reconstruction, quality scalability, and region-of-interest coding are basic features needed for the image compression schemes used in telemedicine applications. This paper proposes a new wavelet-based embedded compression technique that efficiently exploits the intraband dependencies and uses a quadtree-based approach to encode the significance maps. The algorithm produces a losslessly compressed embedded data stream, supports quality scalability, and permits region-of-interest coding. Moreover, experimental results obtained on various images show that the proposed algorithm provides competitive lossless/lossy compression results. The proposed technique is well suited for telemedicine applications that require fast interactive handling of large image sets, over networks with limited and/or variable bandwidth.
Affiliation(s)
- A Munteanu
- Electronics and Information Processing Department, Vrije Universiteit Brussel, Belgium.

18. Bilgin A, Sementilli PJ, Marcellin MW. Progressive image coding using trellis coded quantization. IEEE Transactions on Image Processing 1999; 8:1638-1643. PMID: 18267438. DOI: 10.1109/83.799891.
Abstract
In this work, we present coding techniques that enable progressive transmission when trellis coded quantization (TCQ) is applied to wavelet coefficients. A method for approximately inverting TCQ in the absence of least significant bits is developed. Results are presented using different rate allocation strategies and different entropy coders. The proposed wavelet-TCQ coder yields excellent coding efficiency while supporting progressive modes analogous to those available in JPEG.
19. Jafarkhani H, Farvardin N. Fast reconstruction of subband-decomposed progressively transmitted signals. IEEE Transactions on Image Processing 1999; 8:891-898. PMID: 18267502. DOI: 10.1109/83.772225.
Abstract
We propose a fast reconstruction method for a subband-decomposed, progressive signal coding system. We show that unlike the conventional approach which requires a fixed computational complexity, the computational complexity of the proposed approach is proportional to the number of refined coefficients at each level of progression. Therefore, unrefined coefficients do not add to the computational complexity of the proposed scheme. It is shown, through specific examples, that the proposed approach can lead to significant reductions in reconstruction complexity. Furthermore, the proposed approach provides the capability for an online updating of the reconstructed image based on receiving the refinement of each coefficient.
Affiliation(s)
- H Jafarkhani
- Department of Electrical Engineering, University of Maryland, College Park, MD 20742, USA.

20. Buccigrossi RW, Simoncelli EP. Image compression via joint statistical characterization in the wavelet domain. IEEE Transactions on Image Processing 1999; 8:1688-1701. PMID: 18267447. DOI: 10.1109/83.806616.
Abstract
We develop a probability model for natural images, based on empirical observation of their statistics in the wavelet transform domain. Pairs of wavelet coefficients, corresponding to basis functions at adjacent spatial locations, orientations, and scales, are found to be non-Gaussian in both their marginal and joint statistical properties. Specifically, their marginals are heavy-tailed, and although they are typically decorrelated, their magnitudes are highly correlated. We propose a Markov model that explains these dependencies using a linear predictor for magnitude coupled with both multiplicative and additive uncertainties, and show that it accounts for the statistics of a wide variety of images including photographic images, graphical images, and medical images. In order to directly demonstrate the power of the model, we construct an image coder called EPWIC (embedded predictive wavelet image coder), in which subband coefficients are encoded one bitplane at a time using a nonadaptive arithmetic encoder that utilizes conditional probabilities calculated from the model. Bitplanes are ordered using a greedy algorithm that considers the MSE reduction per encoded bit. The decoder uses the statistical model to predict coefficient values based on the bits it has received. Despite the simplicity of the model, the rate-distortion performance of the coder is roughly comparable to the best image coders in the literature.
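The central empirical observation above — coefficients at adjacent scales are nearly decorrelated in value yet strongly correlated in magnitude — can be reproduced with a simple multiplicative toy model (all distributions and parameters here are illustrative, not fitted to images):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000

# Parent magnitudes: heavy-tailed, a stand-in for a coarse-scale subband.
parent = rng.laplace(scale=4.0, size=n)

# Child = linear predictor of the parent magnitude, times multiplicative
# noise, with an independent random sign: signs decorrelate the raw values
# while the magnitudes stay coupled, as the paper's Markov model posits.
mult = rng.lognormal(mean=0.0, sigma=0.5, size=n)
sign = rng.choice([-1.0, 1.0], size=n)
child = sign * 0.6 * np.abs(parent) * mult

raw_corr = np.corrcoef(parent, child)[0, 1]                   # near zero
mag_corr = np.corrcoef(np.abs(parent), np.abs(child))[0, 1]   # clearly positive
```

A conditional coder that predicts child magnitudes from parents (as EPWIC's conditional probabilities do) can exploit `mag_corr` even though `raw_corr` suggests the coefficients are uncorrelated.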
21. Pan J. Vector-scalar classification for transform image coding. IEEE Transactions on Image Processing 1999; 8:1175-1182. PMID: 18267535. DOI: 10.1109/83.784430.
Abstract
This paper introduces vector-scalar classification (VSC) for discrete cosine transform (DCT) coding of images. Two main characteristics of VSC differentiate it from previously proposed classification methods. First, pattern classification is effectively performed in the energy domain of the DCT subvectors using vector quantization. Second, the subvectors, instead of the DCT vectors, are mapped into a prescribed number of classes according to a pattern-to-class link established by scalar quantization. Simulation results demonstrate that the DCT coding systems based on VSC are superior to the other proposed DCT coding systems and are competitive compared to the best subband and wavelet coding systems reported in the literature.
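A minimal sketch of classifying DCT subvectors in the energy domain, assuming 1-D blocks and a uniform scalar quantizer on log-energy as the pattern-to-class link. The subvector boundaries in `splits` and the class count are illustrative, and the paper performs the pattern classification itself with vector quantization rather than this simplified scalar rule.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II analysis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= np.sqrt(0.5)
    return C

def classify_subvectors(blocks, splits, n_classes):
    """Map each DCT subvector of each block to a class via its log-energy.

    blocks: (N, n) signal blocks; splits: subvector boundaries, e.g. [0, 2, 4, 8].
    Returns an (N, n_subvectors) array of class indices.
    """
    C = dct_matrix(blocks.shape[1])
    coeffs = blocks @ C.T
    energies = np.stack(
        [np.sum(coeffs[:, a:b] ** 2, axis=1)
         for a, b in zip(splits[:-1], splits[1:])], axis=1)
    logE = np.log10(energies + 1e-12)
    lo, hi = logE.min(), logE.max()
    # scalar quantization of log-energy provides the pattern-to-class link
    cls = ((logE - lo) / (hi - lo + 1e-12) * n_classes).astype(int)
    return np.clip(cls, 0, n_classes - 1)
```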
Affiliation(s)
- J Pan
- CommQuest, Encinitas, CA 92024, USA.
22
Yoo Y, Ortega A, Yu B. Image subband coding using context-based classification and adaptive quantization. IEEE Transactions on Image Processing 1999; 8:1702-1715. [PMID: 18267448] [DOI: 10.1109/83.806617]
Abstract
Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus requires a considerable amount of side information for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique that classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, particularly at very low rates. For popular test images, it is comparable or superior to most of the state-of-the-art coders in the literature.
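The backward-adaptive idea, in which encoder and decoder both derive each coefficient's class from already-decoded neighbors so that no side information is needed, can be sketched as below. The class thresholds, step sizes, and causal neighborhood are illustrative choices, not the paper's, and a plain uniform quantizer replaces its adaptively trained one.

```python
import numpy as np

CAUSAL = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]  # already decoded in raster order

def context_class(qmag, i, j, thresholds):
    """Class of coefficient (i, j): activity of its quantized causal neighbors."""
    ctx = sum(qmag[i + di, j + dj]
              for di, dj in CAUSAL
              if 0 <= i + di < qmag.shape[0] and 0 <= j + dj < qmag.shape[1])
    return int(np.searchsorted(thresholds, ctx))

def encode_subband(band, step_per_class, thresholds):
    """Quantize a subband; each step size depends only on already-decoded data."""
    q = np.zeros_like(band)
    labels = np.zeros(band.shape, dtype=int)
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            c = context_class(np.abs(q), i, j, thresholds)
            step = step_per_class[c]
            q[i, j] = np.round(band[i, j] / step) * step  # plain uniform quantizer
            labels[i, j] = c
    return q, labels
```

Because the context reads only causal, already-quantized positions, a decoder scanning the quantized subband in the same raster order recomputes exactly the same class labels.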
Affiliation(s)
- Y Yoo
- Media Technologies Laboratory, DSP Solutions R&D Center, Texas Instruments Inc., Dallas, TX 75243, USA.
23
Kasner JH, Marcellin MW, Hunt BR. Universal trellis coded quantization. IEEE Transactions on Image Processing 1999; 8:1677-1687. [PMID: 18267446] [DOI: 10.1109/83.806615]
Abstract
A new form of trellis coded quantization based on uniform quantization thresholds and "on-the-fly" quantizer training is presented. The universal trellis coded quantization (UTCQ) technique requires neither stored codebooks nor a computationally intense codebook design algorithm. Its performance is comparable with that of fully optimized entropy-constrained trellis coded quantization (ECTCQ) for most encoding rates. The codebook and trellis geometry of UTCQ are symmetric with respect to the trellis superset. This allows sources with a symmetric probability density to be encoded with a single variable-rate code. Rate allocation and quantizer modeling procedures are given for UTCQ which allow access to continuous quantization rates. An image coding application based on adaptive wavelet coefficient subblock classification, arithmetic coding, and UTCQ is presented. The excellent performance of this coder demonstrates the efficacy of UTCQ. We also present a simple scheme to improve the perceptual performance of UTCQ for certain imagery at low bit rates. This scheme has the added advantage of being applied during image decoding, without the need to reencode the original image.
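The core mechanism, a Viterbi search over interleaved cosets of a uniform codebook, can be sketched with a small 4-state trellis. The trellis wiring, coset assignment, and step size here are illustrative; UTCQ's rate allocation, entropy coding, and subblock classification are not modeled.

```python
import numpy as np

STEP = 0.25  # base step; codewords m*STEP are split into four cosets by m mod 4

def nearest_in_coset(x, d, step=STEP):
    """Nearest codeword m*step with m congruent to d (mod 4)."""
    m = round((x / step - d) / 4.0) * 4 + d
    return m * step

def subset_of(state, bit):
    # states 0/1 offer cosets {D0, D2}; states 2/3 offer {D1, D3} (illustrative)
    return 2 * bit if state < 2 else 2 * bit + 1

def tcq_encode(x):
    """Viterbi search over a 4-state trellis; returns the reconstruction."""
    n = len(x)
    INF = float("inf")
    cost = [0.0, INF, INF, INF]          # start in state 0
    back = []                            # survivor (prev_state, value) per step
    for t in range(n):
        new_cost = [INF] * 4
        choice = [None] * 4
        for s in range(4):
            if cost[s] == INF:
                continue
            for b in (0, 1):
                ns = ((s << 1) | b) & 3
                v = nearest_in_coset(x[t], subset_of(s, b))
                c = cost[s] + (x[t] - v) ** 2
                if c < new_cost[ns]:
                    new_cost[ns] = c
                    choice[ns] = (s, v)
        cost = new_cost
        back.append(choice)
    s = int(np.argmin(cost))             # traceback from the cheapest final state
    out = [0.0] * n
    for t in range(n - 1, -1, -1):
        s, out[t] = back[t][s]
    return np.array(out)
```

Each state exposes two cosets spaced 2*STEP apart, so the optimal path can never do worse than STEP^2 mean squared error, while the trellis memory lets the search do better on average.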
Affiliation(s)
- J H Kasner
- The Aerospace Corporation, Chantilly, VA 20151-3824, USA
24
Chai BB, Vass J, Zhuang X. Significance-linked connected component analysis for wavelet image coding. IEEE Transactions on Image Processing 1999; 8:774-784. [PMID: 18267492] [DOI: 10.1109/83.766856]
Abstract
Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
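The within-subband clustering that SLCCA exploits can be sketched as plain 4-connected component labeling of the significance map. The threshold and connectivity are illustrative, and the significance linking across subbands is not modeled here.

```python
import numpy as np
from collections import deque

def significant_clusters(band, threshold):
    """Label 4-connected clusters of |coefficient| >= threshold in a subband."""
    sig = np.abs(band) >= threshold
    labels = np.zeros(band.shape, dtype=int)
    count = 0
    for i in range(band.shape[0]):
        for j in range(band.shape[1]):
            if sig[i, j] and labels[i, j] == 0:
                count += 1
                queue = deque([(i, j)])
                labels[i, j] = count
                while queue:                 # breadth-first flood fill
                    a, b = queue.popleft()
                    for na, nb in ((a - 1, b), (a + 1, b), (a, b - 1), (a, b + 1)):
                        if (0 <= na < band.shape[0] and 0 <= nb < band.shape[1]
                                and sig[na, nb] and labels[na, nb] == 0):
                            labels[na, nb] = count
                            queue.append((na, nb))
    return labels, count
```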
Affiliation(s)
- B B Chai
- Sarnoff Corporation, Princeton, NJ 08543, USA.
25
Jafarkhani H, Farvardin N. Adaptive image coding using spectral classification. IEEE Transactions on Image Processing 1998; 7:605-610. [PMID: 18276278] [DOI: 10.1109/83.663509]
Abstract
We present a new classification scheme, dubbed spectral classification, which uses the spectral characteristics of the image blocks to classify them into one of a finite number of classes. A vector quantizer with an appropriate distortion measure is designed to perform the classification operation. The application of the proposed spectral classification scheme is then demonstrated in the context of adaptive image coding. It is shown that the spectral classifier outperforms gain-based classifiers while requiring a lower computational complexity.
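Spectral classification can be sketched as nearest-centroid assignment on gain-normalized DCT energy spectra. The 1-D blocks and externally supplied centroids below are illustrative stand-ins for the paper's trained vector quantizer and its distortion measure.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II analysis matrix."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0] *= np.sqrt(0.5)
    return C

def spectral_classify(blocks, centroids):
    """Assign each block to the nearest gain-normalized spectral centroid."""
    C = dct_matrix(blocks.shape[1])
    spec = (blocks @ C.T) ** 2                        # energy spectrum per block
    spec /= spec.sum(axis=1, keepdims=True) + 1e-12   # remove gain
    dist = ((spec[:, None, :] - centroids[None]) ** 2).sum(axis=2)
    return np.argmin(dist, axis=1)
```

Normalizing out the gain is what separates this scheme from a gain-based classifier: a flat block and a textured block of equal energy land in different classes.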
26
Joshi RL, Jafarkhani H, Kasner JH, Fischer TR, Farvardin N, Marcellin MW, Bamberger RH. Comparison of different methods of classification in subband coding of images. IEEE Transactions on Image Processing 1997; 6:1473-1486. [PMID: 18282907] [DOI: 10.1109/83.641409]
Abstract
This paper investigates various classification techniques, applied to subband coding of images, as a way of exploiting the nonstationary nature of image subbands. The advantages of subband classification are characterized in a rate-distortion framework in terms of "classification gain" and overall "subband classification gain." Two algorithms, maximum classification gain and equal mean-normalized standard deviation classification, which allow an unequal number of blocks in each class, are presented. The dependence between the classification maps from different subbands is exploited either directly, while encoding the classification maps, or indirectly, by constraining the classification maps. The trade-off between the classification gain and the amount of side information is explored. Coding results for a subband image coder based on classification are presented. The simulation results demonstrate the value of classification in subband coding.
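The classification gain is the ratio of the source variance to the probability-weighted geometric mean of the per-class variances, and can be computed directly. This is the standard definition; the two-class mixture data below is synthetic.

```python
import numpy as np

def classification_gain(samples, labels):
    """sigma^2 / prod_i (sigma_i^2)^(p_i): source variance over the
    probability-weighted geometric mean of the per-class variances."""
    total_var = np.var(samples)
    classes = np.unique(labels)
    p = np.array([(labels == c).mean() for c in classes])
    v = np.array([np.var(samples[labels == c]) for c in classes])
    return total_var / np.prod(v ** p)

# A classifier that separates low- and high-variance parts earns a large gain
rng = np.random.default_rng(7)
low = rng.normal(0.0, 0.1, 1000)
high = rng.normal(0.0, 3.0, 1000)
samples = np.concatenate([low, high])
labels = np.array([0] * 1000 + [1] * 1000)
gain = classification_gain(samples, labels)
```

A gain above 1 is the headroom a rate allocator can exploit by spending bits where the variance is; a single-class (no-op) classification gives a gain of exactly 1.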
Affiliation(s)
- R L Joshi
- School of Electrical Engineering and Computer Science, Washington State University, Pullman, WA
27
Xiong Z, Ramchandran K, Orchard MT. Space-frequency quantization for wavelet image coding. IEEE Transactions on Image Processing 1997; 6:677-693. [PMID: 18282961] [DOI: 10.1109/83.568925]
Abstract
A new class of image coding algorithms coupling standard scalar quantization of frequency coefficients with tree-structured quantization (related to spatial structures) has attracted wide attention because its good performance appears to confirm the promised efficiencies of hierarchical representation. This paper addresses the problem of how spatial quantization modes and standard scalar quantization can be applied in a jointly optimal fashion in an image coder. We consider zerotree quantization (zeroing out tree-structured sets of wavelet coefficients) and the simplest form of scalar quantization (a single common uniform scalar quantizer applied to all nonzeroed coefficients), and we formalize the problem of optimizing their joint application. We develop an image coding algorithm for solving the resulting optimization problem. Despite the basic form of the two quantizers considered, the resulting algorithm demonstrates coding performance that is competitive, often outperforming the very best coding algorithms in the literature.
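The joint zerotree/scalar-quantizer decision can be sketched as a bottom-up Lagrangian comparison on a coefficient tree: zero out a subtree (distortion equals its energy, rate roughly one map symbol) or scalar-quantize it. The constants and the per-coefficient rate model are illustrative, not the paper's optimized coder.

```python
import numpy as np

STEP, LAMBDA, BITS_PER_COEFF = 1.0, 2.0, 4.0   # illustrative constants

def subtree_energy(node):
    value, children = node
    return value ** 2 + sum(subtree_energy(c) for c in children)

def prune(node):
    """Return (best Lagrangian cost D + LAMBDA * R, decision tree) for a node.

    node = (coefficient value, [child nodes]). Compares quantizing the subtree
    against zeroing it out entirely (the zerotree decision), bottom-up.
    """
    value, children = node
    q = float(np.round(value / STEP)) * STEP        # uniform scalar quantizer
    child_results = [prune(c) for c in children]
    keep_cost = ((value - q) ** 2 + LAMBDA * BITS_PER_COEFF
                 + sum(c for c, _ in child_results))
    # zeroing: distortion is the subtree energy, rate is ~one map symbol
    zero_cost = subtree_energy(node) + LAMBDA * 1.0
    if zero_cost <= keep_cost:
        return zero_cost, ("zero",)
    return keep_cost, ("keep", q, [t for _, t in child_results])
```

High-energy subtrees are kept and quantized; low-energy ones collapse to a single zerotree symbol, which is where the rate savings come from.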
Affiliation(s)
- Z Xiong
- Department of Electrical Engineering, Princeton University, NJ