1
Zadeh PB, Akbari AS, Buggy T. DCT image codec using variance of sub-regions. Open Computer Science 2015. DOI: 10.1515/comp-2015-0003.
Abstract
This paper presents a novel variance-of-subregions and discrete cosine transform based image-coding
scheme. The proposed encoder divides the input image
into a number of non-overlapping blocks. The coefficients
in each block are then transformed into their spatial frequencies
using a discrete cosine transform. Coefficients
with the same spatial frequency index at different blocks
are put together generating a number of matrices, where
each matrix contains coefficients of a particular spatial
frequency index. The matrix containing DC coefficients is
losslessly coded to preserve its visually important information.
Matrices containing high frequency coefficients
are coded using a variance of sub-regions based encoding
algorithm proposed in this paper. Perceptual weights
are used to regulate the threshold value required in the
coding process of the high frequency matrices. An extension
of the system to progressive image transmission
is also developed. The proposed coding scheme, JPEG and
JPEG2000 were applied to a number of test images. Results
show that the proposed coding scheme outperforms JPEG
and JPEG2000 subjectively and objectively at low compression
ratios. Results also indicate that the proposed codec
decoded images exhibit superior subjective quality at high
compression ratios compared with JPEG, while offering
results comparable to those of JPEG2000.
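The regrouping step this abstract describes (coefficients with the same frequency index gathered across blocks into per-frequency matrices) can be sketched as follows; the direct DCT, the block size, and all function names are illustrative assumptions, not the authors' implementation.

```python
import math

def dct2(block):
    """Direct 2-D DCT-II of an n x n block (O(n^4); fine for small n).
    Illustrative sketch, not the paper's codec."""
    n = len(block)
    c = lambda k: math.sqrt((1.0 if k == 0 else 2.0) / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = c(u) * c(v) * s
    return out

def regroup_by_frequency(image, bs):
    """Transform each bs x bs block, then gather coefficients with the same
    frequency index (u, v) from every block into one matrix per index."""
    bh, bw = len(image) // bs, len(image[0]) // bs
    mats = {(u, v): [[0.0] * bw for _ in range(bh)]
            for u in range(bs) for v in range(bs)}
    for bi in range(bh):
        for bj in range(bw):
            block = [row[bj * bs:(bj + 1) * bs]
                     for row in image[bi * bs:(bi + 1) * bs]]
            coeffs = dct2(block)
            for u in range(bs):
                for v in range(bs):
                    mats[(u, v)][bi][bj] = coeffs[u][v]
    return mats
```

The (0, 0) matrix collects the DC coefficients that the scheme codes losslessly; the remaining matrices hold the higher-frequency coefficients.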
Affiliation(s)
- Pooneh Bagheri Zadeh
- School of Computer Science and Informatics, De Montfort University, U.K.
- Akbar Sheikh Akbari
- School of Computing, Creative Technology and Engineering, Faculty of Arts, Environment and Technology, Leeds Beckett University, U.K.
- Tom Buggy
- Division of Communication, Network and Electronic Engineering, School of Engineering & Computing, Glasgow Caledonian University, 70 Cowcaddens Road, Glasgow G4 0BA, U.K.
2
Sezer OG, Guleryuz OG, Altunbasak Y. Approximation and compression with sparse orthonormal transforms. IEEE Transactions on Image Processing 2015; 24:2328-2343. PMID: 25823033. DOI: 10.1109/tip.2015.2414879.
Abstract
We propose a new transform design method that targets the generation of compression-optimized transforms for next-generation multimedia applications. The fundamental idea behind transform compression is to exploit regularity within signals such that redundancy is minimized subject to a fidelity cost. Multimedia signals, in particular images and video, are well known to contain a diverse set of localized structures, leading to many different types of regularity and to nonstationary signal statistics. The proposed method designs sparse orthonormal transforms (SOTs) that automatically exploit regularity over different signal structures and provides an adaptation method that determines the best representation over localized regions. Unlike earlier work that is motivated by linear approximation constructs and model-based designs that are limited to specific types of signal regularity, our work uses general nonlinear approximation ideas and a data-driven setup to significantly broaden its reach. We show that our SOT designs provide a safe and principled extension of the Karhunen-Loeve transform (KLT) by reducing to the KLT on Gaussian processes and by automatically exploiting non-Gaussian statistics to significantly improve over the KLT on more general processes. We provide an algebraic optimization framework that generates optimized designs for any desired transform structure (multiresolution, block, lapped, and so on) with significantly better n-term approximation performance. For each structure, we propose a new prototype codec and test over a database of images. Simulation results show a consistent increase in compression and approximation performance compared with conventional methods.
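The claim that SOTs reduce to the KLT on Gaussian processes rests on the KLT being the eigenbasis of the data covariance. A minimal sketch for 2-D samples, using the closed-form eigendecomposition of a 2x2 covariance (the toy dimensionality and function names are assumptions, not the paper's construction):

```python
import math

def klt_basis_2d(samples):
    """Orthonormal KLT basis for 2-D samples: eigenvectors of the sample
    covariance. The 2x2 symmetric case diagonalizes by a single rotation."""
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    cxx = sum((x - mx) ** 2 for x, _ in samples) / n
    cyy = sum((y - my) ** 2 for _, y in samples) / n
    cxy = sum((x - mx) * (y - my) for x, y in samples) / n
    theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)  # diagonalizing rotation
    c, s = math.cos(theta), math.sin(theta)
    return [[c, s], [-s, c]]  # rows are the basis vectors
```

For data spread along the diagonal, the first basis vector aligns with it, concentrating energy into one coefficient; the SOT designs replace this second-order criterion with a sparsity (nonlinear approximation) objective learned from data.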
3
Wavelet-based electrocardiogram signal compression methods and their performances: A prospective review. Biomed Signal Process Control 2014. DOI: 10.1016/j.bspc.2014.07.002.
4
Auli-Llinas F. 2-step scalar deadzone quantization for bitplane image coding. IEEE Transactions on Image Processing 2013; 22:4678-4688. PMID: 23955751. DOI: 10.1109/tip.2013.2277801.
Abstract
Modern lossy image coding systems generate a quality progressive codestream that, truncated at increasing rates, produces an image with decreasing distortion. Quality progressivity is commonly provided by an embedded quantizer that employs uniform scalar deadzone quantization (USDQ) together with a bitplane coding strategy. This paper introduces a 2-step scalar deadzone quantization (2SDQ) scheme that achieves the same coding performance as USDQ while reducing the coding passes and the emitted symbols of the bitplane coding engine. This serves to reduce the computational costs of the codec and/or to code high dynamic range images. The main insights behind 2SDQ are the use of two quantization step sizes that approximate wavelet coefficients with more or less precision depending on their density, and a rate-distortion optimization technique that adjusts the distortion decreases produced when coding 2SDQ indexes. The integration of 2SDQ in current codecs is straightforward. The applicability and efficiency of 2SDQ are demonstrated within the framework of JPEG2000.
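For reference, the USDQ baseline that 2SDQ matches can be sketched as below; the mid-bin reconstruction offset of 0.5 is an illustrative choice, not a value taken from the paper.

```python
def usdq_index(x, step):
    """Uniform scalar deadzone quantization: the zero bin spans (-step, step),
    twice as wide as every other bin. Illustrative sketch."""
    sign = -1 if x < 0 else 1
    return sign * int(abs(x) / step)

def usdq_reconstruct(q, step, delta=0.5):
    """Dequantize index q with reconstruction offset delta inside the bin."""
    if q == 0:
        return 0.0
    sign = -1 if q < 0 else 1
    return sign * (abs(q) + delta) * step
```

2SDQ replaces the single step with two: a fine step for the dense, small-magnitude coefficients and a coarse step for the sparse, large ones, which is what shortens the bitplane coding passes.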
5
Ates HF, Orchard MT. Spherical coding algorithm for wavelet image compression. IEEE Transactions on Image Processing 2009; 18:1015-1024. PMID: 19342336. DOI: 10.1109/tip.2009.2014502.
Abstract
In recent literature, there exist many high-performance wavelet coders that use different spatially adaptive coding techniques in order to exploit the spatial energy compaction property of the wavelet transform. Two crucial issues in adaptive methods are the level of flexibility and the coding efficiency achieved while modeling different image regions and allocating bitrate within the wavelet subbands. In this paper, we introduce the "spherical coder," which provides a new adaptive framework for handling these issues in a simple and effective manner. The coder uses local energy as a direct measure to differentiate between parts of the wavelet subband and to decide how to allocate the available bitrate. As local energy becomes available at finer resolutions, i.e., in smaller size windows, the coder automatically updates its decisions about how to spend the bitrate. We use a hierarchical set of variables to specify and code the local energy up to the highest resolution, i.e., the energy of individual wavelet coefficients. The overall scheme is nonredundant, meaning that the subband information is conveyed using this equivalent set of variables without the need for any side parameters. Despite its simplicity, the algorithm produces PSNR results that are competitive with the state-of-the-art coders in the literature.
Affiliation(s)
- Hasan F Ates
- Department of Electronics Engineering, Isik University, Sile, Istanbul, Turkey.
6
Bernas T, Asem EK, Robinson JP, Rajwa B. Application of wavelet denoising to improve compression efficiency while preserving integrity of digital micrographs. J Microsc 2008; 231:81-96. PMID: 18638192. DOI: 10.1111/j.1365-2818.2008.02019.x.
Abstract
Modern microscopy methods require efficient image compression techniques owing to collection of up to thousands of images per experiment. Current irreversible techniques such as JPEG and JPEG2000 are not optimized to preserve the integrity of the scientific data as required by 21 CFR part 11. Therefore, to construct an irreversible, yet integrity-preserving compression mechanism, we establish a model of noise as a function of signal in our imaging system. The noise is then removed with a wavelet shrinkage algorithm whose parameters are adapted to local image structure. We ascertain the integrity of the denoised images by measuring changes in spatial and intensity distributions of registered light in the biological images and estimating changes of the effective microscope MTF. We demonstrate that the proposed denoising procedure leads to a decrease in image file size when a reversible JPEG2000 coding is used and provides better fidelity than irreversible JPEG and JPEG2000 at the same compression ratio. We also demonstrate that denoising reduces image artefacts when used as a pre-filtering step prior to irreversible image coding.
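The shrinkage step can be illustrated with a one-level Haar transform and a fixed soft threshold; the paper's algorithm instead adapts its parameters to local image structure and a measured noise model, so everything below is a simplified stand-in.

```python
import math

R = math.sqrt(0.5)  # orthonormal Haar scaling factor

def denoise_haar_soft(x, t):
    """One-level Haar decomposition, soft-threshold the detail coefficients,
    then reconstruct. Input length must be even. Simplified sketch only."""
    avg = [R * (x[2 * i] + x[2 * i + 1]) for i in range(len(x) // 2)]
    det = [R * (x[2 * i] - x[2 * i + 1]) for i in range(len(x) // 2)]
    det = [0.0 if abs(d) <= t else math.copysign(abs(d) - t, d) for d in det]
    out = []
    for a, d in zip(avg, det):
        out.extend([R * (a + d), R * (a - d)])
    return out
```

Zeroing small detail coefficients is also what shrinks the subsequent reversible JPEG2000 file: runs of exact zeros code far more cheaply than low-amplitude noise.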
Affiliation(s)
- T Bernas
- Department of Plant Anatomy and Cytology, Faculty of Biology and Protection of Environment, University of Silesia, Jagiellonska 28, 40-032 Katowice, Poland.
7
Sigitani T, Iiguni Y, Maeda H. Image interpolation for progressive transmission by using radial basis function networks. IEEE Transactions on Neural Networks 1999; 10:381-390. PMID: 18252534. DOI: 10.1109/72.750567.
Abstract
This paper investigates the application of a radial basis function network (RBFN) to hierarchical image coding for progressive transmission. The RBFN is used to generate an interpolated image from the subsampled version. An efficient method of computing the network parameters is developed for reduction in computational and memory requirements. The coding method does not suffer from blocking effects and can produce the coarsest image quickly. Quantization error effects introduced at one stage are considered in decoding images at the following stages, thus allowing lossless progressive transmission.
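A minimal 1-D Gaussian-RBF interpolator conveys the idea (the paper interpolates 2-D subsampled images and derives a cheaper way to compute the weights; the kernel width and names here are assumptions):

```python
import math

def gauss(r, width):
    """Gaussian radial basis function."""
    return math.exp(-(r / width) ** 2)

def rbf_fit(xs, ys, width):
    """Solve the interpolation system Phi w = y by Gauss-Jordan elimination."""
    n = len(xs)
    a = [[gauss(abs(xs[i] - xs[j]), width) for j in range(n)] + [ys[i]]
         for i in range(n)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(a[r][col]))  # partial pivot
        a[col], a[p] = a[p], a[col]
        for r in range(n):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][n] / a[i][i] for i in range(n)]

def rbf_eval(xs, w, width, x):
    """Interpolated value: weighted sum of basis functions at the centres."""
    return sum(wi * gauss(abs(x - xi), width) for wi, xi in zip(w, xs))
```

Because the Gaussian kernel matrix is positive definite, the fitted surface passes exactly through the subsampled values, which is what makes the scheme usable for progressive refinement.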
Affiliation(s)
- T Sigitani
- Department of Communications Engineering, Graduate School of Engineering, Osaka University, Suita, 565-0871, Japan
8
Chen YY. Medical image compression using DCT-based subband decomposition and modified SPIHT data organization. Int J Med Inform 2007; 76:717-725. PMID: 16931130. DOI: 10.1016/j.ijmedinf.2006.07.002.
Abstract
OBJECTIVE The work proposed a novel bit-rate-reduction approach for reducing the memory required to store a remote-diagnosis image and transmitting it rapidly. METHOD In the work, an 8x8 Discrete Cosine Transform (DCT) approach is adopted to perform subband decomposition. Modified set partitioning in hierarchical trees (SPIHT) is then employed to organize data and perform entropy coding. The translation function can store the detailed characteristics of an image. A simple transformation to obtain DCT spectrum data in a single frequency domain decomposes the original signal into various frequency domains that can be further compressed by a wavelet-based algorithm. In this scheme, insignificant DCT coefficients that correspond to a particular spatial location in the high-frequency subbands can be employed to reduce redundancy by applying a proposed combined function in association with the modified SPIHT. RESULTS AND CONCLUSIONS Simulation results showed that the embedded DCT-CSPIHT image compression reduced the computational complexity to only a quarter of that of the wavelet-based subband decomposition, and improved the quality of the reconstructed medical image as given by both the peak signal-to-noise ratio (PSNR) and the perceptual results over JPEG2000 and the original SPIHT at the same bit rate. Additionally, since 8x8 fast-DCT hardware implementations are commercially available, the proposed DCT-CSPIHT can perform well in high-speed image coding and transmission.
Affiliation(s)
- Yen-Yu Chen
- Department of Information Management, ChengChou Institute of Technology, 6, Line 2, Sec 3, Shan-Chiao Rd., Yuanlin, Changhwa, Taiwan.
9
Velisavljević V, Beferull-Lozano B, Vetterli M. Space-frequency quantization for image compression with directionlets. IEEE Transactions on Image Processing 2007; 16:1761-1773. PMID: 17605375. DOI: 10.1109/tip.2007.899183.
Abstract
The standard separable 2-D wavelet transform (WT) has recently achieved a great success in image processing because it provides a sparse representation of smooth images. However, it fails to efficiently capture 1-D discontinuities, like edges or contours. These features, being elongated and characterized by geometrical regularity along different directions, intersect and generate many large magnitude wavelet coefficients. Since contours are very important elements in the visual perception of images, to provide a good visual quality of compressed images, it is fundamental to preserve good reconstruction of these directional features. In our previous work, we proposed a construction of critically sampled perfect reconstruction transforms with directional vanishing moments imposed in the corresponding basis functions along different directions, called directionlets. In this paper, we show how to design and implement a novel efficient space-frequency quantization (SFQ) compression algorithm using directionlets. Our new compression method outperforms the standard SFQ in a rate-distortion sense, both in terms of mean-square error and visual quality, especially in the low-rate compression regime. We also show that our compression method does not increase the order of computational complexity as compared to the standard SFQ algorithm.
10
Eslami R, Radha H. A new family of nonredundant transforms using hybrid wavelets and directional filter banks. IEEE Transactions on Image Processing 2007; 16:1152-1167. PMID: 17405445. DOI: 10.1109/tip.2007.891791.
Abstract
We propose a new family of nonredundant geometrical image transforms that are based on wavelets and directional filter banks. We convert the wavelet basis functions in the finest scales to a flexible and rich set of directional basis elements by employing directional filter banks, where we form a nonredundant transform family, which exhibits both directional and nondirectional basis functions. We demonstrate the potential of the proposed transforms using nonlinear approximation. In addition, we employ the proposed family in two key image processing applications, image coding and denoising, and show its efficiency for these applications.
Affiliation(s)
- Ramin Eslami
- Department of Electrical and Computer Engineering, McMaster University, Hamilton, ON L8S 4K1, Canada.
11
Gaubatz MD, Hemami SS. Ordering for embedded coding of wavelet image data based on arbitrary scalar quantization schemes. IEEE Transactions on Image Processing 2007; 16:982-996. PMID: 17405431. DOI: 10.1109/tip.2007.891793.
Abstract
Many modern wavelet quantization schemes specify wavelet coefficient step sizes as continuous functions of an input step-size selection criterion; rate control is achieved by selecting an appropriate set of step sizes. In embedded wavelet coders, however, rate control is achieved simply by truncating the coded bit stream at the desired rate. The order in which wavelet data are coded implicitly controls quantization step sizes applied to create the reconstructed image. Since these step sizes are effectively discontinuous, piecewise-constant functions of rate, this paper examines the problem of designing a coding order for such a coder, guided by a quantization scheme where step sizes evolve continuously with rate. In particular, it formulates an optimization problem that minimizes the average relative difference between the piecewise-constant implicit step sizes associated with a layered coding strategy and the smooth step sizes given by a quantization scheme. The solution to this problem implies a coding order. Elegant, near-optimal solutions are presented to optimize step sizes over a variety of regions of rates, either continuous or discrete. This method can be used to create layers of coded data using any scalar quantization scheme combined with any wavelet bit-plane coder. It is illustrated using a variety of state-of-the-art coders and quantization schemes. In addition, the proposed method is verified with objective and subjective testing.
Affiliation(s)
- Matthew D Gaubatz
- Department of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853, USA.
12
Liu Y, Oraintara S. Feature-oriented multiple description wavelet-based image coding. IEEE Transactions on Image Processing 2007; 16:121-131. PMID: 17283771. DOI: 10.1109/tip.2006.884935.
Abstract
We address the problem of resilient image coding over error-prone networks where packet losses occur. Recent literature highlights the multiple description coding (MDC) as a promising approach to solve this problem. In this paper, we introduce a novel wavelet-based multiple description image coder, referred to as the feature-oriented MDC (FO-MDC). The proposed multiple description (MD) coder exploits the statistics of the wavelet coefficients and identifies the subsets of samples that are sensitive to packet loss. A joint optimization between tree-pruning and quantizer selection in the rate-distortion sense is used in order to allocate more bits to these sensitive coefficients. When compared with the state-of-the-art MD scalar quantization coder, the proposed FO-MDC yields a more efficient central-side distortion tradeoff control mechanism. Furthermore, it proves to be more robust for image transmission even with high packet loss ratios, which makes it suitable for protecting multimedia streams over packet-erasure channels.
Affiliation(s)
- Yilong Liu
- Department of Electrical Engineering, University of Texas at Arlington 76019-0016, USA.
13
Wang D, Zhang L, Vincent A, Speranza F. Curved wavelet transform for image coding. IEEE Transactions on Image Processing 2006; 15:2413-2421. PMID: 16900694. DOI: 10.1109/tip.2006.875207.
Abstract
The conventional two-dimensional wavelet transform used in existing image coders is usually performed through one-dimensional (1-D) filtering in the vertical and horizontal directions, which cannot efficiently represent edges and lines in images. The curved wavelet transform presented in this paper is carried out by applying 1-D filters along curves, rather than being restricted to vertical and horizontal straight lines. The curves are determined based on image content and are usually parallel to edges and lines in the image to be coded. The pixels along these curves can be well represented by a small number of wavelet coefficients. The curved wavelet transform is used to construct a new image coder. The code-stream syntax of the new coder is the same as that of JPEG2000, except that a new marker segment is added to the tile headers. Results of image coding and subjective quality assessment show that the new image coder performs better than, or as well as, JPEG2000. It is particularly efficient for images that contain sharp edges and can provide a PSNR gain of up to 1.67 dB for natural images compared with JPEG2000.
Affiliation(s)
- Demin Wang
- Communications Research Centre Canada, Ottawa, ON K2H 8S2 Canada.
14
Velisavljević V, Beferull-Lozano B, Vetterli M, Dragotti PL. Directionlets: anisotropic multidirectional representation with separable filtering. IEEE Transactions on Image Processing 2006; 15:1916-1933. PMID: 16830912. DOI: 10.1109/tip.2006.877076.
Abstract
In spite of the success of the standard wavelet transform (WT) in image processing in recent years, the efficiency of its representation is limited by the spatial isotropy of its basis functions built in the horizontal and vertical directions. One-dimensional (1-D) discontinuities in images (edges and contours), which are very important elements in visual perception, intersect too many wavelet basis functions and lead to a nonsparse representation. To efficiently capture these anisotropic geometrical structures characterized by many more than the horizontal and vertical directions, a more complex multidirectional (M-DIR) and anisotropic transform is required. We present a new lattice-based perfect reconstruction and critically sampled anisotropic M-DIR WT. The transform retains the separable filtering and subsampling and the simplicity of computations and filter design from the standard two-dimensional WT, unlike in the case of some other directional transform constructions (e.g., curvelets, contourlets, or edgelets). The corresponding anisotropic basis functions (directionlets) have directional vanishing moments along any two directions with rational slopes. Furthermore, we show that this novel transform provides an efficient tool for nonlinear approximation of images, achieving the approximation power O(N^(-1.55)), which, while slower than the optimal rate O(N^(-2)), is much better than the O(N^(-1)) achieved with wavelets, but at similar complexity.
Affiliation(s)
- Vladan Velisavljević
- School of Computer and Communication Sciences, Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland.
15
Chang H, Chen J, Ho YP. Batch process monitoring by wavelet transform based fractal encoding. Ind Eng Chem Res 2006. DOI: 10.1021/ie050856i.
Affiliation(s)
- Hsuan Chang
- Department of Chemical and Materials Engineering, Tamkang University, 151 Ying-Chuan Road, Tamsui, Taipei, Taiwan 25137, Republic of China
- Junghui Chen
- R&D Center for Membrane Technology, Department of Chemical Engineering, Chung-Yuan Christian University, Chung-Li, Taiwan 320, Republic of China
- Yun-Peng Ho
- R&D Center for Membrane Technology, Department of Chemical Engineering, Chung-Yuan Christian University, Chung-Li, Taiwan 320, Republic of China
16
Wakin MB, Romberg JK, Choi H, Baraniuk RG. Wavelet-domain approximation and compression of piecewise smooth images. IEEE Transactions on Image Processing 2006; 15:1071-1087. PMID: 16671289. DOI: 10.1109/tip.2005.864175.
Abstract
The wavelet transform provides a sparse representation for smooth images, enabling efficient approximation and compression using techniques such as zerotrees. Unfortunately, this sparsity does not extend to piecewise smooth images, where edge discontinuities separating smooth regions persist along smooth contours. This lack of sparsity hampers the efficiency of wavelet-based approximation and compression. On the class of images containing smooth C^2 regions separated by edges along smooth C^2 contours, for example, the asymptotic rate-distortion (R-D) performance of zerotree-based wavelet coding is limited to D(R) ≤ 1/R, well below the optimal rate of 1/R^2. In this paper, we develop a geometric modeling framework for wavelets that addresses this shortcoming. The framework can be interpreted either as 1) an extension to the "zerotree model" for wavelet coefficients that explicitly accounts for edge structure at fine scales, or as 2) a new atomic representation that synthesizes images using a sparse combination of wavelets and wedgeprints--anisotropic atoms that are adapted to edge singularities. Our approach enables a new type of quadtree pruning for piecewise smooth images, using zerotrees in uniformly smooth regions and wedgeprints in regions containing geometry. Using this framework, we develop a prototype image coder that has near-optimal asymptotic R-D performance D(R) ≤ (log R)^2/R^2 for piecewise smooth C^2/C^2 images. In addition, we extend the algorithm to compress natural images, exploring the practical problems that arise and attaining promising results in terms of mean-square error and visual quality.
Affiliation(s)
- Michael B Wakin
Department of Electrical and Computer Engineering, Rice University, Houston, TX 77005, USA.
17
Liu Z, Karam LJ. Mutual information-based analysis of JPEG2000 contexts. IEEE Transactions on Image Processing 2005; 14:411-422. PMID: 15825477. DOI: 10.1109/tip.2004.841199.
Abstract
Context-based arithmetic coding has been widely adopted in image and video compression and is a key component of the new JPEG2000 image compression standard. In this paper, the contexts used in JPEG2000 are analyzed using the mutual information, which is closely related to the compression performance. We first show that, when combining the contexts, the mutual information between the contexts and the encoded data will decrease unless the conditional probability distributions of the combined contexts are the same. Given I, the initial number of contexts, and F, the final desired number of contexts, there are S(I, F) possible context classification schemes where S(I, F) is called the Stirling number of the second kind. The optimal classification scheme is the one that gives the maximum mutual information. Instead of using an exhaustive search, the optimal classification scheme can be obtained through a modified generalized Lloyd algorithm with the relative entropy as the distortion metric. For binary arithmetic coding, the search complexity can be reduced by using dynamic programming. Our experimental results show that the JPEG2000 contexts capture the correlations among the wavelet coefficients very well. At the same time, the number of contexts used as part of the standard can be reduced without loss in the coding performance.
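The core quantity here (mutual information between contexts and coded bits) and the effect of merging two contexts can be sketched directly from joint samples; the toy data and function names below are illustrative, not the paper's experimental setup.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """I(context; bit) in bits, estimated from joint (context, bit) samples."""
    n = len(pairs)
    joint = Counter(pairs)
    ctx = Counter(c for c, _ in pairs)
    bit = Counter(b for _, b in pairs)
    return sum((k / n) * math.log2(k * n / (ctx[c] * bit[b]))
               for (c, b), k in joint.items())

def merge_contexts(pairs, keep, drop):
    """Relabel context `drop` as `keep`, i.e. combine the two contexts."""
    return [(keep if c == drop else c, b) for c, b in pairs]
```

Merging two contexts whose conditional bit distributions differ reduces the mutual information, which is the paper's criterion for deciding which of the S(I, F) classification schemes to keep.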
Affiliation(s)
- Zhen Liu
- Qualcomm, Inc., San Diego, CA 92121-1714, USA.
18
Kaur L, Chauhan RC, Saxena SC. Space-frequency quantiser design for ultrasound image compression based on minimum description length criterion. Med Biol Eng Comput 2005; 43:33-39. PMID: 15742717. DOI: 10.1007/bf02345120.
Abstract
The paper addresses the problem of how the spatial quantisation mode and subband adaptive uniform scalar quantiser can be jointly optimised in the minimum description length (MDL) framework for compression of ultrasound images. It has been shown that the statistics of wavelet coefficients in the medical ultrasound (US) image can be better approximated by the generalised Student t-distribution. By combining these statistics with the operational rate-distortion (RD) criterion, a space-frequency quantiser (SFQ) called the MDL-SFQ was designed, which used an efficient zero-tree quantisation technique for zeroing out the tree-structured sets of wavelet coefficients and an adaptive scalar quantiser to quantise the non-zero coefficients. The algorithm used the statistical 'variance of quantisation error' to achieve the different bit-rates ranging from near-lossless to lossy compression. Experimental results showed that the proposed coder outperformed the set partitioning in hierarchical trees (SPIHT) image coder both quantitatively and qualitatively. It yielded an improved compression performance of 1.01 dB over the best zero-tree based coder SPIHT at 0.25 bits per pixel when averaged over five ultrasound images.
Affiliation(s)
- L Kaur
- Sant Longowal Institute of Engineering & Technology, Longowal, India.
19
Su CY, Wu BF. A low memory zerotree coding for arbitrarily shaped objects. IEEE Transactions on Image Processing 2003; 12:271-282. PMID: 18237907. DOI: 10.1109/tip.2002.807359.
Abstract
The set partitioning in hierarchical trees (SPIHT) algorithm is a computationally simple and efficient zerotree coding technique for image compression. However, its high working memory requirement is its main drawback for hardware realization. We present a low memory zerotree coder (LMZC), which requires much less working memory than SPIHT. The LMZC coding algorithm abandons the use of lists, defines a different tree structure, and merges the sorting pass and the refinement pass together. The main techniques of LMZC are recursive programming and a top-bit scheme (TBS). In TBS, the top bits of transformed coefficients are used to store the coding status of coefficients instead of the lists used in SPIHT. In order to achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. A compact placement of the transformed coefficients is also proposed to further reduce working memory. The LMZC carefully treats "don't care" nodes in the wavelet tree and does not use bits to code such nodes. Comparison of LMZC with SPIHT shows that for coding a 768 × 512 color image, LMZC saves at least 5.3 MBytes of memory while only slightly increasing execution time and marginally reducing peak signal-to-noise ratio (PSNR) values, thereby making it highly promising for memory-limited applications.
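The top-bit scheme (TBS) amounts to plain bit manipulation: the status that SPIHT keeps in its lists is stored in an otherwise unused top bit of each coefficient word. The flag position (bit 31) and helper names below are illustrative assumptions, not the paper's exact layout.

```python
STATUS_BIT = 1 << 31  # assumes coefficient magnitudes fit in 31 bits

def set_status(c):
    """Mark a coefficient as processed without any extra list memory."""
    return c | STATUS_BIT

def has_status(c):
    """Check the status flag stored in the coefficient's own top bit."""
    return (c & STATUS_BIT) != 0

def magnitude(c):
    """Coefficient value with the status flag masked off."""
    return c & (STATUS_BIT - 1)
```

Because the flag lives inside storage the transform already allocated, the coder avoids SPIHT's list structures entirely, which is where the multi-megabyte memory saving comes from.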
Affiliation(s)
- Chorng-Yann Su
- Dept. of Ind. Educ., Nat. Taiwan Normal Univ., Hsinchu, Taiwan.
20
Li X. On exploiting geometric constraint of image wavelet coefficients. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2003; 12:1378-1387. [PMID: 18244695 DOI: 10.1109/tip.2003.818011] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
In this paper, we investigate the problem of how to exploit the geometric constraint of edges in wavelet-based image coding. The value of studying this problem is the potential coding gain brought by improved probabilistic models of wavelet high-band coefficients. Novel phase shifting and prediction algorithms are derived in the wavelet space. It is demonstrated that after resolving the phase uncertainty, high-band wavelet coefficients can be better modeled by biased-mean probability models rather than the existing zero-mean ones. In lossy coding, the coding gain brought by the biased-mean model is quantitatively analyzed within the conventional DPCM coding framework. Experimental results show that the proposed phase shifting and prediction scheme improves both the subjective and objective performance of wavelet-based image coders.
Affiliation(s)
- Xin Li
- Lane Dept. of Comput. Sci. and Electr. Eng., West Virginia Univ., Morgantown, WV 26506-6109, USA.
21
Rajpoot NM, Wilson RG, Meyer FG, Coifman RR. Adaptive wavelet packet basis selection for zerotree image coding. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2003; 12:1460-1472. [PMID: 18244702 DOI: 10.1109/tip.2003.818115] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Image coding methods based on adaptive wavelet transforms and those employing zerotree quantization have been shown to be successful. We present a general zerotree structure for an arbitrary wavelet packet geometry in an image coding framework. A fast basis selection algorithm is developed; it uses a Markov chain based cost estimate of encoding the image using this structure. As a result, our adaptive wavelet zerotree image coder has a relatively low computational complexity, performs comparably to state-of-the-art image coders, and is capable of progressively encoding images.
Affiliation(s)
- Nasir M Rajpoot
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK.
22
Deever AT, Hemami SS. Efficient sign coding and estimation of zero-quantized coefficients in embedded wavelet image codecs. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2003; 12:420-430. [PMID: 18237920 DOI: 10.1109/tip.2003.811499] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Wavelet transform coefficients are defined by both a magnitude and a sign. While efficient algorithms exist for coding the transform coefficient magnitudes, current wavelet image coding algorithms are not as efficient at coding the sign of the transform coefficients. It is generally assumed that there is no compression gain to be obtained from entropy coding of the sign. Only recently have some authors begun to investigate this component of wavelet image coding. In this paper, sign coding is examined in detail in the context of an embedded wavelet image coder. In addition to using intraband wavelet coefficients in a sign coding context model, a projection technique is described that allows nonintraband wavelet coefficients to be incorporated into the context model. At the decoder, accumulated sign prediction statistics are also used to derive improved reconstruction estimates for zero-quantized coefficients. These techniques are shown to yield PSNR improvements averaging 0.3 dB, and are applicable to any genre of embedded wavelet image codec.
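A sign-coding context of the intraband kind described above can be sketched as follows. The context labels and the majority rule here are hypothetical simplifications, not the paper's exact model:

```python
# An illustrative intraband sign-coding context (labels are hypothetical,
# not the paper's exact model): predict a coefficient's sign from its
# already-coded left and upper neighbours, then entropy code only the
# agreement bit, which is biased and therefore compressible.

def sign_context(left, above):
    """Map neighbour signs (-1, 0, +1 each) to one of 9 context labels."""
    return (left + 1) * 3 + (above + 1)

def predicted_sign(left, above):
    """Simple majority prediction; defaults to +1 on a tie."""
    return 1 if left + above >= 0 else -1
```

An arithmetic coder conditioned on the context label can then exploit any bias in how often the prediction agrees with the true sign.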
23
Hsieh MS, Tseng DC. Image subband coding using fuzzy inference and adaptive quantization. IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS. PART B, CYBERNETICS : A PUBLICATION OF THE IEEE SYSTEMS, MAN, AND CYBERNETICS SOCIETY 2003; 33:509-513. [PMID: 18238197 DOI: 10.1109/tsmcb.2003.811131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Wavelet image decomposition generates a hierarchical data structure to represent an image. Recently, a new class of image compression algorithms has been developed that exploits dependencies between the hierarchical wavelet coefficients using zerotrees. This paper presents a fuzzy inference filter for image entropy coding that chooses significant coefficients and zerotree roots in the higher-frequency wavelet subbands. Moreover, an adaptive quantization is proposed to improve the coding performance. Evaluated on standard test images, the proposed approaches are comparable or superior to most state-of-the-art coders. Based on the fuzzy energy judgment, the proposed approaches can achieve excellent performance in combined image compression and watermarking applications.
Affiliation(s)
- Ming-Shing Hsieh
- Inst. of Comput. Sci. & Inf. Eng., Nat. Central Univ., Chung-li, Taiwan
24
Sun Y, Zhang H, Hu G. Real-time implementation of a new low-memory SPIHT image coding algorithm using DSP chip. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2002; 11:1112-1116. [PMID: 18249732 DOI: 10.1109/tip.2002.802533] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Among algorithms based on the wavelet transform and zerotree quantization, Said and Pearlman's (1996) set partitioning in hierarchical trees (SPIHT) algorithm is well known for its simplicity and efficiency. This paper deals with the real-time implementation of the SPIHT algorithm using a DSP chip. In order to facilitate the implementation and improve the codec's performance, several related issues are thoroughly discussed, such as the optimization of the program structure to speed up the wavelet decomposition. SPIHT's high memory requirement is a major drawback for hardware implementation. In this paper, we modify the original SPIHT algorithm by introducing two new concepts: the number of error bits and the absolute zerotree. Consequently, the memory cost is significantly reduced. We also introduce a new method to control the coding process by the number of error bits. Our experimental results show that the implementation meets common requirements of real-time video coding and is a practical and efficient DSP solution.
Affiliation(s)
- Yong Sun
- Dept. of Electr. Eng., Tsinghua Univ., Beijing, China
25
Reichel J, Menegaz G, Nadenau MJ, Kunt M. Integer wavelet transform for embedded lossy to lossless image compression. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2001; 10:383-392. [PMID: 18249628 DOI: 10.1109/83.908504] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
The use of the discrete wavelet transform (DWT) for embedded lossy image compression is now well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is guaranteed by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is one of them. This is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise. The noise is then propagated through the LS structure to measure its impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input, and it accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality. This allows a better understanding of the impact of the IWT when applied to lossy image compression.
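Why rounding inside the lifting structure never breaks perfect reconstruction can be seen in a minimal integer lifting step. The reversible 5/3 "LeGall" filter below is a standard IWT example rather than this paper's exact transform:

```python
# A minimal sketch of one integer lifting step (the reversible 5/3 "LeGall"
# filter, a standard IWT example rather than this paper's exact transform):
# because the inverse repeats the same rounded predict/update terms, the
# rounding "noise" never breaks perfect reconstruction.

def fwd_53(x):
    """One level of the 5/3 integer wavelet transform on an even-length list."""
    s, d = x[0::2], x[1::2]
    # predict: detail = odd sample minus rounded average of even neighbours
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    # update: even sample plus rounded quarter-sum of neighbouring details
    s = [s[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    return s, d

def inv_53(s, d):
    """Exactly undo fwd_53 by repeating the same rounded terms in reverse."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[i] + 2) >> 2) for i in range(len(s))]
    x = [0] * (len(s) + len(d))
    x[0::2] = s
    x[1::2] = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1) for i in range(len(d))]
    return x
```

The rounding terms are identical in the forward and inverse passes, which is exactly the structural property the paper's additive-noise model quantifies for lossy use of the IWT.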
Affiliation(s)
- J Reichel
- Signal Processing Laboratory, Swiss Federal Institute of Technology, 1015 Lausanne, Switzerland.
26
Berghorn W, Boskamp T, Lang M, Peitgen HO. Fast variable run-length coding for embedded progressive wavelet-based image compression. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2001; 10:1781-1790. [PMID: 18255518 DOI: 10.1109/83.974563] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Run-length coding has attracted much attention in wavelet-based image compression because of its simplicity and potentially low complexity. Its main drawback is inferior RD performance compared to the state-of-the-art coder SPIHT. In this paper, we concentrate on the embedded progressive run-length code of Tian and Wells (1996, 1998). We consider significance sequences drawn from the scan in the dominant pass. It turns out that self-similar curves for scanning the dominant pass increase the compression efficiency significantly. This is a consequence of the correlation of direct neighbors in the wavelet domain. This dependence can be better exploited by using groups of coefficients, similar to the SPIHT algorithm. The result is a new and very fast coding algorithm whose performance is similar to that of the state-of-the-art coder SPIHT, but with lower complexity and a small, fixed memory overhead.
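The appeal of run-length coding for significance sequences is easy to see in a toy version: early dominant passes are dominated by long runs of insignificant (0) coefficients, so coding run lengths is far cheaper than coding each bit. This is a generic illustration, not Tian and Wells' exact code:

```python
# A toy sketch of run-length coding a 0/1 significance sequence; a generic
# illustration, not Tian and Wells' embedded progressive run-length code.

def runlength_encode(bits):
    """Encode as zero-run lengths; the final run has no terminating 1."""
    runs, count = [], 0
    for b in bits:
        if b == 0:
            count += 1
        else:
            runs.append(count)
            count = 0
    runs.append(count)  # trailing zeros
    return runs

def runlength_decode(runs):
    """Invert runlength_encode."""
    bits = []
    for r in runs[:-1]:
        bits.extend([0] * r + [1])
    bits.extend([0] * runs[-1])
    return bits
```

The scan order matters because a curve that keeps spatial neighbours adjacent produces longer zero runs, which is the effect the paper exploits with self-similar scans.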
Affiliation(s)
- W Berghorn
- Center for Med. Diagnostic Syst. and Visualization, Bremen, Germany.
27
Shi Z, Wei GW, Kouri DJ, Hoffman DK, Bao Z. Lagrange wavelets for signal processing. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2001; 10:1488-1508. [PMID: 18255493 DOI: 10.1109/83.951535] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This paper deals with the design of interpolating wavelets based on a variety of Lagrange functions, combined with novel signal processing techniques for digital imaging. Halfband Lagrange wavelets, B-spline Lagrange wavelets and Gaussian Lagrange (Lagrange distributed approximating functional (DAF)) wavelets are presented as specific examples of the generalized Lagrange wavelets. Our approach combines the perceptually dependent visual group normalization (VGN) technique and a softer logic masking (SLM) method. These are utilized to rescale the wavelet coefficients, remove perceptual redundancy and obtain good visual performance for digital image processing.
Affiliation(s)
- Z Shi
- Dept. of Phys., University of Houston, Houston, TX 77204, USA.
28
Martin MB, Bell AE. New image compression techniques using multiwavelets and multiwavelet packets. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2001; 10:500-510. [PMID: 18249640 DOI: 10.1109/83.913585] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Advances in wavelet transforms and quantization methods have produced algorithms capable of surpassing the existing image compression standards like the Joint Photographic Experts Group (JPEG) algorithm. For best performance in image compression, wavelet transforms require filters that combine a number of desirable properties, such as orthogonality and symmetry. However, the design possibilities for wavelets are limited because they cannot simultaneously possess all of the desirable properties. The relatively new field of multiwavelets shows promise in obviating some of the limitations of wavelets. Multiwavelets offer more design options and are able to combine several desirable transform features. The few previously published results of multiwavelet-based image compression have mostly fallen short of the performance enjoyed by the current wavelet algorithms. This paper presents new multiwavelet transform and quantization methods and introduces multiwavelet packets. Extensive experimental results demonstrate that our techniques exhibit performance equal to, or in several cases superior to, the current wavelet filters.
Affiliation(s)
- M B Martin
- Vision III Imaging, Herndon, VA 20170, USA
29
Servetto SD, Ramchandran K, Vaishampayan VA, Nahrstedt K. Multiple description wavelet based image coding. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2000; 9:813-826. [PMID: 18255453 DOI: 10.1109/83.841528] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
We consider the problem of coding images for transmission over error-prone channels. The impairments we target are transient channel shutdowns, as would occur in a packet network when a packet is lost, or in a wireless system during a deep fade: when data is delivered it is assumed to be error-free, but some of the data may never reach the receiver. The proposed algorithms are based on a combination of multiple description scalar quantizers with techniques successfully applied to the construction of some of the most efficient subband coders. A given image is encoded into multiple independent packets of roughly equal length. When packets are lost, the quality of the approximation computed at the receiver depends only on the number of packets received, but does not depend on exactly which packets are actually received. When compared with previously reported results on the performance of robust image coders based on multiple descriptions, on standard test images, our coders attain similar PSNR values using typically about 50-60% of the bit rate required by these other state-of-the-art coders, while at the same time providing significantly more freedom in the mechanism for allocation of redundancy among descriptions.
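The central multiple-description property, that quality depends only on how many packets arrive and not on which ones, can be illustrated with a much simpler scheme than the paper's quantizer-based design. The round-robin split below is a toy stand-in, not the authors' algorithm:

```python
# A toy illustration of the multiple-description principle (not the paper's
# multiple-description-scalar-quantizer algorithm): spread coefficients
# round-robin across packets so reconstruction quality depends only on how
# many packets arrive, not on which ones.

def make_descriptions(coeffs, n_packets):
    """Interleave coefficients into n roughly equal packets."""
    return [coeffs[i::n_packets] for i in range(n_packets)]

def reconstruct(packets, n_packets, length):
    """Zero-fill coefficients carried by lost packets (passed as None)."""
    out = [0] * length
    for i, pkt in enumerate(packets):
        if pkt is not None:
            out[i::n_packets] = pkt
    return out
```

Real multiple-description coders add controlled redundancy between descriptions so that surviving packets also say something about the lost ones; the zero-fill here is the crudest possible estimate.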
Affiliation(s)
- S D Servetto
- Laboratoire de Communications Audiovisuelles, Ecole Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.
30
Yang X, Ramchandran K. Scalable wavelet video coding using aliasing-reduced hierarchical motion compensation. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2000; 9:778-791. [PMID: 18255450 DOI: 10.1109/83.841519] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
We describe a spatially scalable video coding framework in which motion correspondences between successive video frames are exploited in the wavelet transform domain. The basic motivation for our coder is that motion fields are typically smooth and, therefore, can be efficiently captured through a multiresolutional framework. A wavelet decomposition is applied to each video frame and the coefficients at each level are predicted from the coarser level through backward motion compensation. To remove the aliasing effects caused by downsampling in the transform, a special interpolation filter is designed with the weighted aliasing energy as part of the optimization goal, and motion estimation is carried out with low-pass filtering and interpolation in the estimation loop. Further, to achieve robust motion estimation against quantization noise, we propose a novel backward/forward hybrid motion compensation scheme and a tree-structured dynamic programming algorithm to optimize the backward/forward mode choices. A novel adaptive quantization scheme is applied to code the motion-predicted residue wavelet coefficients. Experimental results reveal a 0.3–2 dB increase in coded PSNR at low bit rates over the state-of-the-art H.263 standard with all enhancement modes enabled, and similar improvements over MPEG-2 at high bit rates, with a considerable improvement in subjective reconstruction quality, while simultaneously supporting a scalable representation.
Affiliation(s)
- X Yang
- Imaging Technologies Department, Hewlett-Packard Laboratories, Palo Alto, CA 94304, USA.
31
Chrysafis C, Ortega A. Line-based, reduced memory, wavelet image compression. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2000; 9:378-389. [PMID: 18255410 DOI: 10.1109/83.826776] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This paper addresses the problem of low memory wavelet image compression. While wavelet or subband coding of images has been shown to be superior to more traditional transform coding techniques, little attention has been paid until recently to the important issue of whether both the wavelet transforms and the subsequent coding can be implemented in low memory without significant loss in performance. We present a complete system to perform low memory wavelet image coding. Our approach is "line-based" in that the images are read line by line and only the minimum required number of lines is kept in memory. There are two main contributions of our work. First, we introduce a line-based approach for the implementation of the wavelet transform, which yields the same results as a "normal" implementation, but where, unlike prior work, we address memory issues arising from the need to synchronize encoder and decoder. Second, we propose a novel context-based encoder which requires no global information and stores only a local set of wavelet coefficients. This low memory coder achieves performance comparable to state of the art coders at a fraction of their memory utilization.
Affiliation(s)
- C Chrysafis
- Hewlett-Packard Laboratories, Palo Alto, CA 94304, USA.
32
Munteanu A, Cornelis J, Van der Auwera G, Cristea P. Wavelet image compression--the quadtree coding approach. IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE : A PUBLICATION OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 1999; 3:176-85. [PMID: 10719481 DOI: 10.1109/4233.788579] [Citation(s) in RCA: 75] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Perfect reconstruction, quality scalability, and region-of-interest coding are basic features needed for the image compression schemes used in telemedicine applications. This paper proposes a new wavelet-based embedded compression technique that efficiently exploits the intraband dependencies and uses a quadtree-based approach to encode the significance maps. The algorithm produces a losslessly compressed embedded data stream, supports quality scalability, and permits region-of-interest coding. Moreover, experimental results obtained on various images show that the proposed algorithm provides competitive lossless/lossy compression results. The proposed technique is well suited for telemedicine applications that require fast interactive handling of large image sets, over networks with limited and/or variable bandwidth.
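The quadtree approach to significance maps mentioned above can be sketched in miniature. This is an illustrative generic quadtree coder, not the authors' exact algorithm:

```python
# A toy sketch of quadtree significance-map coding of the kind the paper
# builds on (illustrative, not the authors' exact algorithm): a square block
# emits 0 if wholly insignificant, otherwise 1 followed by the codes of its
# four quadrants, recursing down to single coefficients.

def quadtree_code(block, threshold):
    """block: square list-of-lists of coefficients; returns a list of bits."""
    n = len(block)
    if all(abs(v) < threshold for row in block for v in row):
        return [0]
    if n == 1:
        return [1]
    h = n // 2
    bits = [1]
    for r0 in (0, h):
        for c0 in (0, h):
            quadrant = [row[c0:c0 + h] for row in block[r0:r0 + h]]
            bits += quadtree_code(quadrant, threshold)
    return bits
```

Large insignificant regions, common in wavelet subbands, collapse to a single 0 bit, which is where the compression comes from.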
Affiliation(s)
- A Munteanu
- Electronics and Information Processing Department, Vrije Universiteit Brussel, Belgium
33
Li X, Hu G, Gao S. Design and implementation of a novel compression method in a tele-ultrasound system. IEEE TRANSACTIONS ON INFORMATION TECHNOLOGY IN BIOMEDICINE : A PUBLICATION OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY 1999; 3:205-13. [PMID: 10719484 DOI: 10.1109/4233.788582] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper presents a novel compression method for ultrasound images in a tele-ultrasound system. The encoder compresses and transmits the ultrasound scan-line signals rather than the standard video images used in conventional methods. Furthermore, a nonuniform discrete wavelet transform is proposed to account for the different resolutions in the axial and lateral directions of ultrasound images. Experimental results show that the new method provides better compression performance than conventional methods in terms of peak signal-to-noise ratio and processing time.
Affiliation(s)
- X Li
- Department of Electrical Engineering, Tsinghua University, Beijing, PR China
34
Phelan NC, Ennis JT. Medical image compression based on a morphological representation of wavelet coefficients. Med Phys 1999; 26:1607-11. [PMID: 10501061 DOI: 10.1118/1.598655] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
Image compression is fundamental to the efficient and cost-effective use of digital medical imaging technology and applications. Wavelet transform techniques currently provide the most promising approach to high-quality image compression which is essential for diagnostic medical applications. A novel approach to image compression based on the wavelet decomposition has been developed which utilizes the shape or morphology of wavelet transform coefficients in the wavelet domain to isolate and retain significant coefficients corresponding to image structure and features. The remaining coefficients are further compressed using a combination of run-length and Huffman coding. The technique has been implemented and applied to full 16 bit medical image data for a range of compression ratios. Objective peak signal-to-noise ratio performance of the compression technique was analyzed. Results indicate that good reconstructed image quality can be achieved at compression ratios of up to 15:1 for the image types studied. This technique represents an effective approach to the compression of diagnostic medical images and is worthy of further, more thorough, evaluation of diagnostic quality and accuracy in a clinical setting.
Affiliation(s)
- N C Phelan
- Institute of Radiological Sciences, University College Dublin, Mater Hospital, Ireland.
35
Zhong JM, Leung CH, Tang YY. Wavelet image coding based on significance extraction using morphological operations. ACTA ACUST UNITED AC 1999. [DOI: 10.1049/ip-vis:19990556] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
36
Tran TD, Nguyen TQ. A progressive transmission image coder using linear phase uniform filterbanks as block transforms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:1493-1507. [PMID: 18267425 DOI: 10.1109/83.799878] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
This paper presents a novel image coding scheme using M-channel linear phase perfect reconstruction filterbanks (LPPRFBs) in the embedded zerotree wavelet (EZW) framework introduced by Shapiro (1993). The innovation here is to replace EZW's dyadic wavelet transform by M-channel uniform-band maximally decimated LPPRFBs, which offer finer frequency spectrum partitioning and higher energy compaction. The transform stage can now be implemented as a block transform, which supports parallel processing and facilitates region-of-interest coding/decoding. For hardware implementation, the transform boasts efficient lattice structures, which employ a minimal number of delay elements and are robust under quantization of the lattice coefficients. The resulting compression algorithm also retains all the attractive properties of the EZW coder and its variations, such as progressive image transmission, embedded quantization, exact bit rate control, and idempotency. Despite its simplicity, our new coder outperforms some of the best previously published image coders for almost all test images (especially natural, hard-to-code ones) at almost all bit rates.
Affiliation(s)
- T D Tran
- Dept. of Electr. and Comput. Eng., Wisconsin Univ., Madison, WI 53706, USA.
37
Servetto SD, Ramchandran K, Orchard MT. Image coding based on a morphological representation of wavelet data. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:1161-1174. [PMID: 18267534 DOI: 10.1109/83.784429] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
In this paper, an experimental study of the statistical properties of wavelet coefficients of image data is presented, as well as the design of two different morphology-based image coding algorithms that make use of these statistics. A salient feature of the proposed methods is that, by a simple change of quantizers, the same basic algorithm yields high performance embedded or fixed rate coders. Another important feature is that the shape information of morphological sets used in this coder is encoded implicitly by the values of wavelet coefficients, thus avoiding the use of explicit and rate expensive shape descriptors. These proposed algorithms, while achieving nearly the same objective performance of state-of-the-art zerotree based methods, are able to produce reconstructions of a somewhat superior perceptual quality, due to a property of joint compression and noise reduction they exhibit.
Affiliation(s)
- S D Servetto
- Dept. of Comput. Sci., Illinois Univ., Urbana, IL 61801, USA.
38
Yoo Y, Ortega A, Yu B. Image subband coding using context-based classification and adaptive quantization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:1702-1715. [PMID: 18267448 DOI: 10.1109/83.806617] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Adaptive compression methods have been a key component of many proposed subband (or wavelet) image coding techniques. This paper deals with a particular type of adaptive subband image coding where we focus on the image coder's ability to adjust itself "on the fly" to the spatially varying statistical nature of image contents. This backward adaptation is distinguished from the more frequently used forward adaptation in that forward adaptation selects the best operating parameters from a predesigned set and thus uses a considerable amount of side information in order for the encoder and the decoder to operate with the same parameters. Specifically, we present backward adaptive quantization using a new context-based classification technique which classifies each subband coefficient based on the surrounding quantized coefficients. We couple this classification with online parametric adaptation of the quantizer applied to each class. A simple uniform threshold quantizer is employed as the baseline quantizer for which adaptation is achieved. Our subband image coder based on the proposed adaptive classification quantization idea exhibits excellent rate-distortion performance, in particular at very low rates. For popular test images, it is comparable or superior to most state-of-the-art coders in the literature.
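The baseline uniform threshold (deadzone) quantizer that the per-class adaptation works on is simple enough to sketch directly; the midpoint reconstruction below is an illustrative choice, not necessarily the paper's:

```python
# A minimal sketch of a uniform threshold (deadzone) quantizer of the kind
# used as the baseline above; midpoint reconstruction is an illustrative
# choice, not necessarily the paper's.

def utq_quantize(x, step):
    """Deadzone quantizer: index 0 covers the interval (-step, step)."""
    if abs(x) < step:
        return 0
    sign = 1 if x > 0 else -1
    return sign * int(abs(x) // step)

def utq_dequantize(q, step):
    """Reconstruct at the midpoint of the selected bin."""
    if q == 0:
        return 0.0
    sign = 1 if q > 0 else -1
    return sign * (abs(q) + 0.5) * step
```

Per-class adaptation then amounts to choosing `step` online for each context-derived class, which is what keeps the side information at zero.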
Affiliation(s)
- Y Yoo
- Media Technologies Laboratory, DSP Solutions R&D Center, Texas Instruments Inc., Dallas, TX 75243, USA.
39
Chai BB, Vass J, Zhuang X. Significance-linked connected component analysis for wavelet image coding. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1999; 8:774-784. [PMID: 18267492 DOI: 10.1109/83.766856] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Recent success in wavelet image coding is mainly attributed to a recognition of the importance of data organization and representation. There have been several very competitive wavelet coders developed, namely, Shapiro's (1993) embedded zerotree wavelets (EZW), Servetto et al.'s (1995) morphological representation of wavelet data (MRWD), and Said and Pearlman's (see IEEE Trans. Circuits Syst. Video Technol., vol.6, p.245-50, 1996) set partitioning in hierarchical trees (SPIHT). We develop a novel wavelet image coder called significance-linked connected component analysis (SLCCA) of wavelet coefficients that extends MRWD by exploiting both within-subband clustering of significant coefficients and cross-subband dependency in significant fields. Extensive computer experiments on both natural and texture images show convincingly that the proposed SLCCA outperforms EZW, MRWD, and SPIHT. For example, for the Barbara image, at 0.25 b/pixel, SLCCA outperforms EZW, MRWD, and SPIHT by 1.41 dB, 0.32 dB, and 0.60 dB in PSNR, respectively. It is also observed that SLCCA works extremely well for images with a large portion of texture. For eight typical 256x256 grayscale texture images compressed at 0.40 b/pixel, SLCCA outperforms SPIHT by 0.16 dB-0.63 dB in PSNR. This performance is achieved without using any optimal bit allocation procedure. Thus both the encoding and decoding procedures are fast.
Affiliation(s)
- B B Chai
- Sarnoff Corporation, Princeton, NJ 08543, USA.
40
Davis GM. A wavelet-based analysis of fractal image compression. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1998; 7:141-154. [PMID: 18267389 DOI: 10.1109/83.660992] [Citation(s) in RCA: 15] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
Why does fractal image compression work? What is the implicit image model underlying fractal block coding? How can we characterize the types of images for which fractal block coders will work well? These are the central issues we address. We introduce a new wavelet-based framework for analyzing block-based fractal compression schemes. Within this framework we are able to draw upon insights from the well-established transform coder paradigm in order to address the issue of why fractal block coders work. We show that fractal block coders of the form introduced by Jacquin (1992) are Haar wavelet subtree quantization schemes. We examine a generalization of the schemes to smooth wavelets with additional vanishing moments. The performance of our generalized coder is comparable to the best results in the literature for a Jacquin-style coding scheme. Our wavelet framework gives new insight into the convergence properties of fractal block coders, and it leads us to develop an unconditionally convergent scheme with a fast decoding algorithm. Our experiments with this new algorithm indicate that fractal coders derive much of their effectiveness from their ability to efficiently represent wavelet zero trees. Finally, our framework reveals some of the fundamental limitations of current fractal compression schemes.
Affiliation(s)
- G M Davis
- Mathematics Department, Dartmouth College, Hanover, NH 03755, USA.
41
Xiong Z, Ramchandran K, Orchard MT. Wavelet packet image coding using space-frequency quantization. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 1998; 7:892-898. [PMID: 18276302 DOI: 10.1109/83.679438] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/25/2023]
Abstract
We extend our previous work on space-frequency quantization (SFQ) for image coding from wavelet transforms to the more general wavelet packet transforms. The resulting wavelet packet coder offers a universal transform coding framework within the constraints of filterbank structures by allowing joint transform and quantizer design without assuming a priori statistics of the input image. In other words, the new coder adaptively chooses the representation to suit the image and the quantization to suit the representation. Experimental results show that, for some image classes, our new coder gives excellent coding performance.
Affiliation(s)
- Z Xiong
- Department of Electrical Engineering, University of Hawaii, Honolulu, HI 96822, USA