Set Partition Coding: Part I of Set Partition Coding and Image Wavelet Coding Systems
Author(s): William A. Pearlman and Amir Said
Source: Foundations and Trends® in Signal Processing, Volume 2, Number 2, pp. 95–180 (86 pages)
Publisher: Now Publishers
ISSN (Print): 1932-8346; ISSN (Online): 1932-8354
DOI: 10.1561/2000000013
Abstract: The purpose of this two-part monograph is to present a tutorial on set partition coding, with emphasis and examples on image wavelet transform coding systems, and to describe its use in modern image coding systems. Set partition coding is a procedure that recursively splits groups of integer data or transform elements, guided by a sequence of threshold tests, producing groups of elements whose magnitudes are between two known thresholds, thereby setting the maximum number of bits required for their binary representation: the number of bits needed to represent an element of a particular group is no more than the base-2 logarithm of the group's upper threshold, rounded up to the nearest integer. SPIHT (Set Partitioning in Hierarchical Trees) and SPECK (Set Partitioning Embedded blocK) are popular state-of-the-art image coders that use set partition coding as the primary entropy coding method. JPEG2000 and EZW (Embedded Zerotree Wavelet) use it in an auxiliary manner. Part I elucidates the fundamentals of set partition coding and explains the setting of thresholds and the block and tree modes of partitioning. Algorithms are presented for the techniques of AGP (Amplitude and Group Partitioning), SPIHT, SPECK, and EZW. Numerical examples are worked out in detail for the latter three techniques. Part II describes various wavelet image coding systems that use set partitioning primarily, such as SBHP (Subband Block Hierarchical Partitioning), SPIHT, and EZBC (Embedded Zero-Block Coder). The basic JPEG2000 coder is also described. The coding procedures and the specific methods are presented both logically and in algorithmic form, where possible. Besides the obvious objective of obtaining small file sizes, much emphasis is placed on achieving low computational complexity and desirable output bitstream attributes, such as embeddedness, scalability in resolution, and random access decodability.
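To make the threshold-splitting idea in the abstract concrete, here is a minimal sketch of one significance pass of quadtree block partitioning, in the spirit of SPECK's block mode. It is not the monograph's algorithm: the function names, the single-pass structure, and the toy coefficient array are illustrative assumptions, and only the significance-decision bits for one threshold are emitted.

```python
# A minimal sketch of the block (quadtree) mode of set partition coding,
# in the spirit of SPECK. All names are illustrative, not taken from the
# monograph. Assumes a 2-D NumPy array of integer coefficient magnitudes
# (e.g., quantized wavelet coefficients).

import numpy as np

def max_bitplane(block):
    """Index n of the highest significant bit plane: 2**n <= max < 2**(n+1)."""
    m = int(block.max())
    return m.bit_length() - 1 if m > 0 else -1

def code_block(block, n, out):
    """Recursively test a block against threshold 2**n and split it.

    One significance bit is emitted per tested set. An insignificant set
    needs no further bits at this threshold; a significant set is split
    into four quadrants until single elements are isolated.
    """
    significant = bool((block >= (1 << n)).any())
    out.append(1 if significant else 0)  # significance-decision bit
    if not significant or block.size == 1:
        return
    h, w = block.shape
    # Quadtree partition: split into four quadrants and test each.
    for sub in (block[:h // 2, :w // 2], block[:h // 2, w // 2:],
                block[h // 2:, :w // 2], block[h // 2:, w // 2:]):
        if sub.size:
            code_block(sub, n, out)

# Example: one significance pass at the top bit plane.
coeffs = np.abs(np.array([[34, 2, 0, 1],
                          [ 3, 0, 1, 0],
                          [ 0, 1, 0, 0],
                          [ 1, 0, 0, 0]]))
bits = []
code_block(coeffs, max_bitplane(coeffs), bits)
print(bits)  # -> [1, 1, 1, 0, 0, 0, 0, 0, 0] for threshold 2**5 = 32
```

In this toy run, elements isolated as insignificant at threshold 32 are known to lie below it, so each needs at most ceil(log2(32)) = 5 bits thereafter, which is the bit-budget property the abstract describes.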
This monograph is extracted and adapted from the forthcoming textbook entitled Digital Signal Compression: Principles and Practice by William A. Pearlman and Amir Said, Cambridge University Press, 2009.