Title:
SEPARATE HUFFMAN CODING OF RUNLENGTH AND SIZE DATA OF DCT COEFFICIENTS
Document Type and Number:
WIPO Patent Application WO/2009/091279
Kind Code:
A1
Abstract:
The present disclosure provides a method for data compression. In one embodiment the method may include converting data from a first color space to a second color space to generate converted data. The method may further include receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient. The method may additionally include receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.

Inventors:
SERGEEV ANTON VALERIEVICH (RU)
TURLIKOV ANDREY MIKHAILOVICH (RU)
UKHANOVA ANN SERGEEVNA (RU)
Application Number:
PCT/RU2008/000025
Publication Date:
July 23, 2009
Filing Date:
January 18, 2008
Assignee:
INTEL CORP (US)
SERGEEV ANTON VALERIEVICH (RU)
TURLIKOV ANDREY MIKHAILOVICH (RU)
UKHANOVA ANN SERGEEVNA (RU)
International Classes:
H04N7/26; H03M7/42; H03M7/46; H04N7/30
Other References:
WALLACE G K: "THE JPEG STILL PICTURE COMPRESSION STANDARD", IEEE TRANSACTIONS ON CONSUMER ELECTRONICS, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 38, no. 1, 1 February 1992 (1992-02-01), pages XVIII - XXXIV, XP000297354, ISSN: 0098-3063
"DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES PART I: REQUIREMENTS AND GUIDELINES", APPENDIX A. ISO DIS 10918-1. REQUIREMENTS AND GUIDELINES.INTERNATIONAL STANDARD DIS 10918-1. CCITT RECOMMENDATION T.81, XX, XX, 1 January 1993 (1993-01-01), pages 337,339 - 369,371, XP001023580
ABHINAV GUPTA ET AL: "Modified Runlength Coding for Improved JPEG Performance", INFORMATION AND COMMUNICATION TECHNOLOGY, 2007. ICICT '07. INTERN ATIONAL CONFERENCE ON, IEEE, PI, 1 March 2007 (2007-03-01), pages 235 - 237, XP031183552, ISBN: 978-984-32-3394-3
AGOSTINI L V ET AL: "Pipelined entropy coders for JPEG compression", INTEGRATED CIRCUITS AND SYSTEMS DESIGN, 2002. PROCEEDINGS. 15TH SYMPOS IUM ON 9-14, SEPT. 2002, PISCATAWAY, NJ, USA,IEEE, 9 September 2002 (2002-09-09), pages 203 - 208, XP010621790, ISBN: 978-0-7695-1807-7
Attorney, Agent or Firm:
LAW FIRM "GORODISSKY & PARTNERS" LTD. (B. Spasskaya str. 25, stroenie, Moscow 0, RU)
Claims:

CLAIMS

What is claimed is:

1. An apparatus, comprising: an integrated circuit (IC) configured to compress and/or decompress data, said integrated circuit including an entropy encoder configured to receive at least one discrete cosine transform (DCT) coefficient, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a nonzero AC coefficient.

2. The apparatus according to claim 1, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

3. The apparatus according to claim 1, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

4. The apparatus according to claim 1, further comprising a quantizer configured to receive said at least one coefficient from a forward discrete cosine transform, said quantizer further configured to quantize said at least one coefficient and to provide at least one quantized coefficient to said entropy encoder.

5. The apparatus according to claim 4, wherein said forward discrete cosine transform is configured to receive converted data from a converter, said discrete cosine transform being further configured to generate said at least one DCT coefficient and to provide said at least one DCT coefficient to said quantizer.

6. A method for data compression comprising: converting data from a first color space to a second color space to generate converted data; receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient; and receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.

7. The method according to claim 6, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

8. The method according to claim 6, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

9. The method according to claim 7, further comprising: comparing said at least one DCT coefficient with zero; increasing a counter corresponding to said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence; modifying said counter if said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence is greater than a given threshold and is not an end-of-block (EOB); detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.

10. The method according to claim 8, further comprising: comparing said at least one DCT coefficient with zero; detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.

11. An article comprising a storage medium having stored thereon instructions that when executed by a machine result in the following: converting data from a first color space to a second color space to generate converted data; receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient; and receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient.

12. The article according to claim 11, wherein said entropy encoder further includes a second Huffman table corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

13. The article according to claim 11, wherein said entropy encoder is configured to write at least one zero into an output stream, said at least one zero corresponding to a consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence.

14. The article according to claim 12, further comprising: comparing said at least one DCT coefficient with zero; increasing a counter corresponding to said consecutive number of zero- valued AC coefficients that precedes said non-zero AC coefficient in a given sequence; modifying said counter if said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence is greater than a given threshold and is not an end-of-block (EOB); detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.

15. The article according to claim 13, further comprising: comparing said at least one DCT coefficient with zero; detecting a non-zero coefficient and obtaining and writing a codeword corresponding to at least one of said consecutive number of zero-valued AC coefficients that precedes said non-zero AC coefficient in a given sequence and said number of bits used to encode an amplitude of a non-zero AC coefficient; and writing said amplitude of said nonzero AC coefficient in binary form.

Description:

LOW COMPLEXITY CODING IN DATA COMPRESSION SYSTEMS

FIELD

The present disclosure describes a method for low complexity coding for use in data compression.

BACKGROUND

Data compression may be used to represent a set of information as space efficiently as possible. A data compression code may provide a map between a set of source messages and a set of codewords. Some video compression systems may perform this coding during a limited time period. For example, for High Definition Television (HDTV) 1080i (1080x1920, 30 fps) that is encoded by tiles (i.e., rectangular sub-images) of size 8x1920, every tile may be encoded during 497 microseconds. Once a tile has been encoded, a successive tile may be present at the input of the compression system and may be ready for processing.

In general, entropy coding refers to a lossless data compression scheme that may be independent of the media's specific characteristics. Typically, entropy coding is one of the most computationally complex elements of a video compression system. In some cases, entropy coding may involve the assignment of codes to symbols so as to match code lengths with the probabilities of the symbols. The design of a data compression system typically involves a tradeoff between the effective compression of input data and minimum complexity costs in the design.

Entropy coding may be realized in modern video compression systems using a variety of different techniques, such as paired Run-Length Encoding (RLE) and Variable-Length Coding (VLC), or arithmetic coding. Arithmetic coding may provide a better compression rate, but may be very complex and may not be a good solution for low-complexity systems. VLC-based techniques may provide a smaller compression rate and tend to be far less complex; thus these techniques may be more suitable for real-time applications.

BRIEF DESCRIPTION OF DRAWINGS

Features and advantages of the claimed subject matter will be apparent from the following detailed description of embodiments consistent therewith, which description should be considered with reference to the accompanying drawings, wherein:

FIG. 1 is a diagram showing an example of discrete cosine transform (DCT)-based encoder processing;

FIG. 2 is a diagram showing an example of DCT-based decoder processing;

FIG. 3 is a diagram of an exemplary embodiment of low complexity entropy coding in accordance with the present disclosure;

FIG. 4 is a diagram showing a Rate-distortion curve (i.e., compression rate vs. PSNR) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with the present disclosure;

FIG. 5 is a diagram showing an estimated probability distribution of RUNLENGTH values in a standard JPEG algorithm;

FIG. 6 is a diagram showing an estimated probability distribution of SIZE values in a standard JPEG algorithm;

FIG. 7 is a flowchart of operations showing standard entropy coding using the JPEG image compression algorithm;

FIG. 8 is a flowchart of operations showing entropy coding using short tables for RUNLENGTH in accordance with one exemplary embodiment of the present disclosure;

FIG. 9 is a flowchart of operations in accordance with another exemplary embodiment of the present disclosure; and

FIG. 10 is a flowchart of operations according to one embodiment of the present disclosure.

Although the following Detailed Description will proceed with reference being made to illustrative embodiments, many alternatives, modifications, and variations thereof will be apparent to those skilled in the art.

DETAILED DESCRIPTION

Generally, this disclosure describes a method for low complexity coding for use in data compression. The embodiments described herein may be used in accordance with data compression standards such as the Joint Photographic Experts Group (JPEG) standard, ITU-T T.81, ISO/IEC IS 10918-1, published September 1992. More specifically, the embodiments described herein provide a number of entropy coding algorithms, which may be used to reduce the storage size required for the Huffman tables used by a codec.

Data transmission of still pictures in various fields, such as color facsimile, desktop publishing, and medical imaging, may be performed in accordance with the JPEG standard. The JPEG standard may involve the use of discrete cosine transform (DCT) coding during the data compression process.

Discrete Cosine Transform (DCT) based compression may involve the compression of a stream of 8x8 blocks of grayscale sample images. Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of the two spatial dimensions x and y. The DCT coefficient values may thus be regarded as the relative amount of the 2D spatial frequencies contained in the 64-point input signal. The coefficient with zero frequency in both dimensions is called the "DC coefficient" and the remaining 63 coefficients are called the "AC coefficients." Additional information pertaining to the DCT is described in greater detail in "The JPEG Still Picture Compression Standard", Gregory K. Wallace, IEEE Transactions on Consumer Electronics, 1991.
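For concreteness, the forward 2D DCT of one 8x8 block can be sketched in a few lines of Python (a slow, textbook DCT-II implementation, not an optimized codec kernel; it makes the DC/AC distinction concrete):

```python
import math

def dct_2d(block):
    """Forward 8x8 2D DCT-II, as used in JPEG (slow textbook form).

    block: 8x8 list of lists of (level-shifted) samples.
    Returns an 8x8 list; out[0][0] is the DC coefficient and the
    remaining 63 entries are the AC coefficients.
    """
    n = 8
    c = lambda k: math.sqrt(0.5) if k == 0 else 1.0  # normalization factor
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / 16)
                          * math.cos((2 * y + 1) * v * math.pi / 16))
            out[u][v] = 0.25 * c(u) * c(v) * s
    return out

# A constant block puts all of its energy into the DC coefficient:
coeffs = dct_2d([[10] * 8 for _ in range(8)])
```

For a flat block of value 10, coeffs[0][0] comes out as 80 (eight times the sample value) and every AC coefficient is essentially zero, illustrating the "relative amount of 2D spatial frequency" interpretation above.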

Referring now to Figure 1, a DCT-based encoder 100 is shown. DCT-based encoder 100 may be configured to receive some source image data (e.g., shown as 8x8 blocks in Figure 1) and to output that data in a compressed form. Encoder 100 may include Forward DCT (FDCT) 102, quantizer 104, and entropy encoder 106. In contrast, Figure 2 depicts a DCT-based decoder 200, which receives the compressed image data and outputs the original source image data in a reconstructed form. Decoder 200 may be viewed as a mirror image of encoder 100; as such, decoder 200 may include entropy decoder 202, dequantizer 204, and Inverse DCT (IDCT) 206.

Referring again to Figure 1, encoder 100 may be configured to receive source image data 101 for subsequent compression. Source image data 101 may be grouped into 8x8 blocks, shifted from unsigned integers and input to Forward DCT 102. As discussed above, the specific operation of forward DCT 102 is described in greater detail in "The JPEG Still Picture Compression Standard", Gregory K. Wallace, IEEE Transactions on Consumer Electronics, 1991. FDCT 102 may provide an output (e.g., 64 DCT coefficients) to a quantizer 104. In some embodiments, quantizer 104 may be configured to compress a range of values (e.g., the amplitudes of the frequency components) to a single quantum value. In this way, quantizer 104 may be configured to compress the data by representing DCT coefficients with no greater precision than is necessary to achieve the desired image quality. Thus, quantizer 104 may provide a sequence of quantized DCT coefficients (DC and AC) as an input to entropy encoder 106.

Entropy encoder 106 may be configured to perform the final DCT-based processing operations. Entropy encoder 106 may achieve additional compression losslessly by encoding the quantized DCT coefficients more compactly based on their statistical characteristics. Entropy encoder 106 may be configured to perform a variety of different operations, as discussed in further detail below.

Generally, entropy encoding may include two distinct operations. The first operation may convert a zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. The purpose and operation of the zig-zag sequence is described in further detail in the "JPEG Still Picture Compression Standard" cited above. The second operation may convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries.

Thus, in some embodiments, encoder 106 may convert the zig-zag sequence of quantized coefficients into an intermediate sequence of symbols. Encoder 106 may also convert the symbols to a data stream in which the symbols no longer have externally identifiable boundaries. The form and definition of the intermediate symbols may be dependent on both the DCT-based mode of operation and the entropy coding method (Huffman, arithmetic, etc.). As discussed above, the input to entropy encoder 106 may be a sequence of quantized DCT coefficients (DC and AC). In the intermediate symbol sequence, each nonzero AC coefficient may be represented in combination with its "runlength." The term "runlength" or "RUNLENGTH" as used herein may correspond to the consecutive number of zero-valued AC coefficients that precede the nonzero AC coefficient in the zigzag sequence.
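The zig-zag ordering referenced above can be generated programmatically; a minimal sketch (assuming the standard JPEG scan pattern):

```python
def zigzag_indices(n=8):
    """(row, col) visiting order of the JPEG zig-zag scan for an n x n
    block: DC first, then AC coefficients roughly from low to high
    spatial frequency. Cells on the same anti-diagonal d = r + c are
    walked up-right when d is even and down-left when d is odd."""
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

zz = zigzag_indices()  # zz[0] is the DC position (0, 0)
```

Reading the quantized 8x8 block at these 64 positions in order yields the one-dimensional sequence whose runs of zeros the RUNLENGTH symbols describe.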

Each runlength/nonzero-coefficient combination may be represented by a pair of symbols, for example, Symbol-1 (RUNLENGTH, SIZE) and Symbol-2 (AMPLITUDE). In this example, Symbol-1 may represent the two pieces of information, RUNLENGTH and SIZE. As mentioned above, RUNLENGTH may correspond to the number of consecutive zero-valued AC coefficients in the zigzag sequence preceding the nonzero AC coefficient being represented. The term "SIZE", as used herein, may correspond to the number of bits used to encode AMPLITUDE (i.e., Symbol-2). The term "AMPLITUDE" as used herein may correspond to the amplitude of the nonzero AC coefficient. Symbol-2 may represent a single piece of information designated AMPLITUDE.

Generally, the RUNLENGTH value may represent zero-runs of length 0 to 15. Actual zero-runs in the zig-zag sequence may be greater than 15, so the Symbol-1 value (15, 0) may be interpreted as the extension symbol with RUNLENGTH=16. The special Symbol-1 value (0, 0) indicates an end of block (EOB) situation, and may be viewed as an "escape" symbol that terminates the 8x8 sample block.

Thus, for each 8x8 block of samples, the zig-zag sequence of 63 quantized AC coefficients may be represented as a sequence of Symbol-1 and Symbol-2 symbol pairs. However, each "pair" may have repetitions of Symbol-1 in the case of a long RUNLENGTH or only one Symbol-1 in the case of an EOB.

The possible range of quantized AC coefficients may determine the range of values to which both the AMPLITUDE and the SIZE information correspond. For AC coefficients, one possible structure of the Symbol-1 and Symbol-2 intermediate representations is illustrated below in Tables 1 and 2, respectively. For DC coefficients the intermediate representation may be structured similarly. In this example, Symbol-1 may represent only SIZE information; Symbol-2 may represent AMPLITUDE information as before: Symbol-1 (SIZE), Symbol-2 (AMPLITUDE). Because the DC coefficient may be differentially encoded, it may be covered by twice as many integer values, [-2^11, 2^11-1], as the AC coefficients, so one additional level may be added to the bottom of Table 2 for DC coefficients. Symbol-1 for DC coefficients may thus represent a value from 1 to 11.
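The SIZE category of an amplitude is just the bit length of its magnitude, which a one-line helper makes explicit (a sketch consistent with the SIZE/AMPLITUDE ranges tabulated below):

```python
def size_of(amplitude):
    """SIZE category of a quantized coefficient: the number of bits
    needed to encode AMPLITUDE, i.e. the bit length of |amplitude|."""
    return abs(amplitude).bit_length()
```

For example, amplitudes in -7..-4 and 4..7 all have SIZE 3, and -1023..-512 and 512..1023 have SIZE 10, matching the table rows.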

Table 1. Baseline Huffman Coding Symbol-1 Structure

Table 2. Baseline Entropy Coding Symbol-2 Structure

SIZE    AMPLITUDE
1       -1, 1
2       -3..-2, 2..3
3       -7..-4, 4..7
4       -15..-8, 8..15
5       -31..-16, 16..31
6       -63..-32, 32..63
7       -127..-64, 64..127
8       -255..-128, 128..255
9       -511..-256, 256..511
10      -1023..-512, 512..1023

Huffman coding is a method that may take symbols (e.g., bytes, DCT coefficients, etc.) and encode them with variable length codes that may be assigned according to statistical probabilities. For example, the Huffman coding scheme used in JPEG compression may reduce file size further by replacing the fixed-size (e.g., 12-bit) codes with variable-length codes (i.e., 1-16 bits). On average, the replacement codes may add up to fewer bits than the original. If, on average, 10% fewer bits are needed to store the quantized DCT coefficients, then this may directly translate into a 10% file size reduction. The difficult part may be deciding which codes to replace with which variable-length bit-strings. The Huffman Tables define these mappings, and are stored inside every JPEG image file. A separate Huffman table may be required for each of the following four components: luminance DC component, luminance AC components, chrominance DC component and chrominance AC components.

For example, the tables may include a table for luminance AC coefficients (162 records, 162x20 = 3240 bits), a table for chrominance AC coefficients (162 records, 162x20 = 3240 bits), a table for luminance DC coefficients (12 records, 12x15 = 180 bits), and a table for chrominance DC coefficients (12 records, 12x15 = 180 bits). This may result in a total of 6840 bits, or 855 bytes, for standard entropy coding, as shown in Table 6.

Standard run-length encoding (RLE) data compression may include several operations. For example, the following operations may be implemented while making an RLE-sequence:

1. Action-1: Each of the 63 AC coefficients may be compared with zero. Thus, there may be 63 Action-1 operations per block.

2. Action-2: If the current element of the sequence is 0, the counter of RUNLENGTH may be increased.

3. Action-3: If the RUNLENGTH counter is greater than 15 and it is not an EOB, then the counter of RUNLENGTH may be modified.

4. Action-4: If the current element is not zero, an address to the appropriate table may be created, the codeword may be obtained and written.

5. Action-5: Once the codeword has been obtained for the RUNLENGTH/SIZE combination, the AMPLITUDE value may be presented in binary form and written.

6. Action-6: Address calculation in Huffman tables (using values of RUNLENGTH and SIZE).
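The six actions above can be sketched as one encoding loop. This is a hedged sketch, not the standard's reference implementation: the hypothetical `codeword(runlength, size)` callable stands in for the Huffman-table lookup (Action-6 folded into the call), and amplitudes are written in the JPEG one's-complement style.

```python
def rle_encode_block(ac, codeword):
    """Sketch of standard run-length encoding of the 63 zig-zag AC
    coefficients of one block (Actions 1-6 in the list above).

    ac: list of 63 quantized AC coefficients in zig-zag order.
    codeword: hypothetical lookup, codeword(runlength, size) -> bit-string.
    Returns the encoded stream as a string.
    """
    out = []
    run = 0
    for coeff in ac:                        # Action-1: compare with zero
        if coeff == 0:
            run += 1                        # Action-2: bump RUNLENGTH counter
            continue
        while run > 15:                     # Action-3: emit ZRL extension (15, 0)
            out.append(codeword(15, 0))
            run -= 16
        size = abs(coeff).bit_length()      # SIZE category of the amplitude
        out.append(codeword(run, size))     # Action-4: Symbol-1 codeword
        amp = coeff if coeff >= 0 else coeff + (1 << size) - 1
        out.append(format(amp, "0{}b".format(size)))  # Action-5: AMPLITUDE bits
        run = 0
    if run:                                 # trailing zeros collapse to EOB (0, 0)
        out.append(codeword(0, 0))
    return "".join(out)
```

With a dummy `codeword` that just prints its arguments, the sequence 3, 0, 0, -2, then zeros produces Symbol-1 (0, 2), amplitude bits "11", Symbol-1 (2, 2), amplitude bits "01", then EOB.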

Table 3 shows the average complexity values per block for the operations listed above utilizing the YUV model, wherein Y corresponds to brightness (i.e., luminance) and UV corresponds to color (i.e., chrominance). For example, if average Action-3 Y complexity equals 0.1 there may be only one Action-3 Y per 10 blocks on average.

Table 3. Complexity of standard RLE

Run-Length Encoding with short tables

Figure 3 depicts a system 300 for low complexity coding in accordance with an exemplary embodiment of the present disclosure. System 300 may be configured to receive raw video data and to convert that data from a Red, Green, Blue (RGB) color space to a luminance and chrominance (YUV) color space using converter 302. Conversion from RGB to YUV may be performed using a variety of techniques, including, but not limited to, the use of fast table-driven algorithms. Each converted Y, U, and V frame may be grouped into 8x8 blocks and input into FDCT 304. Each 8x8 block of source image samples is effectively a 64-point discrete signal which may be a function of x and y. FDCT 304 may receive this signal as an input and decompose it into 64 orthogonal basis signals. Each may contain one of the 64 unique two-dimensional (2D) "spatial frequencies" which comprise the input signal's "spectrum." The output of FDCT 304 may be the set of 64 basis-signal amplitudes or "DCT coefficients" whose values may be uniquely determined by the particular 64-point input signal. FDCT 304 may provide this output to quantizer 306. It should be noted that any portion of system 300 may be included in, for example, an integrated circuit or similar device.
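As a sketch, an RGB-to-YUV conversion such as the one performed by converter 302 might use the common full-range BT.601 luma/chroma weights (the exact matrix is an assumption for illustration; the disclosure does not fix one):

```python
def rgb_to_yuv(r, g, b):
    """One-pixel RGB -> Y'CbCr conversion with full-range BT.601
    weights (assumed here for illustration). Chroma is offset by 128
    so a neutral color maps to Cb = Cr = 128."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

y, cb, cr = rgb_to_yuv(255, 255, 255)  # pure white: max luma, neutral chroma
```

A table-driven variant of the same math (one lookup table per coefficient per channel, summed per pixel) is what the "fast table-driven algorithms" mentioned above typically amount to.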

After output from FDCT 304, each of the 64 DCT coefficients may be uniformly quantized in conjunction with a quantization table (e.g., 64 elements), which may be specified by the application (or user) as an input to the encoder. Each element may be any integer value from 1 to 255, which specifies the step size of the quantizer for its corresponding DCT coefficient. Quantization may be defined as division of each DCT coefficient by its corresponding quantizer step size, followed by rounding to the nearest integer. This output value may then be normalized by the quantizer step size. Following quantization, each element may be input to entropy coder 308.
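The quantization step described above reduces to a divide-and-round per coefficient; a minimal sketch (the flat step-size table is illustrative, not a recommended quantization matrix):

```python
def quantize(coeffs, qtable):
    """Uniform quantization as described above: divide each DCT
    coefficient by its step size and round to the nearest integer.

    coeffs: 8x8 DCT coefficients; qtable: 8x8 integer step sizes (1..255).
    """
    return [[int(round(coeffs[r][c] / qtable[r][c])) for c in range(8)]
            for r in range(8)]

q = [[10] * 8 for _ in range(8)]                    # step size 10 everywhere
levels = quantize([[26.0] * 8 for _ in range(8)], q)  # 26 / 10 rounds to 3
```

Larger step sizes discard more precision (and more high-frequency detail, since the table entries typically grow toward the high-frequency corner), which is the lossy part of the pipeline.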

Entropy coder 308 may include RLE coding module 310 and Huffman coding module 312. The standard run-length encoding described above with reference to Table 3 requires 4 Huffman tables. In contrast, in this embodiment, separate tables may be used for the RUNLENGTH and SIZE values for AC coefficients. In this embodiment, for the DC coefficients, the same tables may be used without changes. This may reduce the storage size required to maintain the Huffman tables. For example, in accordance with this embodiment, a table of codewords for all possible RUNLENGTH values may consist of 16 records (16*10 = 160 bits for the UV-components, 14*16 = 224 bits for the Y-component), and the table for SIZE may include 12 records (12*16 = 192 bits). This may result in a total of 576 bits, or 72 bytes. Thus, this approach may result in a significant conservation of memory resources as compared to existing entropy encoding techniques, which require 855 bytes.

Table 4 depicts a summary showing the reduced number of operations required when separate tables are used for RUNLENGTH and SIZE for AC coefficients.

Table 4. Complexity of standard RLE with short tables

A measure of the complexity of this approach to RLE encoding may involve determining the number of operations required. The number of operations may be compared with those required for standard RLE, as shown in Table 3 above. The primary difference appears in Action-4: in this embodiment two different tables must be addressed, so the number of these operations may double. Moreover, Action-6 is no longer required, as there is no need to calculate addresses in combined Huffman tables. A more detailed comparison between standard RLE and this embodiment may be formed by comparing Figures 7 and 8, which are described in further detail below.

Figure 4 depicts a diagram 400 showing the Rate-distortion curve (i.e., compression rate vs. quality of the reconstructed image using the PSNR metric) associated with standard JPEG entropy coding as compared with the entropy coding described in accordance with embodiments of the present disclosure. The approach described above in Figure 3 may simplify the encoding process, although the compression ratio may be reduced slightly (see Figure 4), because RUNLENGTH and SIZE may be independently encoded, without using the correlation between them. Figure 4 shows that the compression rate and corresponding quality level (calculated using the Rate-Distortion function) decrease very slightly. For example, for Rate = 1 Mbps the decrease in quality level for the embodiments described herein is less than 0.1 dB in comparison with standard JPEG using a PSNR metric.

Run-Length Encoding without tables for RUNLENGTH

In some embodiments, run-length encoding may be performed without using Huffman tables for RUNLENGTH. That is, for AC coefficients, only Huffman tables for SIZE may be used. For DC coefficients, the same tables described above may be used. Codewords for RUNLENGTH may not be required; instead, these zeros may be written into the output stream. For example, if there are six zeroes followed by a non-zero AC coefficient in sequence, the zeroes may be written into the stream, followed by SIZE and AMPLITUDE.
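Under one simplified reading of this variant (each zero coefficient written as a literal '0' bit, and only SIZE Huffman-coded via a hypothetical `size_code` lookup; a real bitstream would additionally need the SIZE codewords arranged to remain distinguishable from the literal zeros), the encoding loop shrinks to:

```python
def rle_encode_no_run_tables(ac, size_code):
    """Sketch of run-length encoding without RUNLENGTH tables: zero AC
    coefficients go straight into the stream, so no RUNLENGTH counter
    (Action-2/Action-3) and no table-address calculation (Action-6).

    ac: 63 quantized AC coefficients in zig-zag order.
    size_code: hypothetical lookup, size_code(size) -> bit-string.
    """
    out = []
    for coeff in ac:                    # Action-1: compare with zero
        if coeff == 0:
            out.append("0")             # zero written directly to the stream
            continue
        size = abs(coeff).bit_length()
        out.append(size_code(size))     # only SIZE is Huffman-coded
        amp = coeff if coeff >= 0 else coeff + (1 << size) - 1
        out.append(format(amp, "0{}b".format(size)))  # AMPLITUDE bits
    return "".join(out)
```

So six zeros followed by a coefficient of 3 become six '0' bits, then the SIZE-2 codeword, then the amplitude bits "11", matching the example in the text.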

The determination to utilize only Huffman tables for SIZE may become apparent after reviewing Figures 5 and 6, which depict the probability distributions of RUNLENGTH and SIZE values, respectively. As shown in Figure 5, the probability of a large zero sequence may be very small for different testing images (e.g., claire80.avi vs. soldiers_hi.avi). Here, the distribution function does not appear to change significantly for the different testing images. For example, the probability of 5 successive zeroes occurring during a sequence of AC coefficients may be less than 0.008 for the different testing images.

In contrast, Figure 6 depicts the probability distribution for SIZE values. Here, it may be noted that the distribution function for SIZE values may change for different movies and thus may be more complicated. For example, the probability at SIZE value 2 varies from approximately 0.45 for claire80.avi to approximately 0.55 for soldiers_hi.avi.

Table 5 depicts the complexity in terms of the number of operations required when performing run-length encoding without using Huffman tables for RUNLENGTH. The elimination of the RUNLENGTH tables may also eliminate the need for the RUNLENGTH counter. Thus, Action-2 and Action-3 (i.e., the RUNLENGTH counter modifications) are not necessary. The Action-6 requirement may also be eliminated, as there is no need to calculate addresses in the Huffman tables.

Table 5. Complexity of RLE without tables for RUNLENGTH

Therefore, by eliminating the Huffman tables for RUNLENGTH, the process of encoding may be simplified. As a result, the compression ratio may also decrease further, as shown in Figure 4, because the RUNLENGTH values are not compressed.

Figures 7-9 depict a series of flowcharts comparing standard JPEG entropy coding with the embodiments of the present disclosure. For example, Figure 7 depicts a flowchart 700 including operations 701-715 used to perform standard entropy coding in accordance with the JPEG image compression algorithm. Flowchart 700 includes the operation of address calculation in the Huffman tables for RUNLENGTH and SIZE (i.e., Action-6). The standard entropy coding technique shown in Figure 7 may require 855 bytes of memory consumption.

In contrast, Figure 8 shows flowchart 800 including operations 801-814 configured to perform entropy coding using separate tables for RUNLENGTH and SIZE in accordance with the embodiments described herein. In this embodiment, as discussed above, the operation of address calculation in Huffman tables is removed (thus no Action-6 is required). The embodiment shown in Figure 8 may require far less memory (i.e., 72 bytes) than standard entropy coding. Figure 9 shows a flowchart 900 including operations 901-911 configured to perform entropy coding in accordance with the embodiments described herein. Flowchart 900 depicts an embodiment of performing entropy coding without using tables for RUNLENGTH. In this embodiment, the operations related to the calculation of RUNLENGTH may be removed. For example, operations 705 (i.e., Action-2 corresponding to increasing Run counter by 1), 707, 709 and 710 of Figure 7 are removed. Flowchart 900 also depicts the writing of zero coefficients into the stream, as shown in operation 905. The embodiment shown in Figure 9 may require less memory (i.e. 24 bytes) than standard entropy encoding as well.

A summary of standard entropy coding as well as the embodiments described herein is provided below in Table 6. Table 6 shows the variations in memory consumption between (1) existing entropy encoding techniques, (2) entropy coding using short RL tables, and (3) entropy coding without using tables for RUNLENGTH. As shown below, the embodiments described herein provide a significant amount of memory conservation as compared to standard entropy coding.

Table 6. Memory Consumption of the three techniques

    Standard JPEG entropy coding (Figure 7)                     855 bytes
    Separate tables for RUNLENGTH and SIZE (Figure 8)            72 bytes
    Entropy coding without RUNLENGTH tables (Figure 9)           24 bytes

In accordance with one exemplary embodiment of the present disclosure a flowchart 1000 of operations for data compression is provided in Figure 10. The method may include converting data from a first color space to a second color space to generate converted data (1002). The method may further include receiving said converted data at a discrete cosine transform (DCT), said DCT being configured to generate at least one DCT coefficient (1004). The method may additionally include receiving said at least one DCT coefficient from said discrete cosine transform at a quantizer, said quantizer being configured to quantize said at least one DCT coefficient and to provide at least one quantized coefficient to an entropy encoder, said entropy encoder including a first Huffman table corresponding to a number of bits used to encode an amplitude of a non-zero AC coefficient (1006). Of course, additional operations are also within the scope of the present disclosure.
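As a rough illustration of operations 1002-1006, the sketch below runs a single 8-pixel row through a luma-only color conversion, a 1-D DCT-II, and a uniform quantizer. The quantization step qstep, the BT.601 luma weights, and the one-dimensional, single-component simplification are assumptions made for brevity; an actual implementation would apply a 2-D DCT to 8x8 blocks of all color components and pass the quantized coefficients to the entropy encoder of the disclosure:

```python
import math

def rgb_to_y(r, g, b):
    """Luma component of the RGB -> YCbCr conversion (BT.601 weights)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def dct_1d(samples):
    """DCT-II of an 8-sample row (one dimension of the 8x8 2-D DCT used by JPEG)."""
    n = len(samples)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i, x in enumerate(samples)))
    return out

def compress_row(rgb_row, qstep=16):
    """Sketch of the 1002 -> 1004 -> 1006 chain on one row of pixels:
    color-convert, transform, quantize. The quantized coefficients would
    then be handed to the entropy encoder."""
    y = [rgb_to_y(*px) for px in rgb_row]      # operation 1002 (luma only)
    coeffs = dct_1d(y)                         # operation 1004
    return [round(c / qstep) for c in coeffs]  # quantizer feeding operation 1006
```

For a uniform gray row the transform concentrates all energy in the DC coefficient, and every AC coefficient quantizes to zero, which is exactly the situation in which the RUNLENGTH-free encoding of Figure 9 is cheap.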

The embodiments described herein may be implemented, for example, in an integrated circuit (IC) which may include, for example, a System-on-a-Chip (SoC), an application specific integrated circuit (ASIC) and/or a field programmable gate array (FPGA). "Integrated circuit", as used in any embodiment herein, means a semiconductor device and/or microelectronic device, such as, for example, but not limited to, a semiconductor integrated circuit chip.

As used in any embodiment described herein, "circuitry" may comprise, for example, singly or in any combination, hardwired circuitry, programmable circuitry, state machine circuitry, and/or firmware that stores instructions executed by programmable circuitry. It should be understood at the outset that any of the operations and/or operative components described in any embodiment herein may be implemented in software, firmware, hardwired circuitry and/or any combination thereof.

Embodiments of the methods described above may be implemented in a computer program that may be stored on a storage medium having instructions to program a system (e.g., a machine) to perform the methods. The storage medium may include, but is not limited to, any type of disk including floppy disks, optical disks, compact disk read-only memories (CD-ROMs), compact disk rewritables (CD-RWs), and magneto-optical disks, semiconductor devices such as read-only memories (ROMs), random access memories (RAMs) such as dynamic and static RAMs, erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), flash memories, magnetic or optical cards, or any type of media suitable for storing electronic instructions. Other embodiments may be implemented as software modules executed by a programmable control device.

The present disclosure may provide numerous advantages over the prior art. For example, the embodiments described herein describe a number of low complexity entropy coding schemes that may decrease the computational complexity of a VLC-based entropy coder by approximately 50% and its memory consumption by approximately 70% in comparison with standard JPEG. The embodiments described herein allow for the selection of an efficient tradeoff between computational complexity and compression ratio.

For example, the embodiments described herein may not require the coding of zero sequences between non-zero coefficients (RUNLENGTH). In some embodiments, Huffman tables for RUNLENGTH values are also not needed, thus reducing memory consumption (see Table 6). It was shown that the probability of such sequences may be very small for different test movies (see Figure 5) and that the effect on compression ratio is quite small (see Figure 4).

Various features, aspects, and embodiments have been described herein. The features, aspects, and embodiments are susceptible to combination with one another as well as to variation and modification, as will be understood by those having skill in the art. The present disclosure should, therefore, be considered to encompass such combinations, variations, and modifications.