


Title:
METHOD AND SYSTEM FOR IMAGE PROCESSING
Document Type and Number:
WIPO Patent Application WO/2006/131866
Kind Code:
A3
Abstract:
A method and system for image processing based on image texture information provides steps for detecting a texture component based on image gradients of an image, including (i) computing (106) spatial image gradients of the image; (ii) determining (108) a value for a weighted image gradient per pixel within an image block, representing an average energy of the image gradients; (iii) computing (110) an average value and a variance value per image block; and (iv) processing (112) the image for quality improvement. A processing step then follows, using a given threshold. If the pixel within the image block has an image gradient average greater than the threshold, the image is classified as including a "texture" image area and the pixel within the image block is enhanced. However, if the image gradient average is smaller than the threshold, the image is classified as including a "smooth" image area and is enhanced by smoothing the pixel within the image block.

Inventors:
JASINSCHI RADU SERBAN (FR)
Application Number:
PCT/IB2006/051772
Publication Date:
March 29, 2007
Filing Date:
June 02, 2006
Assignee:
KONINKL PHILIPS ELECTRONICS NV (NL)
JASINSCHI RADU SERBAN (FR)
International Classes:
G06T5/00; G06T7/40
Foreign References:
EP1017239A22000-07-05
GB2323994A1998-10-07
EP0797349A21997-09-24
US20030081854A12003-05-01
US20030053711A12003-03-20
Other References:
JIWU H ET AL: "POSTFILTERING OF BLOCK EFFECTS BY EDGE MAP ANALYSIS", PROCEEDINGS OF THE IASTED/ISMM INTERNATIONAL CONFERENCE, 11 November 1996 (1996-11-11), pages 233 - 236, XP000770120
Attorney, Agent or Firm:
CHAFFRAIX, Jean (156 Boulevard Haussmann, Paris, FR)
Claims:

CLAIMS :

1. A method (100) of image processing, wherein the method comprises the steps of:

- detecting (104) a texture component of an image based on image gradients of the image, said detecting step itself comprising the sub-steps of:

- computing (106) spatial image gradients of the image;

- determining (108) a value for a weighted image gradient per pixel within an image block representing an average energy of the image gradients; and

- computing (110) an average value and a variance value per image block;

- processing (112) the image, the processing step itself comprising the sub-steps of:

- setting a threshold;

- determining (114) whether the pixel within the image block has an image gradient average greater than the threshold, and if so, then classifying (116) the image as including a "texture" image area and enhancing the pixel within the image block; and

- if the image gradient average is smaller than the threshold, classifying (118) the image as including a "smooth" image area and enhancing it by smoothing the pixel within the image block.

2. The method according to claim 1, wherein the detecting step is based on a collection of directional image gradients and wherein the computing sub-step of the spatial image gradients of the image uses at least four directional masks for the pixel within the image block in accordance with:

I_NS(x,y) = M_NS * I(x,y)
I_EW(x,y) = M_EW * I(x,y)
I_NWSE(x,y) = M_NWSE * I(x,y)
I_NESW(x,y) = M_NESW * I(x,y)

wherein I_NS(x,y) represents the spatial image gradient in the North-South direction and M_NS represents the mask in the North-South direction, I_EW(x,y) represents the spatial image gradient in the East-West direction and M_EW represents the mask in the East-West direction, I_NWSE(x,y) represents the spatial image gradient in the Northwest-Southeast direction and M_NWSE represents the mask in the Northwest-Southeast direction, and I_NESW(x,y) represents the spatial image gradient in the Northeast-Southwest direction and M_NESW represents the mask in the Northeast-Southwest direction.

3. The method according to claim 1 or 2, wherein the sub-step of determining the value for a weighted image gradient per pixel within the image block comprises computing a value of P(x,y) representing a normalized square root of the image gradient energy in accordance with:

P(x,y) = √( (I_NS² + I_EW² + I_NWSE² + I_NESW²) / 4 )

wherein I_NS ≡ I_NS(x,y), I_EW ≡ I_EW(x,y), I_NWSE ≡ I_NWSE(x,y) and I_NESW ≡ I_NESW(x,y).

4. The method according to claim 2 or 3, wherein the sub-step of computing an average value and a variance value per image block comprises computing first and second order statistics, respectively, given the local weighted image gradient P(x,y) per pixel, by computing its average for each image block, (N×N), and its variance within the (N×N) block, in accordance with:

⟨P⟩ = Σ P(x_i, y_i) / (N×N)

ΔP = √( Σ (P(x,y) − ⟨P⟩)² / (N×N) )

wherein ⟨P⟩ is the average for each (N×N) block of P(x,y), and ΔP is the variance within the (N×N) block per pixel of P(x,y).

5. The method according to claim 4, wherein N is equal to or greater than 2, so that the (N×N) block comprises at least 2×2 pixels.

6. The method according to claim 4 or 5, wherein the sub-step of classifying the image as including a texture image area comprises scaling an image intensity gain in accordance with:

G(x,y) = ⟨P⟩ + z and G(x,y) = ΔP + z

wherein G(x,y) is the gain for the average and variance values per pixel within the image block and z can be less than, greater than, or equal to 10.0.

7. The method according to any one of claims 2 to 6, wherein classifying the image as including a smooth image area comprises using a mask in accordance with:

M_smooth = (1/16) × | 1  2  1 |
                    | 2  4  2 |
                    | 1  2  1 |

wherein M_smooth is an edge mask equation for smoothing the pixels within the image block.

8. The method according to any one of the preceding claims, wherein upon processing of the image, it further comprises objectively measuring the visual quality improvement by means of a Blockiness Edge Impairment Metric (BIM).

9. The method according to any one of the preceding claims, wherein if the image gradient average is greater than the threshold, it further comprises the sub-step of enhancing the pixel within the image block using peaking-LTI (Luminance Transition Improvement) algorithms.

10. The method according to any one of the preceding claims, wherein smoothing the pixel within the image block comprises applying a Gaussian smoothing filtering process.

11. The method according to any one of the preceding claims, wherein the input image comprises video images generated by a transmitter in a TV, DVD, or DVD read and/or write device.

12. The method according to any one of the preceding claims, wherein the method further comprises performing the detecting step and the image processing step after the video image has been decoded.

13. The method according to any one of the preceding claims, wherein the method further comprises performing the detecting step and the image processing step before the video image has been encoded.

14. The method according to any one of the preceding claims, wherein the method further comprises performing the detecting step and the image processing step between encoding and decoding steps of the video image.

15. The method according to any one of the preceding claims, wherein the method further comprises outputting a processed image on a consumer device screen or storing the resulting image on a local device storage.

16. The method according to any one of claims 4 to 15, wherein the step of detecting (104) a texture component based on image gradients of the image comprises computing third order statistics and above.

17. The method according to any one of claims 2 to 15, wherein the computing sub-step of the spatial image gradients of the image further comprises processing a log of the spatial image gradients, log(I(x,y)), and utilizing said log to determine the value for a weighted image gradient per pixel within the image block to compute the value of P(x,y).

18. A system (10) of image processing having a video decoder (14) adapted to receive a video signal (12), an image processing module (16) and a display driver (22), wherein the processing module comprises:

- a texture detection module (24) configured to detect a texture component based on image gradients of an image by computing spatial image gradients of the image, by determining a value for a weighted image gradient per pixel within an image block representing an average energy of the image gradients, and by computing an average value and a variance value per image block; and

- an improvement module (26) configured to process the image by setting a threshold, determining whether the pixel within the image block has an image gradient average greater than the threshold, and if so, then classifying the image as including a "texture" image area and enhancing the pixel within the image block, and if the image gradient average is smaller than the threshold, classifying the image as including a "smooth" image area and enhancing it by smoothing the pixel within the image block.

19. The system of claim 18, wherein the texture detection module (24) is further configured to compute first and second order statistics, given the local weighted image gradient P(x,y) per pixel during the computation of the average energy value and the variance value per image block, by computing its average for each image block, (N×N), and its variance within the (N×N) block, in accordance with:

⟨P⟩ = Σ P(x_i, y_i) / (N×N)

ΔP = √( Σ (P(x,y) − ⟨P⟩)² / (N×N) )

wherein ⟨P⟩ is the average for each (N×N) block of P(x,y), and ΔP is the variance within the (N×N) block per pixel of P(x,y).

20. The system according to claim 17 or 18, wherein the improvement module (26) is further configured to scale an image intensity gain in accordance with:

G(x,y) = ⟨P⟩ + z and

G(x,y) = ΔP + z

wherein G(x,y) is the gain for the average and variance values per pixel within the image block and z can be less than, greater than, or equal to 10.0.

21. The system according to any one of claims 18 to 20, wherein the improvement module (26) is further configured to use a mask in accordance with:

M_smooth = (1/16) × | 1  2  1 |
                    | 2  4  2 |
                    | 1  2  1 |

wherein M_smooth is an edge mask equation for smoothing the pixels within the image block.

22. An article comprising a computer program product having a sequence of instructions stored on a computer readable medium, which when executed by a processor, cause the processor to:

- detect a texture component based on image gradients of an image by:

- computing spatial image gradients of the image;

- determining a value for a weighted image gradient per pixel within an image block representing an average energy of the image gradients;

- computing an average value and a variance value per image block; and

- process the image by:

- setting a threshold;

- determining whether the pixel within the image block has an image gradient average greater than the threshold, and if so, classifying the image as including a "texture" image area and enhancing the pixel within the image block; and

- if the image gradient average is smaller than the threshold, classifying the image as including a "smooth" image area and enhancing it by smoothing the pixel within the image block.

Description:

"METHOD AND SYSTEM FOR IMAGE PROCESSING"

FIELD OF THE INVENTION

The present invention relates to visual quality improvement in image processing, and in particular, the invention relates to a method of image processing for improving visual quality of video images based on image texture information, and to a corresponding system for carrying out said image processing method.

BACKGROUND OF THE INVENTION

Advances in technology have made it possible to achieve greater and greater speed and to handle increasingly large amounts of data and information in visual technologies. In fact, to handle the large amounts of information resulting from the multimedia trend in communication and visual media, data compression is imperative. In general, the amount of data associated with visual information is so large that its handling, transmission, and processing would require enormous storage capacity. Although the capacities of several storage media are substantial, the access speeds are usually inversely proportional to the capacity. Storage and transmission of such data require large capacity and bandwidth. To eliminate the need for large storage capacity, an image data compression technique is used that reduces the number of bits required to store or transmit an image without any appreciable loss of data. Typically, information compression techniques include an image coding method or picture encoding standards proposed by a standards body such as the Moving Picture Experts Group (MPEG) of the International Organization for Standardization (ISO). Generally, techniques based on the MPEG standard adopt block-based motion estimation and discrete cosine transform (DCT) blocks. In the MPEG coding techniques, the DCT is used as a basic principle of information compression: image data is coded and the coded bit stream thus obtained is supplied to storage or communication media, thereby reducing the transfer rate of data, the bandwidth of the communication media, the storage space of the storage media, and so forth. Most picture encoding standards utilize the DCT in 8 x 8 pixel block units to pack information into a small number of transform coefficients. This block-based DCT scheme is based on the local spatial correlation properties of an image.

Therefore, techniques such as described above remove unnecessary or redundant data as well as data which will not dramatically affect the quality of the reproduced video and/or audio (it must be understood that the term "quality" can vary in accordance with personal desires or specified requirements). In other words, an image data compression removes redundancies contained in the image signals. The redundancies may include a spectral redundancy among colors, a temporal redundancy between successive image screens, a spatial redundancy between adjacent pixels within the image screen, and a statistical redundancy.

For example, the DCT, as a typical method of image coding for removing the spatial redundancy, divides original input images into small-size blocks and processes them individually. In a transmitter, each of the blocks of an original image is converted by the DCT and transform coefficients are generated. The DCT also has a tendency of concentrating frequency characteristics irregularly distributed over the field into the low frequency region. Accordingly, the MPEG coding method performs an operation called "quantization", in which the high frequency region is ignored after the DCT. Thus, the method is capable of compressing an image efficiently while limiting the loss of information. The transform coefficients are then quantized and transmitted to the receiver. In a receiver, the transform coefficients are inversely quantized and converted so that each of the blocks of the original image is regenerated. Further, in the MPEG coding method, the DCT is performed in a square block unit including a certain number of pixels, i.e., 8 x 8 pixels or 16 x 16 pixels, for one picture field. This DCT processing scheme acts as a factor that forces pixels at the boundaries of the square block to have discontinuous values, in combination with the above-mentioned DCT characteristic of concentrating information into the low frequency region. That is, in the MPEG decoding method, artifacts are generated, i.e., a discontinuity of the image called the "blocking effect", which creates a significant difference between the values of pixels at the boundary of a given square block and those in the adjacent square blocks.

Image data compressed in accordance with the MPEG coding method adopting the DCT is conventionally decoded by means of a digital image decoding apparatus that may include a decoder such as a bit stream decoder, a memory and a display. As an example, a digital image decoding apparatus can be any conventional digital motion picture coder/decoder, which is widely used in image processing systems such as a High Definition Television (HDTV). The bit stream decoder then performs the inverse quantization operation in order to derive the high frequency components ignored upon coding, and then performs the inverse discrete cosine transform (IDCT) of the inversely quantized data, thereby decoding the image data. The image data reconstructed by the bit stream decoder passes through the memory and is displayed on the display.

However, as mentioned previously, when the image data is restored, considerable image deterioration occurs, for example, blocking artifacts near the block boundaries, corner outliers at the cross points of blocks, and ringing noise near the image edges. These artifacts further include motion-related (visual object occlusion) halo. Such deterioration is serious when the image is highly compressed.

In the decoded digital images as above, the blocking effect occurs near the discontinuous boundaries between the blocks. This blocking effect is generated during the transform coding process of the divided blocks of digital images. Moreover, when the quantization step size is increased during quantization, the quantization error increases and the blocking effect at the discontinuous boundary between blocks becomes even more apparent.

Image deterioration such as blocking artifacts can also be caused by grid noise generated along the block boundary in a relatively homogeneous area. Further, this discontinuity of picture, that is, the blocking effect, deteriorates the visual characteristics of an image for an observer and gives rise to undesirable artifacts in the boundaries of square blocks, thereby causing the observer to strain his eyes. For these foregoing reasons, the digital image decoding apparatus requires suppression or reduction of the discontinuity of images or the blocking effects.

Even with the most advanced compression techniques presently available, noise is introduced into the decompressed signal. This noise or picture degradation, in part, appears as fuzziness along the edges of the moving objects in the picture. Therefore, in order to improve the quality of the picture and/or sound, it is necessary to filter the decompressed signals by identifying and eliminating or minimizing the amount of noise causing the picture and/or sound degradation. This has presented a continuing challenge for engineers and designers.

In order to reduce these problems associated with blocking artifacts and image quality degradation and to enhance the quality of displayed images, a number of methods have been suggested.

For example, US 5,852,475 to Gupta describes a post-processing unit that provides a digital noise reduction unit and an artifact reduction unit. The post-processor works on a current frame of pixel data using information from the immediately preceding post-processed frame stored in a frame memory. In particular, in Gupta, the post-processor first identifies texture and fine detail areas in a decoded image and applies artifact reduction only on portions of the image that are not part of an edge and are not part of a texture or fine detail area. Since artifact reduction is not utilized on these areas by the post-processor, the post-processed image is not softened in regions where such softening is easily noticed by the human eye. Thus, using an edge map for identifying a pixel as one of an edge pixel and a non-edge pixel, the artifact reduction unit of Gupta performs a spatially-variant filtering using only information in an edge map. However, the conventional techniques described in the above example are often unsatisfactory and somewhat incomplete in providing higher visual quality in images. For instance, in providing a digital noise reduction unit and an artifact reduction unit in a post-processor, Gupta is mostly confined to the use of local edge information to reduce ringing noise on a block of pixels and then filter the block of pixels using a spatially-variant filter. The filtering is done by generating an edge map and processing and classifying pixels according to a number of classifications such as edge, border, or shade classifications. As a result, the above methods require complicated pixel processing steps that may not ensure effective detection of image regions with high/low texture or brightness variation, nor a uniform and robust improvement in the visual quality of images.

Therefore, it is desirable to develop a method and system based on image texture information, and there is a need to quantify the influence of texture information on visual quality, in a way that avoids the above-mentioned problems and offers more robustness and versatility while being less costly and simpler to implement.

SUMMARY OF THE INVENTION

Accordingly, it is an object of the invention to provide a method of image processing that includes detecting a texture component based on image gradients of an image. The detecting step involves (i) computing spatial image gradients of the image; (ii) determining a value for a weighted image gradient per pixel within an image block representing an average energy of the image gradients; (iii) computing an average value and a variance value per image block; and (iv) processing the image. The processing step includes setting a given threshold; determining whether the pixel within the image block has an image gradient average greater than the threshold, and if so, classifying the image as including a "texture" image area and enhancing the pixel within the image block using peaking-LTI (Luminance Transition Improvement) algorithms; and if the image gradient average is smaller than the threshold, classifying the image as including a "smooth" image area and enhancing it by smoothing the pixel within the image block.

One or more of the following features may also be included.

Advantageously, the method as described above, by processing the image, improves the visual image quality, reduces noise, and corrects visual artifacts.

In one aspect, the detecting step is based on a collection of directional image gradients and the computing step of the spatial image gradients of the image uses at least four directional masks for the pixel within the image block in accordance with:

I_NS(x,y) = M_NS * I(x,y)
I_EW(x,y) = M_EW * I(x,y)
I_NWSE(x,y) = M_NWSE * I(x,y)
I_NESW(x,y) = M_NESW * I(x,y)

where I(x,y) represents the spatial image gradient in at least four directions and M represents the masks in the four directions.

In yet another aspect, determining the value for a weighted image gradient per pixel within the image block includes computing a value of P(x,y) representing a normalized square root of the image gradient energy in accordance with:

P(x,y) = √( (I_NS² + I_EW² + I_NWSE² + I_NESW²) / 4 )

where I_NS ≡ I_NS(x,y), I_EW ≡ I_EW(x,y), I_NWSE ≡ I_NWSE(x,y) and I_NESW ≡ I_NESW(x,y).

Further, computing an average value and a variance value per image block includes computing first and second order statistics, respectively, given the local weighted image gradient P(x,y) per pixel, by computing the average for each image block, (N×N), and the variance within the (N×N) block, in accordance with:

⟨P⟩ = Σ P(x_i, y_i) / (N×N)

ΔP = √( Σ (P(x,y) − ⟨P⟩)² / (N×N) )

where ⟨P⟩ is the average for each (N×N) block of P(x,y), and ΔP is the variance within the (N×N) block per pixel of P(x,y).

As another aspect, the method also includes performing the detecting step and the processing step of the image for quality improvement after the video image has been decoded. Moreover, the detecting step and the processing of the image can be performed before the video image has been encoded, or between the encoding and decoding steps of the video image.

Additionally, the invention also provides a system for image processing configured to reduce noise and correct visual image artifacts, having a video decoder adapted to receive a video input, an image processing module and a display driver. The processing module includes a texture detection module configured to detect a texture component based on image gradients of an image by computing spatial image gradients of the image, by determining a value for a weighted image gradient per pixel within an image block representing an average energy of the image gradients, and by computing an average value and a variance value per image block. The processing module also includes an improvement module configured to process the image for quality improvement by setting a threshold and determining whether the pixel within the image block has an image gradient average greater than the threshold; if so, the image is classified as including a "texture" image area and the pixel within the image block is enhanced. This enhancement may be performed using peaking-LTI (Luminance Transition Improvement) algorithms. If the image gradient average is smaller than the threshold, the image is classified as including a "smooth" image area and enhanced by smoothing the pixel within the image block.

Other features of the method and device are further recited in the dependent claims.

These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described in the following description, drawings and from the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a general block diagram of a visual image quality improvement system in accordance with one embodiment of the present invention;

FIG. 2 is another block diagram of the visual image quality improvement system of FIG. 1;

FIG. 3 is a flowchart of an exemplary method implemented by the visual image quality improvement systems of FIGs. 1 and 2; and

FIG. 4 shows the four directions used for computing image gradients.

DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 1 shows a block diagram of a visual image quality improvement system 10, which can be seen as implemented in a generic display system block diagram. The system 10 can be a video receiver component of any number of different electronic devices, such as midstream and high-end HDTVs as well as DVD+RW players, or the like. In particular, in the system 10, a video signal 12 is the input of a video decoder 14. Although not illustrated, an A/D converter would be used if the video signal 12 consists of analogue video signals or RGB video. Optionally, if mixed signals are received, such as from a PCI or Ethernet connection, there might be an optional digital decode module. Subsequently, the converted digital RGB signals or video image signals are driven into an image processing module 16, which is itself connected to an optional frame buffer (RAM) 18 and a processor or microprocessor 20. Finally, the processing module 16 is linked to a technology-specific display driver 22. FIG. 2 is another block diagram of the system 10, in which the image processing module 16 is shown as including a texture detection module 24 and a visual quality improvement module 26. As previously mentioned, the visual quality of video images and the like depends fundamentally on two factors: (i) spatial, and (ii) temporal information. The spatial information includes color, edges, and texture, while the temporal information includes velocity and models. In combining these two factors, the aim is to improve the quality and perception of spatial details as well as

their consistent motion in time. Consequently, with the increased use of digital video in consumer devices, new types of visual artifacts are generated, from upconversion and de-interlacing artifacts, e.g., flickering, in the temporal domain to blockiness and ringing noise in the spatial domain. In the system 10, the focus is on spatial information, in particular, the content information provided by texture. For example, with the increase in spatial resolution, from CRT to LCD to high definition (HD) TV, and in temporal frequency, from 50 to 75 to 100 Hz and higher values, more and more visual artifacts become visible. Therefore, for the system 10, identifying image regions according to texture is an important feature to reduce the presence of some of these artifacts.

FIG. 3 is a flowchart illustrating an example of implementation of a method 100 for visual image quality improvement, which is based on image texture information. The method 100 for the visual quality improvement based on image texture information includes two parts: (i) texture detection; and (ii) post-processing of image areas based on statistical texture properties.

In particular, the method 100 begins with the driving (102) of the video input into a spatial texture detection step 104 that includes a number of distinct sub-steps. The detection step 104 includes the computation of spatial image gradients of the image (sub-step 106), followed by determining the average energy of the image gradients (sub-step 108), and computing an average value and a variance value per image block (sub-step 110). In other words, the spatial texture detection step 104 is based on the use of a collection of directional image gradients in different directions: vertical, horizontal, and two diagonal directions (45° and 135°). Gradients along four different directions: (i) north-south (NS); (ii) east-west (EW); (iii) northwest-southeast (NWSE); and (iv) northeast-southwest (NESW), as shown in FIG. 4, are used.

Further, the spatial derivatives use the following masks along these four directions:

M_NS =   |  1   1   1 |
         |  0   0   0 |
         | -1  -1  -1 |

M_EW =   |  1   0  -1 |
         |  1   0  -1 |
         |  1   0  -1 |

M_NWSE = |  1   1   0 |
         |  1   0  -1 |
         |  0  -1  -1 |

M_NESW = |  0   1   1 |
         | -1   0   1 |
         | -1  -1   0 |

Using these four masks, the spatial image gradients of the image can be computed (sub-step 106):

I_NS(x,y) = M_NS * I(x,y)
I_EW(x,y) = M_EW * I(x,y)
I_NWSE(x,y) = M_NWSE * I(x,y)
I_NESW(x,y) = M_NESW * I(x,y)

Then, the sub-step 108 of determining the average energy of the image gradients is carried out. These pixel-based image gradients are squared, summed over all directions, divided by 4, and the square root of the result is taken. Thus,

P(x,y) = √( (I_NS² + I_EW² + I_NWSE² + I_NESW²) / 4 ),

where I_NS ≡ I_NS(x,y), and so forth, and P(x,y) represents the average image gradient per pixel. Indeed, P(x,y) represents the normalized square root of the image gradient energy. Given the weighted image gradient P(x,y) per image pixel, first and second order statistics per given square block can thus be computed. The average computation is the first order statistics computation and the variance computation is the computation of the second order statistics. However, using other types of computations is also possible, such as the computation of third order statistics and above.
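The texture-detection computation described so far (four directional masks, then the weighted gradient P(x,y)) can be sketched in pure Python as below. This is a non-authoritative illustration: the helper names are our own, the border handling (zero padding) is an assumption the text does not specify, and the North-South mask is assumed to be the vertical counterpart of the East-West mask.

```python
# Sketch of sub-steps 106 and 108: four 3x3 directional masks are applied
# to the image and combined into the weighted gradient P(x, y).
# Pure-Python illustration; names such as `convolve3x3` are our own.

M_NS   = [[1, 1, 1], [0, 0, 0], [-1, -1, -1]]    # assumed NS mask
M_EW   = [[1, 0, -1], [1, 0, -1], [1, 0, -1]]
M_NWSE = [[1, 1, 0], [1, 0, -1], [0, -1, -1]]
M_NESW = [[0, 1, 1], [-1, 0, 1], [-1, -1, 0]]

def convolve3x3(img, mask):
    """Correlate a 3x3 mask with the image (zero padding at the borders)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy < h and 0 <= xx < w:
                        acc += mask[dy + 1][dx + 1] * img[yy][xx]
            out[y][x] = acc
    return out

def weighted_gradient(img):
    """P(x,y): normalized square root of the directional gradient energy."""
    grads = [convolve3x3(img, m) for m in (M_NS, M_EW, M_NWSE, M_NESW)]
    h, w = len(img), len(img[0])
    return [[(sum(g[y][x] ** 2 for g in grads) / 4.0) ** 0.5
             for x in range(w)] for y in range(h)]
```

For a constant image, P(x,y) is zero away from the borders, while a vertical step edge produces a large P along the edge — which is exactly what the later classification step relies on.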

Therefore, with these gradients, the average and variance per image block (e.g., 2 by 2) can be computed in the sub-step 110 of the step 104. This is realized in accordance with the following in computing the average for each N×N block:

⟨P⟩ = Σ P(x_i, y_i) / (N×N)

and in accordance with the following in computing the variance within this N×N block:

ΔP = √( Σ (P(x,y) − ⟨P⟩)² / (N×N) )
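The two block statistics just defined can be sketched as a small helper; `block_stats` is our own name, it operates on one N×N block of a precomputed gradient map P, and the ΔP formula follows the text's definition (note that, with the square root, it is formally a standard deviation).

```python
# Sketch of sub-step 110: the average <P> and the "variance" ΔP of the
# weighted gradient map over one N x N block, per the formulas above.

def block_stats(P, N):
    """Return (<P>, ΔP) for the top-left N x N block of the map P."""
    vals = [P[y][x] for y in range(N) for x in range(N)]
    mean = sum(vals) / (N * N)
    delta = (sum((v - mean) ** 2 for v in vals) / (N * N)) ** 0.5
    return mean, delta
```

For example, `block_stats([[1.0, 3.0], [3.0, 1.0]], 2)` gives an average of 2.0 and a ΔP of 1.0.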

Referring still to FIG. 3, the sub-step 110 of computing the average value and the variance value per image block is followed by a step 112 of processing the image for visual quality improvement. This is realized by using information about the average (or variance) per image block computed in the spatial texture detection step 104. If a pixel within an image block has an image gradient average (IGA) (or variance, IGV) that is larger than a given threshold T, as in a sub-step 114, then the pixel is classified as 'texture' and the pixel is enhanced by applying a peaking-LTI (Luminance Transition Improvement) algorithm (sub-step 116). On the other hand, if the IGA is less than the threshold T in the sub-step 114, the pixel of the image is classified as belonging to a 'smooth' area and is smoothed out by applying a traditional Gaussian smoothing algorithm in a sub-step 118.

Therefore, the visual quality improvement method 100 based on spatial texture information depends upon two operations: (i) for an image block for which the average or variance of the image gradient is larger than a threshold (sub-step 114), a peaking-LTI operation is performed on all the pixels inside the block (sub-step 116); and (ii) if the average or variance of the image gradient is smaller than the threshold, then all the pixels in the image block are smoothed.
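The per-block texture/smooth decision can be sketched as follows. A plain finite-difference gradient magnitude stands in for the weighted gradient P(x, y), and the block size and threshold are illustrative parameters; the peaking-LTI and smoothing operators themselves are applied afterward per the classification.

```python
import numpy as np

def texture_map(img, N=8, T=4.0):
    """Label each N x N block 'texture' if the average image gradient in
    the block exceeds threshold T, else 'smooth' (a simplified sketch:
    np.gradient stands in for the patent's weighted gradient P(x, y))."""
    img = img.astype(np.float64)
    gy, gx = np.gradient(img)
    P = np.hypot(gx, gy)                 # stand-in for the weighted gradient
    h, w = P.shape
    blocks = P[:h - h % N, :w - w % N].reshape(h // N, N, w // N, N)
    iga = blocks.mean(axis=(1, 3))       # image gradient average per block
    return np.where(iga > T, "texture", "smooth")
```

A flat region classifies as 'smooth' (to be Gaussian-smoothed), while a steep intensity ramp classifies as 'texture' (to be peaked).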

Specifically, for the sub-step 116, the image intensity gain (G) is scaled in accordance with the following:

G(x, y) = ⟨P⟩ + z    and    G(x, y) = ΔP + z

for the average and variance values per pixel within the image block, respectively, where the value of z can be 10.00 or any other suitable value. Moreover, the smoothing in the sub-step 118 is realized, for example, with the mask:

M_smooth = (1/16) × | 1 2 1 |
                    | 2 4 2 |
                    | 1 2 1 |
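Applying this 3×3 Gaussian smoothing mask to a pixel block can be sketched as below; the replicated-edge border handling is an implementation assumption, as the patent does not specify how block borders are treated.

```python
import numpy as np

# The 3 x 3 Gaussian smoothing mask from the text, normalized by 1/16.
M_SMOOTH = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=np.float64) / 16.0

def smooth_block(block):
    """Smooth a pixel block by convolving with M_SMOOTH (same-size
    output; edges are replicated, which is an assumption)."""
    padded = np.pad(block.astype(np.float64), 1, mode="edge")
    out = np.zeros(block.shape, dtype=np.float64)
    for dy in range(3):
        for dx in range(3):
            out += M_SMOOTH[dy, dx] * padded[dy:dy + block.shape[0],
                                             dx:dx + block.shape[1]]
    return out
```

Because the mask coefficients sum to 16 before normalization, the mask sums to 1 and a constant block passes through unchanged.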

For example, for an image processed by the method 100 for visual image quality improvement in the system 10, performing the statistical calculations described in the sub-steps 108 and 110, the variance of the image gradient can capture image regions with texture patches. Typically, image regions having texture patches represent things such as grass, tree branches, a person's contours and clothing, water waves, and the like. If the computation of P(x, y), that is, the local weighted image gradient, were applied to any image, it would detect the boundaries of objects or persons captured in the image as well as the texture patches, which can be marked, e.g., in an intense red color for analytical purposes.

When the sub-steps 108 and 110 are carried out, for an 8×8 pixel image block, for example, the treated image displays useful characteristics of the texture information of the image. First, using the statistical computation of the variance of the image gradient, the image regions with the texture patches are captured (e.g., some areas would be shown in a marking color such as dark red as being relevant for texture, and other areas would not). The "intensity" represented by the red color is approximately constant over an entire region because the variance measures the variations of the pixels in the image block with respect to the average.

Secondly, the average of the image gradient detects a gradation in texture values, and the magnitude of a dark marking color such as red, i.e., its "intensity", varies considerably. This results from the fact that the average of the image gradient is proportional to the texture energy or power spectrum because, as described above, P(x, y) is a local weighted energy of the directional gradients or texture information. Furthermore, higher order statistics can be computed in the sub-steps 108 and 110, in addition to the average value and the variance value per image block, in order to generate higher quality visual improvement. In the method 100, once the spatial texture detection step 104 and the step 112 of processing the image for visual quality improvement have been performed, the reduction in image blockiness, the increase in image contrast, and the improvement in the visual quality of images can be readily observed. In fact, when closely observed and compared (for example, in a 2×2 image block structure), an image modified and improved by applying the computation of the average as a texture detection metric displays markedly superior contrast, reduced blockiness, and improved local detail with respect to texture appearance. Texture and boundaries are visibly more accurate and cleanly displayed, and image details such as a blue sky look smoother and less blocky, with fewer artifacts than in an unprocessed image, while the details in the processed image are sharper than in the unprocessed image.

The description of the improvement in the visual quality can be either based on human subjective visual examination and inspection, or can be measured in an objective, quantitative manner. After processing the image for quality improvement as illustrated in FIG. 3, the visual quality improvement of an image can be objectively measured using a BIM (Blockiness Edge Impairment Metric) technique. The BIM technique measures the degree of blockiness that occurs in images due to digital encoding. For example, using a sequence of images, the BIM values for horizontal and vertical directions were compared for the unprocessed and processed sequences of images, as follows:

The BIM values above for the unprocessed image and the processed image were calculated separately. The closer each BIM value is to 1.0, the less blockiness the visual image displays. For example, as shown in the above table, a quantitative reduction in blockiness of approximately 12% is not uncommon in a processed image. In other words, such BIM values can be applied to different types of video sequences with similar visual quality improvement results.
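A blockiness measurement in this spirit can be sketched as below. This is an illustrative stand-in, not the published BIM metric: it compares difference energy at the N-pixel block boundaries against difference energy within blocks, so that values near 1.0 indicate little blockiness, consistent with the interpretation of the BIM values above.

```python
import numpy as np

def blockiness_ratio(img, N=8):
    """Simplified horizontal blockiness measure (an illustrative
    stand-in for BIM, not the published metric): ratio of the mean
    column-difference energy at N-pixel block boundaries to the mean
    within blocks. Values near 1.0 indicate little blockiness."""
    img = img.astype(np.float64)
    diff = np.abs(np.diff(img, axis=1))       # horizontal pixel differences
    cols = np.arange(diff.shape[1])
    on_boundary = (cols % N) == N - 1         # differences crossing a block edge
    boundary = diff[:, on_boundary].mean()
    interior = diff[:, ~on_boundary].mean()
    return boundary / interior if interior > 0 else float("inf")
```

A smooth intensity ramp scores 1.0 (no blockiness), while an image made of flat 8-pixel-wide blocks scores far above 1.0.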

In other embodiments, the step of computing the spatial image gradients of the image can also include processing a log of the image, log(I(x, y)), instead of I(x, y), and utilizing log(I(x, y)) to determine the value for the weighted image gradient per pixel within the image block, i.e., to compute the value of P(x, y), the local weighted image gradient. This has the advantage of compressing the dynamic range and thus achieving improved texture discrimination.
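The log-compression variant amounts to a pointwise transform applied before the gradient computation. In this sketch the small offset eps, which avoids log(0) for zero-valued pixels, is an implementation assumption not stated in the text.

```python
import numpy as np

def log_compress(img, eps=1.0):
    """Dynamic-range compression per the log-gradient variant: work on
    log(I(x, y)) instead of I(x, y) before computing P(x, y). The
    offset eps avoids log(0) and is an implementation assumption."""
    return np.log(img.astype(np.float64) + eps)
```

The transform is monotonic, so gradient directions are preserved while the 0-255 intensity range is compressed to roughly 0-5.5.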

Although in the method 100 shown in FIG. 3, the video input step 102, which can be processed in a TV or DVD+RW player, drives into a spatial texture detection step 104 once it has been decoded, it can also happen that the detection step 104 and the processing step 112 are performed before the video image has been encoded by an encoder, such as that in a DVD+RW player or high-end TVs. Equally possible is that these steps 104 and 112 occur between the encoding and the decoding steps of the video image. Further, the resulting, higher quality video may be either sent to a local storage, for example, of a DVD+RW player, such as RAM 18 of the system 10 of FIG. 1, or sent to the display driver 22 for display on a consumer device such as a TV screen.

Moreover, the invention may be incorporated and implemented in the processing of video images to improve the visual quality of images in several fields of application such as telecommunication devices like mobile telephones, PDAs, video conferencing systems, video on 3G mobiles, security cameras, and various types of consumer electronic devices and electronic equipment, but it can also be applied to systems providing two-dimensional still images or sequences of still images as well as three-dimensional sequences of images. It can be noted that there are numerous ways of implementing functions by means of items of hardware or software, or both. In this respect, the drawings are very diagrammatic and represent only one possible embodiment of the invention. Thus,

although a drawing shows different functions as different blocks, this by no means excludes that a single item of hardware or software carries out several functions. Nor does it exclude that an assembly of items of hardware or software or both carry out a function. The remarks made hereinbefore demonstrate that the detailed description, with reference to the drawings, illustrates rather than limits the invention. There are numerous alternatives, which fall within the scope of the appended claims. Any reference sign in a claim should not be construed as limiting the claim. The word "comprise" or "comprising" does not exclude the presence of other elements or steps than those listed in a claim. The word "a" or "an" preceding an element or step does not exclude the presence of a plurality of such elements or steps.