

Title:
TEMPORAL LUMINANCE VARIATION DETECTION AND CORRECTION FOR HIERARCHICAL LEVEL FRAME RATE CONVERTER
Document Type and Number:
WIPO Patent Application WO/2013/074294
Kind Code:
A1
Abstract:
Systems and methods for the reduction of motion compensation artifacts in standard or high resolution image interpolation, and more specifically for temporal luminance variation, are described. In one innovative aspect, a method of correcting temporal luminance variation (TLV) artifacts during frame rate conversion is provided. The method includes detecting TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation information. The method further includes determining the location of TLV artifacts in an interpolated image between the first image and the second image. The method also includes modifying the interpolated image based on the determination.

Inventors:
LE DINH CHON-TAM (US)
LE DINH KHA (US)
LE DINH PHUC-TUE (US)
HANSEN DAVID R (US)
Application Number:
PCT/US2012/062800
Publication Date:
May 23, 2013
Filing Date:
October 31, 2012
Assignee:
QUALCOMM INC (US)
International Classes:
H04N7/01
Foreign References:
US 2010/0253835 A1 (2010-10-07)
US 2004/0057517 A1 (2004-03-25)
US 33896008 A (2008-12-18)
Other References:
ROBERT P ET AL: "Advanced high definition 50 to 60 Hz standard conversion", INTERNATIONAL BROADCASTING CONVENTION (IBC 1988), Brighton, UK; London, UK: IEE, 1 January 1988, pages 21-24, XP006518347, ISBN: 978-0-85296-368-5
Attorney, Agent or Firm:
NELSON, Eric M. (2040 Main Street, Fourteenth Floor, Irvine, California, US)
Claims:
CLAIMS

WHAT IS CLAIMED IS:

1. A method of correcting temporal luminance variation (TLV) artifacts during frame rate conversion, comprising:

detecting TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation information;

determining the location of TLV artifacts in an interpolated image between the first image and the second image; and

modifying the interpolated image based on the determination.

2. The method of Claim 1, wherein detecting TLV includes:

receiving a first lower resolution image and a second lower resolution image corresponding to the first image and the second image;

extracting the edge information, wherein extracting the edge information includes extracting edges from the first lower resolution image;

extracting TLV characteristics from the first lower resolution image and the second lower resolution image;

detecting image-based TLV based on the extracted edges, the identified edges, and the TLV characteristics; and

detecting at least one of local TLV and global TLV based on the TLV characteristics and the motion estimation information, the motion estimation information including motion estimation between the first lower resolution image and the second lower resolution image and motion estimation errors between the first lower resolution image and the second lower resolution image.

3. The method of Claim 2, wherein determining the location includes localizing TLV artifacts from pixel-based intensities of the first lower resolution image and the second lower resolution image, pixel-based motion estimation weighted errors, pixel-based motion vectors, delayed image-based decisions, and detected image-based TLV and local-global TLV.

4. The method of any one of Claims 1 to 3, wherein the edge information comprises chrominance-based edge information and luminance-based edge information.

5. The method of any one of Claims 2 to 4, wherein the edge information comprises edge information indicating edges included in the first reduced resolution image and comparative edge information indicating whether an edge included in the first reduced resolution image appeared in a reference image.

6. The method of any one of Claims 2 to 5, wherein extracting the edge information includes comparing a contour strength for a pixel in a selected image with a contour threshold.

7. The method of any one of Claims 1 to 6, wherein the TLV characteristics are determined based at least in part on an error estimation.

8. The method of any one of Claims 1 to 7, wherein the TLV characteristics include a value indicating a change in a local mean intensity value in a portion of an image.

9. The method of any one of Claims 1 to 8, wherein the TLV characteristics include a value indicating at least one of a block-based local motion estimation error and a zero motion estimation error for an image.

10. The method of any one of Claims 1 to 9, wherein modifying the interpolated image comprises:

non-linearly interpolating moving objects included in the image; and

correcting frame repetition information for the interpolated image based on detecting a global TLV.

11. An apparatus for correcting temporal luminance variation (TLV) artifacts during frame rate conversion, the apparatus comprising:

a TLV detector configured to detect TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation;

a TLV localizer configured to determine the location of TLV artifacts in an interpolated image between the first image and the second image; and

an image modifier configured to modify the interpolated image based on the determination.

12. The apparatus of Claim 11, further comprising:

an edges extractor configured to receive a first reduced resolution image of the first image and to determine edges included in the first reduced resolution image;

a TLV characteristics extractor configured to receive the first reduced resolution image and a second reduced resolution image of the second image and to determine TLV characteristics; and

a local-global TLV detector being coupled with the edges extractor and the TLV characteristics extractor, the local-global TLV detector configured to detect local-global TLV based in part on the TLV characteristics,

the TLV localizer being coupled with the local-global TLV detector, the TLV localizer configured to localize TLV artifacts in an interpolated image between the first image and the second image, and

the image modifier being coupled with the TLV localizer, the image modifier configured to modify TLV artifacts in the interpolated image.

13. The apparatus of Claim 12, wherein the edges extractor comprises a chrominance-based edges extractor and a luminance-based edges extractor.

14. The apparatus of any one of Claims 12 or 13, wherein the edge information comprises edge information indicating edges included in the first reduced resolution image and comparative edge information indicating whether an edge included in the first reduced resolution image appeared in a reference image.

15. The apparatus of any one of Claims 12 to 14, wherein the edges extractor is configured to determine edges based at least in part on a comparison between a contour strength for a pixel in a selected image and a contour threshold.

16. The apparatus of any one of Claims 11 to 15, wherein the TLV characteristics are determined based on the received first image, the received second image, and an error estimation.

17. The apparatus of any one of Claims 11 to 16, wherein the TLV characteristics include a value indicating a change in a local mean intensity value in a portion of the image.

18. The apparatus of any one of Claims 11 to 17, wherein the TLV characteristics include a value indicating at least one of a block-based local motion estimation error and a zero motion estimation error for the image.

19. The apparatus of any one of Claims 11 to 18, wherein localization of the TLV artifact near a pixel is based on luminance information for the pixel, motion vector information for the pixel, and motion estimation errors for the pixel.

20. The apparatus of any one of Claims 11 to 19, wherein modifying TLV artifacts comprises:

non-linearly interpolating moving objects included in the image; and

correcting frame repetition information for the interpolated image based on detecting a global TLV.

21. A computer readable storage medium comprising instructions executable by a processor of an apparatus, the instructions causing the apparatus to:

detect TLV between a first image and a second image during frame rate conversion based on factors derived from frame rate conversion information, the factors including edge information, TLV characteristics, and motion estimation information;

determine the location of TLV artifacts in an interpolated image between the first image and the second image; and

modify the interpolated image based on the determination.

22. An apparatus for correcting temporal luminance variation (TLV) artifacts during frame rate conversion, the apparatus comprising:

means for TLV detection configured to detect TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation;

means for TLV localization configured to determine the location of TLV artifacts in an interpolated image between the first image and the second image; and

means for modifying an image configured to modify the interpolated image based on the determination.

Description:
TEMPORAL LUMINANCE VARIATION DETECTION

AND CORRECTION FOR HIERARCHICAL LEVEL FRAME RATE

CONVERTER

BACKGROUND

Field

[0001] The present invention relates to the reduction of motion compensation artifacts in standard or high resolution image interpolation and, more specifically, to temporal luminance variation.

Background

[0002] Image interpolation based on motion compensation is a technique for frame rate conversion (FRC). FRC may be utilized to increase the refresh rate in video. In such applications, motion appears more fluid and the high refresh rate may yield images more suitable, for example, for display via LCD panels.

[0003] To create an intermediate image at a given temporal position between two existing images, FRC may include motion compensated (MC) interpolation techniques which are based in turn on motion vector (MV) estimation, referred to for short as motion estimation (ME). Generally, artifacts can be introduced into the image through this process that may degrade the image in the FRC process.

[0004] Therefore, there is a need to provide local/global detection and better correction of artifacts in performing FRC operations.

SUMMARY

[0005] The systems, methods, and devices of the invention each have several aspects, no single one of which is solely responsible for its desirable attributes. Without limiting the scope of this invention as expressed by the claims which follow, some features will now be discussed briefly. After considering this discussion, and particularly after reading the section entitled "Detailed Description," one will understand how the features of this invention provide advantages that include improved detection and correction of temporal luminance variation artifacts during frame rate conversion.

[0006] In one innovative aspect, a method of correcting temporal luminance variation (TLV) artifacts during frame rate conversion is provided. The method includes detecting TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation information. The method further includes determining the location of TLV artifacts in an interpolated image between the first image and the second image. The method also includes modifying the interpolated image based on the determination.

[0007] In another innovative aspect, an apparatus for correcting temporal luminance variation (TLV) artifacts during frame rate conversion is provided. The apparatus includes a TLV detector configured to detect TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation. The apparatus further includes a TLV localizer configured to determine the location of TLV artifacts in an interpolated image between the first image and the second image. The apparatus also includes an image modifier configured to modify the interpolated image based on the determination.

[0008] In a further innovative aspect, a computer readable storage medium comprising instructions executable by a processor of an apparatus is provided. The instructions cause the apparatus to detect TLV between a first image and a second image during frame rate conversion based on factors derived from frame rate conversion information, the factors including edge information, TLV characteristics, and motion estimation information. The instructions cause the apparatus to determine the location of TLV artifacts in an interpolated image between the first image and the second image. The instructions cause the apparatus to modify the interpolated image based on the determination.

[0009] An additional innovative apparatus for correcting temporal luminance variation (TLV) artifacts during frame rate conversion is provided. The apparatus includes means for TLV detection configured to detect TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation. The apparatus includes means for TLV localization configured to determine the location of TLV artifacts in an interpolated image between the first image and the second image. The apparatus also includes means for modifying an image configured to modify the interpolated image based on the determination.

[0010] These and other implementations consistent with the invention are further described below with reference to the following figures.

BRIEF DESCRIPTION OF THE DRAWINGS

[0011] Figure 1 illustrates a functional block diagram of an exemplary video encoding and decoding system.

[0012] Figure 2 illustrates a functional block diagram of an example of a temporal luminance variation (TLV) detection and correction apparatus including a two-level hierarchical FRC.

[0013] Figure 3 illustrates a functional block diagram of an example of a TLV detector.

[0014] Figure 4 illustrates a functional block diagram of an example of an edge extractor.

[0015] Figure 5 illustrates a functional block diagram of an example of a block-based characteristic extractor.

[0016] Figure 6 illustrates a functional block diagram of an example of an image-based TLV detector.

[0017] Figure 7 illustrates a functional block diagram of an example of a block-based local-global detector for image-based decision.

[0018] Figure 8 shows a functional block diagram of an example of a motion vector deviation calculator.

[0019] Figure 9 illustrates a functional block diagram of an example of a local-global TLV detector.

[0020] Figure 10 illustrates a functional block diagram of an example of a TLV correction circuit.

[0021] Figure 11 illustrates a process flow diagram for an example of a method of correcting temporal luminance variation artifacts during frame rate conversion.

[0022] Figure 12 illustrates a functional block diagram of an example apparatus for correcting the temporal luminance variation (TLV) artifacts in an interpolated image between a first image and a second image.

[0023] In the figures, to the extent possible, elements having the same or similar functions have the same designations. Furthermore, in accordance with mathematical notation conventions, in the figures and/or description below bold typeface or arrow notation (e.g., ū) may be used to indicate vectors.

DETAILED DESCRIPTION

[0024] Detecting and correcting visual artifacts in image data can be a resource intensive process. In a frame rate conversion device including multiple hierarchical levels, various information elements may be generated which can be used to identify and correct visual artifacts. One example of an artifact that may be detected and corrected is temporal luminance variation. By leveraging information generated as part of the frame rate conversion process, local (e.g., pixel or block level) TLV and global (e.g., whole image) TLV may be detected and corrected more efficiently than in systems where the detection and correction are not provided the information from the frame rate conversion. Described herein are several characteristics, generated from frame rate conversion information, which may be used to indicate whether TLV is present, where TLV may be located, and how the TLV may be corrected.

[0025] In the following description, specific details are given to provide a thorough understanding of the examples. However, it will be understood by one of ordinary skill in the art that the examples may be practiced without these specific details. For example, electrical components/devices may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, such components, other structures and techniques may be shown in detail to further explain the examples.

[0026] It is also noted that the examples may be described as a process, which is depicted as a flowchart, a flow diagram, a finite state diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, or concurrently, and the process can be repeated. In addition, the order of the operations may be re-arranged. A process is terminated when its operations are completed. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a software function, its termination corresponds to a return of the function to the calling function or the main function.

[0027] Those of skill in the art will understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.

[0028] Various aspects of embodiments within the scope of the appended claims are described below. It should be apparent that the aspects described herein may be embodied in a wide variety of forms and that any specific structure and/or function described herein is merely illustrative. Based on the present disclosure, one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.

[0029] Figure 1 illustrates a functional block diagram of an exemplary video encoding and decoding system. As shown in Figure 1, system 10 includes a source device 12 that may be configured to transmit encoded video to a destination device 16 via a communication channel 15. Source device 12 and destination device 16 may comprise any of a wide range of devices, including mobile devices or generally fixed devices. In some cases, source device 12 and destination device 16 comprise wireless communication devices, such as wireless handsets, so-called cellular or satellite radiotelephones, personal digital assistants (PDAs), mobile media players, or any devices that can communicate video information over a communication channel 15, which may or may not be wireless. However, the techniques of this disclosure, which concern the detection and correction of temporal luminance variation from reference images or video, may be used in many different systems and settings. Figure 1 is merely one example of such a system.

[0030] In the example of Figure 1, source device 12 may include a video source 20, video encoder 22, a modulator/demodulator (modem) 23 and a transmitter 24. Destination device 16 may include a receiver 26, a modem 27, a video decoder 28, and a display device 30. In accordance with this disclosure, video encoder 22 of source device 12 may be configured to encode a sequence of frames of a reference image. The video encoder 22 may be configured to encode additional information associated with the images such as 3D conversion information including a set of parameters that can be applied to each of the video frames of the reference sequence to generate 3D video data. Modem 23 and transmitter 24 may modulate and transmit wireless signals to destination device 16. In this way, source device 12 communicates the encoded reference sequence along with any additional associated information to destination device 16.

[0031] Receiver 26 and modem 27 receive and demodulate wireless signals received from source device 12. Accordingly, video decoder 28 may receive the sequence of frames of the reference image. The video decoder 28 may also receive the additional information which can be used for decoding the reference sequence.

[0032] Source device 12 and destination device 16 are merely examples of such coding devices in which source device 12 generates coded video data for transmission to destination device 16. In some cases, devices 12, 16 may operate in a substantially symmetrical manner such that, each of devices 12, 16 includes video encoding and decoding components. Hence, system 10 may support one-way or two-way video transmission between video devices 12, 16, e.g., for video streaming, video playback, video broadcasting, or video telephony.

[0033] Video source 20 of source device 12 may include a video capture device, such as a video camera, a video archive containing previously captured video, or a video feed from a video content provider. As a further alternative, video source 20 may generate computer graphics-based data as the source video, or a combination of live video, archived video, and computer-generated video. In some cases, if video source 20 is a video camera, source device 12 and destination device 16 may form so-called camera phones or video phones. In each case, the captured, pre-captured or computer-generated video may be encoded by video encoder 22. As part of the encoding process, the video encoder 22 may be configured to implement one or more of the methods described herein, such as temporal luminance detection and/or correction for hierarchical frame rate conversion. The encoded video information may then be modulated by modem 23 according to a communication standard, such as code division multiple access (CDMA) or another communication standard, and transmitted to destination device 16 via transmitter 24. Modem 23 may include various mixers, filters, amplifiers or other components designed for signal modulation. Transmitter 24 may include circuits designed for transmitting data, including amplifiers, filters, and one or more antennas.

[0034] Receiver 26 of destination device 16 may be configured to receive information over channel 15. Modem 27 may be configured to demodulate the information. Again, the video encoding process may implement one or more of the techniques described herein such as temporal luminance detection and/or correction for hierarchical frame rate conversion. The information communicated over channel 15 may include information defined by video encoder 22, which may be used by video decoder 28 consistent with this disclosure. Display device 30 displays the decoded video data to a user, and may comprise any of a variety of display devices such as a cathode ray tube, a liquid crystal display (LCD), a plasma display, an organic light emitting diode (OLED) display, or another type of display device.

[0035] In the example of Figure 1, communication channel 15 may comprise any wireless or wired communication medium, such as a radio frequency (RF) spectrum or one or more physical transmission lines, or any combination of wireless and wired media. Accordingly, modem 23 and transmitter 24 may support many possible wireless protocols, wired protocols or wired and wireless protocols. Communication channel 15 may form part of a packet-based network, such as a local area network (LAN), a wide-area network (WAN), or a global network, such as the Internet, comprising an interconnection of one or more networks. Communication channel 15 generally represents any suitable communication medium, or collection of different communication media, for transmitting video data from source device 12 to destination device 16. Communication channel 15 may include routers, switches, base stations, or any other equipment that may be useful to facilitate communication from source device 12 to destination device 16. The techniques of this disclosure do not necessarily require communication of encoded data from one device to another, and may apply to encoding scenarios without the reciprocal decoding. Also, aspects of this disclosure may apply to decoding scenarios without the reciprocal encoding.

[0036] Video encoder 22 and video decoder 28 may operate consistent with a video compression standard, such as the ITU-T H.264 standard, alternatively described as MPEG-4, Part 10, Advanced Video Coding (AVC). The techniques of this disclosure, however, are not limited to any particular coding standard or extensions thereof. Although not shown in Figure 1, in some aspects, video encoder 22 and video decoder 28 may each be integrated with an audio encoder and decoder, and may include appropriate MUX-DEMUX units, or other hardware and software, to handle encoding of both audio and video in a common data stream or separate data streams. If applicable, MUX-DEMUX units may conform to a multiplexer protocol (e.g., ITU H.223) or other protocols such as the user datagram protocol (UDP).

[0037] Video encoder 22 and video decoder 28 each may be implemented as one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), discrete logic circuitry, software executing on a microprocessor or other platform, hardware, firmware or any combinations thereof. Each of video encoder 22 and video decoder 28 may be included in one or more encoders or decoders, either of which may be integrated as part of a combined encoder/decoder (CODEC) in a respective mobile device, subscriber device, broadcast device, server, or the like.

[0038] A video sequence typically includes a series of video frames. Video encoder 22 and video decoder 28 may operate on video blocks within individual video frames in order to encode and decode the video data. The video blocks may have fixed or varying sizes, and may differ in size according to a specified coding standard. Each video frame may include a series of slices or other independently decodable units. Each slice may include a series of macroblocks, which may be arranged into sub-blocks. As an example, the ITU-T H.264 standard supports intra prediction in various block sizes, such as 16 by 16, 8 by 8, or 4 by 4 for luma components, and 8 by 8 for chroma components, as well as inter prediction in various block sizes, such as 16 by 16, 16 by 8, 8 by 16, 8 by 8, 8 by 4, 4 by 8 and 4 by 4 for luma components and corresponding scaled sizes for chroma components. Video blocks may comprise blocks of pixel data, or blocks of transformation coefficients, e.g., following a transformation process such as discrete cosine transform or a conceptually similar transformation process.

[0039] Macroblocks or other video blocks may be grouped into decodable units such as slices, frames or other independent units. Each slice may be an independently decodable unit of a video frame. Alternatively, frames themselves may be decodable units, or other portions of a frame may be defined as decodable units. In this disclosure, the term "coded unit" refers to any independently decodable unit of a video frame such as an entire frame, a slice of a frame, a group of pictures (GOP), or another independently decodable unit defined according to the coding techniques used.

[0040] In some implementations, motion estimation (ME) may be included as part of the video processing. For example, motion estimation may be achieved as described in U.S. Patent Application No. 12/338,960 entitled "Motion Estimation with an Adaptive Search Range," which is herein incorporated by reference in its entirety. Estimating the motion vector (MV) may be based on block matching techniques, which divide an existing image into many blocks and then estimate, for each block, its own displacement on the second existing image. Some implementations include a block matching technique which is based at least in part on the metric Sum of the Absolute Differences (SAD) between the given block in one image and a displaced block in the other image. ME may be performed by selecting the displacement giving the minimum SAD.

[0041] As an example, the forward estimated motion vector (MEF) can indicate the displacement of a given block in an image (I_{n-1}) with respect to a matched block in a subsequent image (I_n). The forward ME may generate a minimized sum of absolute differences error (SADF) for the full-search estimated MEF. Similarly, the backward estimated motion vector (MEB) can indicate the displacement of a given block in the image I_n with respect to a matched block in the image I_{n-1}. MEB can also minimize the sum of absolute differences error SADB. Mathematically, SADF, SADB, and their forward and backward MVs (FMV and BMV, respectively) can be defined as follows:

SADF(u) = min_k (1/n(R)) Σ_{y∈R} | I_{n-1}(uW + y) − I_n(uW + y + k) |    (1)

SADB(u) = min_k (1/n(R)) Σ_{y∈R} | I_n(uW + y) − I_{n-1}(uW + y + k) |    (2)

MEF(u) = k* minimizing SADF in a given search zone    (3)

MEB(u) = k* minimizing SADB in a given search zone.    (4)

In Equations (1)-(4), R denotes the current window region, n(R) is the region size, y are the coordinates of pixels within the window R, and k is a displacement in a given search zone. The SADF(u), the SADB(u), and the corresponding MVs can be associated to a small window W centralized inside the reference region R. In some implementations, the window W size is W×W = 8×8. The variable u represents the coordinates of the current block W; u = Integer[x/W] for a pixel of coordinates x in the current block.

[0042] Some implementations may also include a zero motion estimation error (SAD0(u)), which may be determined based upon the following normalized sum of the absolute difference between two images:

SAD0(u) = (1/n(R)) Σ_{y∈R} | I_{n-1}(uW + y) − I_n(uW + y) |    (5)

The processing of supposedly still pictures may be based at least in part on the metric SAD0.
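For illustration only, the block matching of Equations (1), (3), and (5) can be sketched in NumPy as below. This is a minimal sketch, not the patent's implementation: it assumes the W×W block itself serves as the reference region R, an exhaustive ±8 search zone, and grayscale inputs; the backward quantities of Equations (2) and (4) follow by swapping the two images.

```python
import numpy as np

def sad(block_a, block_b):
    # Normalized sum of absolute differences between two equal-size blocks.
    return np.abs(block_a.astype(np.int32) - block_b.astype(np.int32)).mean()

def forward_block_me(i_prev, i_cur, w=8, search=8):
    """Exhaustive forward block ME per Equations (1), (3), and (5).

    i_prev, i_cur: 2-D luminance arrays with sides divisible by w.
    Returns per-block SADF, MEF displacements (dy, dx), and SAD0.
    """
    rows, cols = i_prev.shape
    bh, bw = rows // w, cols // w
    sadf = np.zeros((bh, bw))
    sad0 = np.zeros((bh, bw))
    mef = np.zeros((bh, bw, 2), dtype=np.int32)
    for by in range(bh):
        for bx in range(bw):
            y0, x0 = by * w, bx * w
            ref = i_prev[y0:y0 + w, x0:x0 + w]
            sad0[by, bx] = sad(ref, i_cur[y0:y0 + w, x0:x0 + w])  # Eq. (5)
            best = np.inf
            for dy in range(-search, search + 1):      # displacement k over
                for dx in range(-search, search + 1):  # the given search zone
                    yy, xx = y0 + dy, x0 + dx
                    if 0 <= yy <= rows - w and 0 <= xx <= cols - w:
                        err = sad(ref, i_cur[yy:yy + w, xx:xx + w])  # Eq. (1)
                        if err < best:
                            best = err
                            mef[by, bx] = (dy, dx)     # Eq. (3): keep k*
            sadf[by, bx] = best
    return sadf, mef, sad0
```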

[0043] In the above related references, the MV can be estimated more precisely at pixel resolution instead of block-based resolution. However, similar to Equations (1), (2), or (5), the estimate may be based on the absolute difference between the two respective intensities of the past image I_{n-1} and the present image I_n.

[0044] The ME techniques based on the absolute difference work well for some images. In the case where the displacement model for an image is no longer valid, the ME may be hindered by occlusion or temporal luminance variation (TLV) between two images. Systems and methods to address the occlusion aspect and its associated halo effects may be included with the systems and methods described herein.

[0045] The TLV between the two images can have multiple causes. For example, live arena or stadium events, like a hockey game or concert, can be rapidly illuminated by various individual camera flashes from spectators. As another example, when an object is moving under a tree, the light on the object can change partly or totally as a function of the leaves' shadows. Sunrise and sunset scenes captured with long camera exposure are also examples of local or global temporal luminance transitions. Moreover, these local variations can create spurious contours at their borders.

[0046] One method of compensating for varying lighting conditions, for a temporal DPCM compression, includes equalizing the mean intensity value of the blocks in the ME equations. However, when the lighting is normal or not temporally varying, both the mean intensity and the high frequency components of a block may be two influential factors in ME. In such implementations, removing the mean intensity implies that the ME is based only on high frequency or edge information. Furthermore, the mean intensity calculation on various blocks can introduce additional cost (e.g., computation, time, processor) to perform the motion estimation. Additionally, in some DPCM implementations, the temporal prediction error is coded and transmitted, and is re-introduced via feedback rather than incorporated into the processing stream as in frame rate conversion. In the FRC situation, a temporal ME error will result in directly visible artifacts in the output pictures.

[0047] Some implementations consider a specific case of varying luminance of a frame-wide change occurring to the pixels. Such a frame-wide change may include a scene change or a fade-in fade-out of two different scenes stitched together temporally. The detection method may be based on the histogram of the inter-frame difference, and the correction is a frame-wide mixing between the results of MC interpolation and temporal interpolation or even frame repetition. The mixing level may then be a function of the detected reliability provided from the image difference histogram.

[0048] In the case of frame-wide change, the histogram of the image difference may generally be well-defined, including a dominating peak. However, in the general case of temporal luminance variation, the previous and present scenes can be very similar except for some local changes. The suggested method of image difference histogram may be inadequate to detect such fine variations within a frame. Moreover, regarding the correction, if the luminance variation is constrained to some small locations, the frame-wide mixing can result in visible motion judder in the whole image when MVs are large and, inversely, some blur in the resulting image when MVs are small.

[0049] Some implementations may include hierarchical frame rate conversion for use in, for example, high-resolution television. Such approaches may address temporal luminance variation by using two existing images and a temporary interpolated image. However, the temporary interpolated image may not be adapted for improved temporal luminance variation detection.

[0050] In an image interpolation, the exhaustive ME can consume a high number of operations and calculations when operating directly on the original resolution images, such as ultra definition (UD) or high definition (HD) images. Such image processors may not have sufficient bandwidth or capacity to fully implement the operation. To reduce the complexity, various ME techniques may be included in a video encoding or decoding device. One such technique is a hierarchical approach which may be used not only for high definition images, but is also suited for use with low or medium resolution image formats such as QCIF, CIF, or standard definition (SDTV).

[0051] The hierarchical process may include a tree decomposition of an original image into many resolution-level sub-images. The original resolution image, or first-level image, after a first filtering and decimation may yield a lower resolution sub-image, determining the second-level image. The filtering and decomposition process continues for the obtained sub-image to provide a third-level sub-image, and so on in the pyramidal hierarchical decomposition.
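A minimal sketch of this pyramidal decomposition, assuming a simple 2x2 averaging filter with factor-2 decimation at each level (the disclosure does not fix a particular filter):

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Tree decomposition of an image into successively lower resolution
    sub-images: level 1 is the original, each further level is filtered
    and decimated from the previous one."""
    pyramid = [image.astype(np.float32)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # even crop
        img = img[:h, :w]
        # 2x2 averaging filter and decimation in a single step.
        reduced = 0.25 * (img[0::2, 0::2] + img[1::2, 0::2] +
                          img[0::2, 1::2] + img[1::2, 1::2])
        pyramid.append(reduced)
    return pyramid
```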

[0052] Without loss of any generality, the original input images may be considered HDTV images.

[0053] In some embodiments, a method of detecting temporal luminance variation between a first image and a second image includes reducing the first image and the second image to form a first lower resolution image and a second lower resolution image; extracting new edges from the first lower resolution image and still edges from a frame delay of the detected edges image; extracting TLV characteristics based on the block-based intensities and the block-based zero-motion estimation error; synthesizing the block-based TLV characteristics and the detected edges to form a global detection of temporal luminance variation possibility in an image; detecting local and global corrections in an image from block-based motion vectors, block-based motion estimation errors (SAD), and a detected characteristic; detecting a flashing scene from another detected characteristic; delaying the image-based decision to timely match the current image; localizing TLV artifacts from the pixel-based intensities of the first lower resolution image and the second lower resolution image, the pixel-based motion estimation weighted errors, the pixel-based motion vectors, and the delayed image-based decisions; correcting local artifacts by a non-linear temporal interpolation based on the located TLV; and correcting global artifacts by frame repetition based on the global correction decision or the detected flashing scene.

[0054] Figure 2 illustrates a functional block diagram of an example of a temporal luminance variation (TLV) detection and correction apparatus including a two-level hierarchical FRC. Figure 2 includes a frame rate converter (FRC) with two hierarchical levels for the original HDTV inputs and a TLV detection and correction unit. The apparatus shown may be included in, for example, the video encoder 22 or the video decoder 28 as shown in Figure 1.

[0055] The FRC may include two-dimensional filters and decimations 201, forward and backward motion estimators (ME) and motion vector filtering (MVF) 202, frame delays 203, motion vector selectors (MVS) 204, motion vector post filtering for halo reduction (MVPF) 205, and high definition motion estimation (HDME) and motion compensated interpolation (HDMC) 206. The frame delays 203 may be included to divide as equitably as possible the timing required for ME and for HDMC operations.

[0056] In short, the U×U filters and decimators 201 provide, based on the image inputs I_{n+1}(x_HD) and I_n(x_HD), two corresponding reduced resolution images RI_{n+1}(x) and RI_n(x) which are provided to a TLV detector 207. RI_{n+1} and RI_n denote respectively the future and the present reduced images. Moreover, if the notation x_HD represents the current pixel coordinates (column, row) in the high resolution image, then x is the pixel coordinates in the reduced resolution image. The relation between x and x_HD is given by x = Integer[x_HD/U] = [x_HD/U]. The ME and MVF 202 provide block-based information such as SAD0(u), the forward and backward MVs MEF(u), MEB(u), and their associated difference metrics SADF(u), SADB(u). For simplicity, the current block coordinates notation u can sometimes be omitted in this disclosure if there is no ambiguity. The frame delays 203 may provide two reduced resolution images RI_n(x) and RI_{n-1}(x) respectively for the present and the past reduced images. The motion vector selector (MVS) 204 may generate the pixel-based weighted motion estimation errors WERMF(x) and WERMB(x). The halo reduction MVPF 205 may be configured to provide the pixel-based forward and backward motion vectors F_s(x) and B_s(x) between the images RI_n(x) and RI_{n-1}(x) of reduced resolution. The HDME and HDMC 206 may be configured to generate an originally high resolution motion compensated image I_{mc,α}(x_HD) in which the subscript alpha, α, denotes the relative distance of the interpolated image I_{mc,α}(x_HD) with respect to the position of the present image I_n(x_HD). Alpha equal to 1 may correspond to the past image I_{n-1}(x_HD) position. The interpolated image I_{mc,α}(x_HD) may be provided to the TLV correction circuit 208.

[0057] As illustrated by Figure 2, the local and global TLV detector 207 obtains the cited signals as inputs and may generate the global repeat frame correction (RFC_n) and the pixel-based TLV local correction (TLVLOC_n(x)). The TLV correction circuit 208 may obtain RFC_n and TLVLOC_n(x), convert them to the original (HD) resolution, and modify the image by appropriately combining the three images I_n(x_HD), I_{n-1}(x_HD), and I_{mc,α}(x_HD) to finally provide I_{mc2,α}(x_HD) at the interpolation position alpha, α.
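As a rough sketch of the combining logic in the TLV correction circuit 208, under two simplifying assumptions not stated in this excerpt: the global correction is modeled as repetition of the nearest source image, and the local non-linear temporal interpolation is approximated by a linear blend.

```python
import numpy as np

def tlv_correct(i_n_hd, i_nm1_hd, i_mc_hd, tlvloc_hd, rfc_n, alpha):
    """Combine I_n, I_{n-1}, and I_{mc,alpha} into the corrected output.

    tlvloc_hd: binary local-TLV map, already converted to HD resolution.
    rfc_n: global repeat-frame-correction flag.
    alpha: interpolation position (1 corresponds to the past image).
    """
    if rfc_n:
        # Global TLV: fall back to frame repetition (assumed nearest image).
        return i_nm1_hd if alpha >= 0.5 else i_n_hd
    # Local TLV: replace the MC result with a temporal interpolation
    # (linear here; the disclosure describes a non-linear interpolation).
    temporal = alpha * i_nm1_hd + (1.0 - alpha) * i_n_hd
    return np.where(tlvloc_hd, temporal, i_mc_hd)
```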

[0058] The detection of temporal luminance variation occurring between two consecutive input images can be based on one or more of the following characteristics: (a) the local mean intensity value in a flat region can change, (b) the block-based local motion estimation errors SAD, as well as the zero motion estimation error SAD0, may become higher in a large part of the picture, (c) the pixel-based motion estimation errors WERMF, WERMB may also be higher in a large part of the picture, (d) the MV field may no longer be smooth but may contain erratic or irregular variations, and (e) at least for the pixel-based case, the motion vector lengths can become large in regions of temporal luminance variation.

[0059] Another characteristic which may be considered during TLV detection is the contours in the images. Even for a slowly moving or still picture, the number of contour pixels in the present image can notably exceed the number of still contour pixels in the presence of TLV. However, to differentiate a moving picture from the TLV effect, the number of present-image contour pixels should be limited to some factor of the still contour number. Beyond that limit, the case of normal moving pictures may be assumed and can be handled by the FRC. This consideration may be useful for reducing false detections.
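This consideration amounts to a simple counting test on the binary contour maps; a sketch, with the limiting factor chosen arbitrarily for illustration:

```python
import numpy as np

def contour_evidence_of_tlv(new_contours, still_contours, factor=4.0):
    """Return True when present-image contours notably exceed still contours
    yet stay below factor * still contours; beyond that limit, normal motion
    is assumed and left to the FRC."""
    n_new = int(np.count_nonzero(new_contours))
    n_still = int(np.count_nonzero(still_contours))
    return n_still < n_new <= factor * max(n_still, 1)
```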

[0060] The following table summarizes the detection parameters considered in some implementations.

Table 1 - TLV detection parameters
(a) Change in the local mean intensity value in a flat region
(b) Block-based local motion estimation errors SAD and zero motion estimation error SAD0 higher in a large part of the picture
(c) Pixel-based motion estimation errors WERMF, WERMB higher in a large part of the picture
(d) Erratic or irregular variations in the MV field
(e) Large motion vector lengths in regions of temporal luminance variation (pixel-based case)
(f) Present-image contour pixels notably exceeding, but limited to a factor of, the still contour pixels

[0061] Figure 3 illustrates a functional block diagram of an example of a TLV detector. The TLV detector 207 may be configured to detect TLV artifacts for images of reduced resolution. The TLV detector 207 shown in Figure 3 includes two parts: a first part configured to perform block-based analysis; and a second part configured to perform pixel-based analysis.

[0062] A block-based analyzer 310 may include an edge extractor 301, a block characteristic extractor 302, an image-based TLV detector 303, and a local-global detector 304. The block-based analyzer 310 may be configured to provide a frame-based global detection of whether TLV occurs between the two consecutive input images. In some implementations, if global TLV is detected, the TLV may affect the whole (interpolated) frame or only some parts of the interpolated frame. The pixel-based analyzer 305 may be configured to determine the locations of TLV artifacts in a picture.

[0063] The reduced resolution color image RI_{n+1}(x) and the monochrome luminance RY_{n+1}(x) may be provided as inputs to the edge extractor 301. The color image RI_{n+1}(x) may include values indicating the three standard colors (R, G, B). The processing of color animated clips may be based at least in part on the color image. Combining both the color image RI_{n+1}(x) and the luminance image RY_{n+1}(x) can provide a robust contour detection. The edge extractor 301 outputs are NCC_{n+1}([x/2]), SCC_{n+1}([x/2]), NYC_{n+1}([x/2]), and SYC_{n+1}([x/2]), respectively for new color contour, still color contour, new luminance contour, and still luminance contour. As used herein, the subscript n+1 indicates the future frame timing n+1. The variable argument [x/2], resulting from a 2x2 grouping and down sampling, will be explained later in the description of Figure 4. The above four edge extractor 301 outputs may be provided to the image-based TLV detector 303.

[0064] The reduced resolution luminance future image RY_{n+1}(x), the previous version of the image RY_n(x), and the zero-motion estimation error SAD0(u) may be provided as inputs to the block characteristic extractor 302. The block characteristic extractor 302 may be a block-based characteristic extractor. In some implementations, the block characteristic extractor 302 may be configured to extract more than one characteristic. The block characteristic extractor 302 shown in Figure 3 is configured to provide four TLV characteristics LBC0(u), LBC1(u), LBC2(u), and LBF(u), where LBF represents local block-based flat zones detected in the inter-image difference, and LBC denotes a local block-based TLV characteristic. As shown, the characteristics are provided to the image-based TLV detector 303 and the local-global detector 304. In some implementations, the block characteristic extractor 302 may be configured to provide the characteristics to other elements such as the TLV localizer 305.

[0065] The image-based TLV detector 303 may be configured to determine whether the whole image includes TLV artifacts. The determination (ITLVD_n) may be based at least in part on information provided by the edge extractor 301. For example, as shown in Figure 3, NCC_{n+1}([x/2]), SCC_{n+1}([x/2]), NYC_{n+1}([x/2]), and SYC_{n+1}([x/2]) are provided to the image-based TLV detector 303.

[0066] The image-based TLV detector 303 may be configured to base the determination on information received from the block characteristic extractor 302. As shown in Figure 3, two block-based characteristic maps LBC1(u) and LBF(u) are provided to the image-based TLV detector 303.

[0067] The image-based determination generated by the image-based TLV detector 303 may be provided as an enable signal. This signal may be provided to the pixel-based TLV localizer 305. The image-based TLV detector 303 may also be configured to provide an absolute size TLV detection signal (ATLVD_n). As shown in Figure 3, the absolute size TLV detection signal may be provided to the local-global detector 304.

[0068] The local-global TLV detector 304 may be configured to assess the TLV for the present image as well as the TLV relative to the previous image. As shown in Figure 3, the local-global TLV detector 304 may provide two output signals. The first output signal, RFC_n, indicates global TLV artifacts affecting the whole image. This indication may be used to signify that repeat frame correction may be suitable for the image. The second output signal, LocTG_n, indicates whether, between the present and the previous images, the TLV artifacts are local or global.

[0069] The local-global TLV detector 304 may be configured to generate the output signals based at least in part on the block characteristic maps LBC0(u), LBC2(u) provided by the block characteristic extractor 302. The local-global TLV detector 304 may be configured to generate the output signals based at least in part on the block-based forward and backward motion vector estimations MEF(u) and MEB(u), as well as the associated forward and backward motion estimation errors SADF(u), SADB(u), provided by the motion estimator and vector filter (ME, MVF) 202. As discussed above, the local-global TLV detector 304 may also base the determination on the ATLVD_n signal provided by the image-based TLV detector 303. In some implementations, LocTG_n and ITLVD_n may be provided for pixel-based TLV analysis. As discussed, by detecting TLV at the pixel level, the number of false TLV detections may be reduced as compared with whole image or block-based TLV detection.

[0070] Accordingly, as shown in Figure 3, ITLVD_n and LocTG_n may be provided to the pixel-based TLV localizer 305. The pixel-based TLV localizer 305 may also obtain other pixel-based information such as the images RY_n(x), RY_{n-1}(x), the forward and backward motion vectors F_s(x), B_s(x), as well as their associated ME errors WERMF(x) and WERMB(x). Based on this information, the pixel-based TLV localizer 305 may locate the TLV artifacts and generate a binary indication map TLVLOC(x) including the location information. The TLVLOC(x) map may be provided to the TLV correction circuit 208 for final local TLV correction.

[0071] Figure 4 illustrates a functional block diagram of an example of an edge extractor. The edge extractor 301 may be configured to detect edges based on obtained inputs. As shown in Figure 4, the inputs may include the luminance image RY_{n+1}(x) and the color image RI_{n+1}(x), which includes values indicating the three standard colors (R, G, B). As illustrated in Figure 4, the edge extractor 301 can include multiple branches for edge extraction. As shown, the edge extractor 301 includes a chrominance-based edge extractor 430 and a luminance-based edge extractor 460.

[0072] The chrominance-based edge extractor 430 may include a color selector 401. The color selector 401 may be configured to combine the three color images (R, G, and B) into one monochrome image CS(x). The color selector 401 may be implemented in several ways depending on the subsequent algorithms. In some implementations, the color selector 401 may be configured to select colors based on 8-bit video. In such implementations, the color selector 401 may be configured to determine the monochrome image CS(x) based at least in part on Equation 6.

CS(x) = max(R(x), G(x), B(x), 238 − min(R(x), G(x), B(x)))
    if (R(x) > Th) or (R(x) < Tl) or (G(x) > Th) or (G(x) < Tl) or (B(x) > Th) or (B(x) < Tl)    (6)

where Th and Tl are configurable high and low threshold values.

In some implementations, the color selector 401 may be configured to determine the monochrome image CS(x) based at least in part on Equation 7.

CS(x) = max(R(x), G(x), B(x), Y(x))    (7)

The selected image signal CS_{n+1}(x) for the input image frame (n+1) may be provided to one or more Sobel contour extractors. As shown in Figure 4, the chrominance-based edge extractor 430 includes two Sobel contour extractors, a chrominance horizontal Sobel contour extractor 402 and a chrominance vertical Sobel contour extractor 403.
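A sketch of the color selector 401 in its Equation (7) form; since the fallback branch of Equation (6) is not fully recoverable from this text, only the simpler variant is implemented:

```python
import numpy as np

def color_select(r, g, b, y):
    """Equation (7): CS(x) = max(R(x), G(x), B(x), Y(x)), evaluated per pixel
    over full image arrays."""
    return np.maximum.reduce([r, g, b, y])
```

The CS output would then feed the horizontal and vertical Sobel contour extractors 402 and 403, in parallel with the luminance branch.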

[0073] The luminance-based edge extractor 460 may also include one or more Sobel contour extractors. As shown in Figure 4, the luminance-based edge extractor 460 includes two Sobel contour extractors, a luminance horizontal Sobel extractor 404 and a vertical luminance Sobel extractor 405. The luminance horizontal Sobel extractor 404 and the vertical luminance Sobel extractor 405 may be configured to obtain the luminance image signal RY_{n+1}(x) for contour extraction.

[0074] In some implementations, the contour extraction process may be similar for the two signals CS_{n+1}(x) and RY_{n+1}(x). The Sobel contour extractors may be included to reduce the effect of noise in the image on the edge detection. It will be appreciated that other methods of noise reduction may be included without departing from the scope of the disclosure. The horizontal and vertical Sobel contour extractors may reduce the noise based at least in part on Sobel horizontal and vertical masks given respectively as:

Sobel_h = (1/8)·[[-1, -2, -1], [0, 0, 0], [1, 2, 1]],    Sobel_v = (1/8)·[[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]    (8)

The output of each Sobel extractor 402, 403, 404, and 405 may be provided to the absolute value circuits 406, 407, 408, and 409, respectively. The results may be provided to adder units 418 and 419, respectively. The values produced by the adder units 418 and 419 indicate the Sobel contour strengths SA(x) and SB(x) for the chrominance and luminance images, respectively. A pixel x is detected as a contour if its local contour strength SA(x) is greater than some threshold. The decision device is the comparator 410. For the color selected image CS_{n+1}(x), the detected contour map is denoted as SC_{n+1}(x).

[0075] The chrominance-based edge extractor 430 may include a comparator 410. The comparator 410 may be configured to obtain the detected contour strengths SA(x). The comparator 410 may then compare the strength of each pixel x with a threshold value (ThSA). As shown in Figure 4, the value of 13 is used. In some implementations, ThSA may be 3, 20, or 2.1. As shown, if the SA value for pixel x is greater than the threshold ThSA, the pixel x may be identified as a contour. In some implementations, a second threshold may be included to provide a range for pixel values that may be identified as a contour. In such implementations, a pixel x may be identified as a contour if the strength value falls within the specified contour strength threshold range.
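Assembling the Sobel masks of Equation (8), the absolute value circuits 406-409, the adders 418/419, and the comparator 410, a binary contour map might be produced as sketched below. The edge padding and the ThSA value of 13 are illustrative assumptions taken from the example, not fixed choices.

```python
import numpy as np

SOBEL_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], np.float32) / 8.0
SOBEL_V = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], np.float32) / 8.0

def filt3x3(img, kernel):
    # Minimal 'same'-size 3x3 filtering with edge padding (sign conventions
    # are irrelevant here because absolute values are taken afterwards).
    p = np.pad(img.astype(np.float32), 1, mode="edge")
    out = np.zeros(img.shape, dtype=np.float32)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * p[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

def contour_map(img, threshold=13.0):
    """Contour strength = |horizontal Sobel| + |vertical Sobel|; a pixel is a
    contour when the strength exceeds the threshold (comparator 410)."""
    strength = np.abs(filt3x3(img, SOBEL_H)) + np.abs(filt3x3(img, SOBEL_V))
    return strength > threshold
```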

[0076] The comparator 410 may provide the pixel identification information via a binary contour map SC_{n+1}(x). The binary contour map may be provided in the dimension of the reduced images. In some implementations, the given dimension may be reduced. The chrominance-based edge extractor 430 may include a reduction unit 412 to reduce the binary contour map. For example, the reduction unit 412 may be configured to down sample by a factor, such as 2x2. In some implementations, the reduction may increase processing efficiency. Since the contour map in Figure 4 is binary, the reduction unit 412 may be configured to perform "or 2x2 and decimation 2x2" as shown in Equation 9.

Out([x/2]) = ∪_{p=0..1} ∪_{q=0..1} In([c/2]·2 + p, [r/2]·2 + q)    (9)

where x = (c, r) indicates the coordinate values of the current pixel.
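Because the contour map is binary, Equation (9) reduces to an OR over each 2x2 neighborhood followed by 2x2 decimation, e.g.:

```python
import numpy as np

def or_decimate_2x2(binary_map):
    """Equation (9): OR of the four pixels in each 2x2 neighborhood, producing
    a half-resolution binary map."""
    h = binary_map.shape[0] // 2 * 2
    w = binary_map.shape[1] // 2 * 2
    m = binary_map[:h, :w].astype(bool)
    return m[0::2, 0::2] | m[1::2, 0::2] | m[0::2, 1::2] | m[1::2, 1::2]
```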

[0077] The reduction unit 412 may generate an output NSC_{n+1}([x/2]) representing the reduced new Sobel contour map for the selected color image (n+1). To detect the still contour, the map NSC_{n+1}([x/2]) may be delayed one frame by the frame delayer 414, which provides the delayed output signal NSC_n([x/2]). The signal NSC_n([x/2]) may represent the Sobel contour map of the previous image. Combining the new contour map and the previous contour map via an AND gate 416 yields SSC_{n+1}([x/2]), representing the still contour map, pixel by pixel but in half resolution, in both the image RC_{n+1}(x) and the image RC_n(x). Accordingly, the edges detected in the new Sobel contour map represent edges which are included in the current image (e.g., new edges). The edges identified in the still contour map represent edges included in the current image that were also included in the previous image (e.g., still edges).
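The new/still split described in this paragraph then needs only a one-frame delay and a pixel-wise AND; a minimal sketch, where the two arguments would be the or_decimate_2x2 outputs for frames n+1 and n:

```python
def still_contour_map(nsc_next, nsc_cur):
    """AND gate 416 (or 417 for luminance): a 'still' contour pixel is one
    present in both the current reduced contour map and its one-frame-delayed
    version."""
    return nsc_next & nsc_cur
```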

[0078] Returning to the luminance-based edge extractor 460, the luminance-based edge extractor 460 may also include a comparator 411. The comparator 411 may be configured to obtain the detected contour strengths SB(x). The comparator 411 may then compare the strength of each pixel x with a threshold value (ThSB). As shown in Figure 4, the value of 12 is used. In some implementations, ThSB may be 4, 19, or 1.1. The thresholds ThSA and ThSB are generally different. However, in some implementations, ThSB may be the same as ThSA. Furthermore, one or both thresholds may be adaptive. For example, a threshold can be based on a color selection technique included in the system.

[0079] As shown, if the SB value for pixel x is greater than the threshold ThSB, the pixel x may be identified as a contour. In some implementations, a second threshold may be included to provide a range for pixel values that may be identified as a contour. In such implementations, a pixel x may be identified as a contour if the strength value falls within the specified contour strength threshold range.

[0080] The comparator 411 may provide the pixel identification information via a binary contour map SY_{n+1}(x). The luminance-based edge extractor 460 may include a reduction unit 413 which is similarly configured as the reduction unit 412. For example, the reduction unit 413 may be configured to down sample by a factor, such as 2x2. Since the contour map in Figure 4 is binary, the reduction unit 413 may be configured to perform "or 2x2 and decimation 2x2" as shown above in Equation 9.

[0081] The reduction unit 413 may generate the contour map NSY_{n+1}([x/2]) for the new contours of the luminance image. Similar to the chrominance-based edge extractor 430, the luminance-based edge extractor 460 may include a frame delayer 415. To detect the still contour, the map NSY_{n+1}([x/2]) may be delayed one frame by the frame delayer 415, which provides the delayed output signal NSY_n([x/2]). The signal NSY_n([x/2]) may represent the Sobel contour map of the previous image. Combining the new contour map and the previous contour map via an AND gate 417 yields SSY_{n+1}([x/2]), representing the still contour map, pixel by pixel but in half resolution, in both the image RY_{n+1}(x) and the image RY_n(x).

[0082] Figure 5 illustrates a functional block diagram of an example of a block-based characteristic extractor. The block-based characteristic extractor 302 may be configured to extract characteristics based on a TLV or temporal gradient (TG) of luminance. The extraction process may be based on two parameters, namely the luminance input images RY_{n+1}(x), RY_n(x) and the zero-motion estimation error SAD0(u). These parameters partially indicate the characteristics (a) and (b) in Table 1 above.

[0083] For block-based resolution, the block-based characteristic extractor 302 may include two reduction units, reduction unit 503 and reduction unit 504. The reduction unit 503 may be configured to filter and/or down sample the image RY n ( x ). For example, the reduction unit 503 may be configured to filter and down sample the image based at least in part on the block size factor WxW, where W is the block window dimension. In some implementations, W may be 8. The reduction unit 504 may be similarly configured to filter and/or down sample the image RY n+1 ( x ). The respective outputs of reduction unit 503 and reduction unit 504 are denoted as BY n ( u ) and BY n+1 ( u ). By this notation, BY designates block-based luminance.

[0084] The block-based characteristic extractor 302 may include an adder 519 configured to find the difference between the two image signals. The difference may indicate a temporal luminance variation between the images. The block-based characteristic extractor 302 may include additional filters to process the difference value. As shown in Figure 5, a low-pass (LP) filter 505 and a high-pass (HP) filter 506 are included. The low-pass filter 505 may be configured to perform a 3x3 low-pass filtering of the input signal. The high-pass filter 506 may be configured to perform a complementary (e.g., 3x3) high-pass filtering of the input signal. The utilized LP and HP impulse responses may be respectively given by Equation 10.

[0085] The low-pass filter 505 and high-pass filter 506 outputs may be provided to the absolute value circuits 507, 508, respectively. The value generated by absolute value circuit 507 may be provided to a mean low-pass (MeLP) filter 509. The value generated by the absolute value circuit 508 may be provided to a second mean low-pass filter 510. In some implementations, mean low-pass filter 509 and the second mean low-pass filter 510 may be configured to apply an impulse response as given, for example, by Equation 11.

In some implementations, this may help reduce compression artifacts such as the so-called Gibbs effect. It will be appreciated that other arrangements may be provided to achieve a similar result.

[0086] The mean low-pass filter 509 may be configured to generate an output representing the local mean luminance variation between two input images. This output may be provided to a comparator (CMP) 511. The comparator 511 may be configured to detect a local low frequency luminance variation. In the implementation shown, the comparator 511 may be configured to identify a local low frequency luminance variation based on the mean low-pass filter 509 output value as compared with a threshold value (LT). As shown in Figure 5, the determination is based on whether the output value exceeds the threshold LT. In some implementations, two values may be used to provide a range of threshold tolerance. Furthermore, the threshold may be predetermined or adaptively generated (e.g., based on a characteristic of the image or device settings). As shown in Figure 5, the threshold (LT) value is 40. In some implementations, LT may be 10, 60, or 102.2.

[0087] Similarly, the second mean low-pass filter 510 may be configured to generate an output representing the local mean high frequency luminance variation. This output value may be provided to a second comparator (CMP) 512. The second comparator 512 may be configured to detect a flat zone. The detection may be based at least in part on a comparison of the variation to a second threshold (HT). In the example shown in Figure 5, if the variation value is less than the second threshold HT, a flat zone may be detected. As above, in some implementations, two values may be used to provide a range of threshold tolerance. Furthermore, the second threshold (HT) may be predetermined or adaptively generated (e.g., based on a characteristic of the image, or device settings). As shown in Figure 5, the threshold (HT) value is 120. In some implementations, HT may be 110, 160, or 123.4.
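The impulse responses of Equations 10 and 11 are not reproduced in this excerpt, so the Python sketch below substitutes 3x3 box filters for the LP and MeLP responses, forming the HP path as the complement of the LP path. This preserves the structure of the Figure 5 path (difference, LP/HP split, rectification, local averaging, thresholding) without claiming the exact coefficients; the names and threshold defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def tlv_block_characteristics(by_prev, by_curr, lt=40.0, ht=120.0):
    """Sketch of the Figure 5 path from the block-based luminances BY(n)
    and BY(n+1) to the LLV (low-frequency variation) and LBF (flat zone)
    binary maps.  Box kernels stand in for Equations 10-11."""
    diff = by_curr.astype(float) - by_prev.astype(float)   # adder 519
    lp = uniform_filter(diff, size=3)         # LP filter 505 (assumed box)
    hp = diff - lp                            # HP filter 506 (complementary)
    mean_lp = uniform_filter(np.abs(lp), size=3)   # |.| 507 + MeLP 509
    mean_hp = uniform_filter(np.abs(hp), size=3)   # |.| 508 + MeLP 510
    llv = mean_lp > lt    # comparator 511: local LF luminance variation
    lbf = mean_hp < ht    # comparator 512: flat zone (little HF activity)
    return llv, lbf
```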

[0088] The block-based characteristic extractor 302 may include a first add and remove (AR) context-based binary filter 513 and a second add and remove (AR) context-based binary filter 514. The first AR context-based binary filter 513 may be configured to consolidate the local low frequency luminance variation provided by the comparator 511. As shown in Figure 5, the AR context-based binary filter 513 may be a 3x3 filter. The second AR context-based binary filter 514 may be configured to consolidate the flat zone detection provided by the second comparator 512. As shown in Figure 5, the second AR context-based binary filter 514 may be a 3x3 filter.

[0089] The first AR context-based binary filter 513 and/or the second AR context-based binary filter 514 may be implemented in a variety of ways. In one implementation, a binary filter may be configured as an AR context-based filter if the filter output s(c,r) is written as:

s(c,r) = 1, if ∑∑ in i,j (c,r) ≥ Threshold B
       = 0, if ∑∑ in i,j (c,r) ≤ Threshold S
       = in(c,r), otherwise (12)

In such implementations, the binary filter may be defined on a rectangular window centered at the coordinates (c,r), where in i,j (c,r) is the filter binary input at relative coordinates (i,j) inside the considered window.

Similarly, a binary filter is said to be an add-only filter if Threshold S = 0 as, for example, in Equation 13:

s(c,r) = 1, if ∑∑ in i,j (c,r) ≥ Threshold B
       = in(c,r), otherwise (13)

If Threshold B is not specified in the filter block diagram, it implies the threshold is equal to 1, a preset value.

Finally, a binary filter is said to be a remove-only filter if Threshold B exceeds the window size, as, for example, in Equation 14.
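Under the reconstruction of Equations (12) through (14) above, the AR context-based binary filter admits a compact implementation. A Python sketch follows; the window shape, the border handling (reflection), and the names are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def ar_binary_filter(inp, rows, cols, thr_big, thr_small):
    """AR context-based binary filter of Equation (12): count the set
    pixels in the rows x cols window around each position, force 1 where
    the count >= thr_big, force 0 where it <= thr_small, and pass the
    input through otherwise.  thr_small = 0 gives the add-only filter of
    Equation (13); thr_big > rows * cols gives the remove-only filter of
    Equation (14)."""
    inp = inp.astype(bool)
    count = uniform_filter(inp.astype(float), size=(rows, cols))
    count = np.rint(count * (rows * cols))   # window mean back to window sum
    out = inp.copy()
    out[count >= thr_big] = True
    out[count <= thr_small] = False
    return out
```

For example, consolidating a detection map with a 3x3 filter might read ar_binary_filter(mask, 3, 3, thr_big=5, thr_small=2), where both threshold values are placeholders rather than figures from this disclosure.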

[0090] The AR context-based binary filter 513 may be configured to generate an output signal LLV( u ) representing the consolidated block-based detected local mean temporal luminance variation map. The second AR context-based binary filter 514 may be configured to generate an output signal LBF( u ), which corresponds to the detected map of the local block-based flat zone. The two signals LLV( u ) and LBF( u ) may be combined via an AND gate 516 to generate the first local block-based characteristic LBC0( u ). This signal represents a map of temporal variation zones of the "local mean intensity in flat region." The LBC0( u ) characteristic signal may be provided to the local-global TLV detector 304 for local and/or global TLV decision.

[0091] As shown in Figure 5, the block-based characteristic extractor 302 may also obtain the zero motion estimation error value SAD0( u ). The block-based characteristic extractor 302 may provide this value to a third comparator 501 and a fourth comparator 515. The third comparator 501 may detect high levels of SAD0. As shown in Figure 5, the third comparator 501 may identify SAD0 values between limits defined by two threshold values SP0 and SP1. SP0 and SP1 may be predetermined or adaptively generated (e.g., based on a characteristic of the image or device settings). As shown, SP0 is 15 and SP1 is 260. The detection may be consolidated by a third AR context-based binary filter 502. As shown, the third AR context-based binary filter is a 3x3 filter. In some implementations, the third AR context-based binary filter 502 may have other dimensions such as 2x2, 3x4, or 8x8. The consolidated result may be combined via the AND gate 517 with the signals LLV( u ) and LBF( u ) to constitute another local block-based characteristic map LBC1( u ). The characteristic signal LBC1( u ) and the detected local flat zone signal LBF( u ) may be provided to the image-based TLV detector 303.

[0092] The fourth comparator 515 may be configured to generate a value indicative of a flashing scene. The fourth comparator 515 may generate the value based at least in part on the zero motion estimation error SAD0( u ) as compared with a threshold SP2. In some implementations, the threshold SP2 may be a higher value relative to the value of the threshold SP1. For example, in the implementation shown, SP2 is equal to 120. SP2 may be predetermined or adaptively generated (e.g., based on a characteristic of the image, based on SP1, or based on device settings). Combining the comparison result together with the signals LLV( u ) and LBF( u ), another local block-based characteristic map LBC2( u ) may be generated. This characteristic information LBC2( u ) may be provided to the local-global TLV detector 304.
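The gating just described can be summarized in a few lines. Below is a minimal Python sketch of the AND combinations of Figure 5, taking the LLV( u ) and LBF( u ) maps from the previous sketch and the per-block zero-motion error SAD0( u ); the consolidating AR filters 502, 513 and 514 are omitted, and the function name and defaults (taken from the figure's example values) are illustrative only.

```python
def block_characteristics(llv, lbf, sad0, sp0=15, sp1=260, sp2=120):
    """Sketch of the AND-gate combinations of Figure 5 (consolidating AR
    filters omitted).  llv and lbf are boolean NumPy maps; sad0 is the
    zero-motion estimation error per block."""
    lbc0 = llv & lbf                          # AND gate 516: LBC0 map
    high_sad = (sad0 > sp0) & (sad0 < sp1)    # comparator 501: SAD0 band
    lbc1 = lbc0 & high_sad                    # AND gate 517: LBC1 map
    flashing = sad0 > sp2                     # comparator 515: flashing test
    lbc2 = lbc0 & flashing                    # LBC2 map
    return lbc0, lbc1, lbc2
```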

[0093] Figure 6 illustrates a functional block diagram of an example of an image-based TLV detector. For frame-based decisions, the image-based TLV detector 303 may include frame-based counters coupled with comparators and combinatory logic units. Each of the six input signals of the image-based TLV detector 303, namely NSC n+1 ([ x/2]), SSC n+1 ([ x/2]), NSY n+1 ([ x/2]), SSY n+1 ([ x/2]), LBC1( u ) and LBF( u ), may be provided to one or more frame-based counters. As shown in Figure 6, the image-based TLV detector 303 includes six frame-based counters 601, 602, 603, 604, 605, and 606, each counter corresponding to an input signal. The values generated by the counters may indicate the pixel numbers in the frame (n+1) of each binary input signal or characteristic associated with the corresponding input signal. As shown in Figure 6, the six result value notations are respectively: NCFC n , SCFC n , NYFC n , SYFC n , TGFC n and FlatFC n . At the end of the frame, these results can be frame-based delayed and correlated to the frame (n) for FRC timing synchronization.

[0094] Two parameters shown in Figure 6, NCFC n and SCFC n , correspond respectively to a new color contour frame count and a still color contour frame count. The image-based TLV detector 303 may include a first comparator 607 configured to generate a value based at least in part on the NCFC n and SCFC n values. As shown, the first comparator 607 may be configured to compare the values to detect TLV and to differentiate the case of normal moving pictures which can be handled by the FRC. The comparison may include the determination shown in Equation 15.

SCFC > (a * NCFC) (15) where a is a coefficient value. For example, the value of a may be given as 32/64. In some implementations, the value of a may be a different value or adaptively generated (e.g., based on a characteristic of the image or device settings).

[0095] Similarly, the two frame counts NYFC n and SYFC n corresponding respectively to new luminance contour and still luminance contour may be compared by a similarly configured second comparator 608. The second comparator 608 may be configured to include the determination shown in Equation 16.

SYFC > (b * NYFC) (16)

These two decisions from the first comparator 607 and the second comparator 608 may be combined via an OR gate 613 to generate a TLV condition value based on edge information, identified as EDGE n in Figure 6.

[0096] The parameters for temporal gradient region frame count (TGFC n ) and flat region frame count (FlatFC n ) may be provided to a third comparator 609. The third comparator 609 may be configured to compare the temporal gradient region frame count to the flat region frame count for each frame. As shown, the third comparator 609 is configured to determine whether the temporal gradient region frame count for the frame is greater than the flat region frame count.

[0097] The image-based TLV detector 303 may include a fourth comparator 610. The fourth comparator 610 may be configured to obtain and compare a block-based image size (BISize) and the flat region frame count. The BISize may be an absolute value parameter (e.g., positive value). As shown, the fourth comparator 610 may be configured to determine whether the flat region frame count is greater than the block-based image size. A fifth comparator 611 may be included which is similarly configured as the fourth comparator 610. However, the output of the fifth comparator 611 may be provided for a different determination than the output of the fourth comparator 610.

[0098] A sixth comparator 612 may be included. The sixth comparator 612 may be configured to compare the temporal gradient region frame count with the block-based image size. As shown in Figure 6, the sixth comparator 612 is configured to determine whether the TGFC is greater than the block-based image size.

[0099] In some implementations, if the dimensions of the reduced resolution image RI n ( x ) are M R x N R , the BISize may be expressed as (M R x N R ) / (W x W). The comparison relations implemented by the comparators 609, 610, 611 and 612, between TGFC n , FlatFC n and BISize, may be respectively summarized by the following Equations (17) - (20):

TGFC > (c * FlatFC) (17)

FlatFC > (t1 * BISize) (18)

FlatFC > (t2 * BISize) (19)

TGFC > (d * BISize) (20) where c, t1, t2, and d are constant coefficient values. In the implementation shown, the coefficients b, c, t1, t2 and d are given as b=32/64, c=4/64, t1=59/64, t2=48/64 and d=15/64 respectively.

The relation shown in Equation (17) indicates that the detected temporal gradient zone (TLV zone) should exceed some portion of the detected flat zone. The last three relations, shown in Equations (18), (19) and (20), generate values indicating the sizes of FlatFC and TGFC relative to the block-based image size.

[00100] The results of the third comparator 609 and the fourth comparator 610 (e.g., relations shown in Equations (17) and (18)), may be combined via a first AND gate 614 to provide a relative TLV decision (RTLVD n ). Similarly, the results of the fifth comparator 611 and the sixth comparator 612 (e.g., relations shown in Equations (19) and (20)) may be combined via a second AND gate 615 to provide an absolute TLV decision (ATLVD n ). The decision may be absolute in the sense that the decision may be generated based on the two comparisons with the absolute number of the block-based image size.

[00101] The two decisions RTLVD n and ATLVD n may in turn be combined via a second OR gate 616 to generate CRTR n , a value indicative of TLV characteristics. Combining the information of EDGE n and CRTR n via a third AND gate 617 may yield an image-based TLV decision value ITLVD n . This value may be provided to the pixel-based TLV localizer 305. The ATLVD n decision value may be provided to the local-global TLV detector 304 for further processing.
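Since the image-based decision reduces to the comparisons of Equations (15) through (20) combined by gates 613 through 617, it can be expressed compactly. A minimal Python sketch follows, using the example coefficient values from the text as defaults; the function name and argument conventions are illustrative only.

```python
def image_tlv_decision(ncfc, scfc, nyfc, syfc, tgfc, flatfc, bisize,
                       a=32/64, b=32/64, c=4/64, t1=59/64, t2=48/64, d=15/64):
    """Frame-level TLV decision of Figure 6: Equations (15)-(20) combined
    by OR gate 613, AND gates 614/615, OR gate 616 and AND gate 617."""
    edge = (scfc > a * ncfc) or (syfc > b * nyfc)           # Eqs (15), (16)
    rtlvd = (tgfc > c * flatfc) and (flatfc > t1 * bisize)  # Eqs (17), (18)
    atlvd = (flatfc > t2 * bisize) and (tgfc > d * bisize)  # Eqs (19), (20)
    crtr = rtlvd or atlvd                                   # OR gate 616
    itlvd = edge and crtr                                   # AND gate 617
    return itlvd, atlvd
```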

[00102] Figure 7 illustrates a functional block diagram of an example of a block-based local-global detector for image-based decision. The block-based local-global TLV detector 304 obtains parameters such as forward and backward block-based motion vectors (MEF and MEB), the errors (e.g., SAD) associated with the vectors, and the characteristics LBC0 and LBC2 or the decision signal ATLVD.

[00103] In the implementation shown in Figure 7, the horizontal components of motion vectors MEF and MEB may be grouped together via a first adder 701. The inverter (denoted by a negative sign) applied to MEB before the first adder 701 may be included to produce a result value that indicates the total global motion. In some implementations, forward and backward MVs are generally opposed in direction. As shown, the backward vector is negatively signed; thus, by inverting the backward vector, the first adder 701 result output may represent a global forward and backward MV. In some implementations, the absolute value of each vector may be provided to the first adder 701. The value generated by the first adder 701 may be provided to a motion vector deviation calculator (MVDEV) 708 which provides a metric to measure local MV variation.

[00104] Figure 8 shows a functional block diagram of an example of a motion vector deviation calculator. The motion vector deviation calculators MVDEV 708 and 709 may be implemented similarly to the motion vector deviation calculator 800 shown in Figure 8. The motion vector deviation calculator 800 includes a high pass (HP) Mu filter 801, an absolute value detector 802 and a low pass (LP) Mu filter 803. As shown in Figure 8, the high pass Mu filter and the low pass Mu filter are 3x7 filters. In some implementations, the HP Mu filter 801 is complementary to the LP Mu filter 803. The configuration shown may be used to estimate the local mean motion vector. To reduce compression artifacts (such as the Gibbs effect), the HP Mu filter 801 may be configured to include the impulse response shown in Equation 21.

The HP Mu filter 801 may be coupled with the absolute value detector 802. The absolute value detector 802 may be configured to provide the strength of the variation of the motion vector's component. The LP Mu filter 803 may be coupled with the absolute value detector 802. The LP Mu filter 803 may be configured to estimate the local mean strength of the MV deviation from the provided strength variation. The LP Mu filter 803 may be configured to include the impulse response given in Equation 22.
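The Mu impulse responses of Equations (21) and (22) are not reproduced in this excerpt; the Python sketch below stands in 3x7 box filters for them while keeping the high-pass, rectify, low-pass structure of Figure 8.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mv_deviation(component: np.ndarray) -> np.ndarray:
    """Sketch of the Figure 8 chain: high-pass the MV component (801),
    rectify (802), then low-pass the magnitude (803) to estimate the
    local mean strength of the MV deviation.  3x7 box kernels stand in
    for the Mu responses of Equations (21)-(22)."""
    local_mean = uniform_filter(component.astype(float), size=(3, 7))
    hp = component - local_mean              # complementary high-pass
    return uniform_filter(np.abs(hp), size=(3, 7))
```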

[00105] Referring again to Figure 7, a local estimate metric of MV horizontal component deviation may be provided by motion vector deviation calculator 708. To detect an unusual or "bizarre" motion, this value may be obtained by a horizontal comparator 711. The horizontal comparator 711 may be configured to compare the obtained value to a horizontal motion threshold (MTh). Similarly, the same procedure may be implemented for the vertical components of MEF and MEB via a second adder 702, MVDEV 709 and a vertical comparator 712. The vertical comparator 712 may be configured to compare the obtained value to a vertical motion threshold (MTv). The value of the motion thresholds MTh and MTv may be pre-determined or dynamically configured (e.g., based on information associated with the image or device settings). In some implementations, the MTh may be configured to 20. In some implementations, the MTv may be configured to 17. The values generated by the horizontal comparator 711 and the vertical comparator 712 may be combined together via an OR gate 715 which yields a block-based bizarre motion map (BzMM(u )).

[00106] In some implementations, the forward and backward ME errors (e.g., SADF and SADB) may be combined together and used for TLV detection. The values SADF and SADB may each be compared, via a third comparator 703 and a fourth comparator 704 respectively, with an error threshold value (ETr). As shown in Figure 7, the error threshold value is 32. However, in some implementations, the error threshold value may be adaptively determined (e.g., based on the image data). As shown in Figure 7, the same error threshold is used for both the third comparator 703 and the fourth comparator 704. In some implementations, distinct error threshold values may be used for each of the third comparator 703 and the fourth comparator 704.

[00107] The value generated by the third comparator 703 and the value generated by the fourth comparator 704 may be grouped together via OR gate 707. The grouped value generated by the OR gate 707 may be provided to three successive binary add remove filters 710, 713, and 714. As shown, the binary add remove filters are three by three (3x3) filters. In some implementations, one or more of the filters may be, for example, a 2x2 filter, a 4x4 filter, or a 2x4 filter. The filters 710, 713 and 714 may be configured with indicated thresholds for decision consolidation. The value generated by the last consolidation AR filter 714 (SFB( u )) represents the detected high value map of SADF or SADB.

[00108] The local block-based characteristic (LBC2( u )) may be provided to a first frame counter 728. The local block-based characteristic (LBC0( u )) may be provided to a second frame counter 716. The block-based bizarre motion map (BzMM( u )) may be provided to a third frame counter 717. The detected high value map of SADF or SADB (SFB( u )) may be provided to a fourth frame counter 718. The frame counters 728, 716, 717, and 718 may be frame-based counters. One or more of the frame counters 728, 716, 717, and 718 may be configured to also indicate delays.

[00109] The frame counters 728, 716, 717, and 718 may be configured to provide the following frame counts TG2FC n , TG0FC n , TGMFC n and TGSFC n , respectively. Each frame counter 728, 716, 717, and 718, as shown in Figure 7, is coupled with a comparator. The frame counter 728 is coupled with a first count comparator 729. The first count comparator 729 may be configured to compare the TG2FCn frame count with the block-based image size (BISize). As shown in Figure 7, the block-based image size is multiplied by a coefficient (e). The first count comparator 729 may be configured to compare based, at least in part, on Equation 23.

TG2FC n > e * BISize (23)

[00110] The detected pixel number TG2FC n of the characteristic-2 map LBC2( u ) should be big enough when compared to a fraction of the block-based image size.

[00111] In one implementation, the value of the coefficient e utilized may be 32/64. The coefficient e may be predetermined or adaptively determined (e.g., based on information associated with the image or device settings). As configured, LBC2( u ) may correspond to flashing scene detection (FSD n ), and Equation (23) may indicate that at least half of the scene is affected by TLV. The first count comparator 729 provides an output value FSD n . The output value FSD n may be provided to an OR gate 730.

[00112] The frame counter 716 is coupled with a second count comparator 719. The second count comparator 719 may be configured to compare the TG0FC n frame count with a threshold (T LBC0 ). In the implementation shown in Figure 7, the second count comparator 719 may compare based in part on Equation (24).

TG0FC n > T LBC0 (24)

[00113] The threshold (T LBC0 ) may be provided by a first temporal hysteresis (TH) threshold generator 720. The first temporal hysteresis threshold generator 720 may be configured to generate the threshold (T LBC0 ) based at least in part on Equation (25).

T LBC0 = TB LBC0 + (TS LBC0 - TB LBC0 ) * GloTG n-1 (25)

where TB LBC0 is a first threshold value for the local block-based characteristic LBC0,

TS LBC0 is a second threshold value for the local block-based characteristic LBC0, and

GloTG n-1 is a value indicating a global temporal gradient of luminance for a previous frame.

[00114] As shown in Figure 7, the global temporal gradient of luminance for a previous frame may be provided by a frame delay 727. The threshold values (TB LBC0 and TS LBC0 ) may be determined based on information similar to that which is included in Table 2 below.

[00115] The frame counter 717 is coupled with a third count comparator 721. The third count comparator 721 may be configured to compare the TGMFC n frame count with a threshold (T MV ). In the implementation shown in Figure 7, the third count comparator 721 may compare based in part on Equation (26).

TGMFC n > T MV (26)

[00116] The threshold (T MV ) may be provided by a second temporal hysteresis threshold generator 722. The second temporal hysteresis threshold generator 722 may be configured to generate the threshold (T MV ) based at least in part on Equation (27).

T MV = TB MV + (TS MV - TB MV ) * GloTG n-1 (27)

where TB MV is a first threshold value for the motion vector,

TS MV is a second threshold value for the motion vector, and

GloTG n-1 is a value indicating a global temporal gradient of luminance for a previous frame.

[00117] As shown in Figure 7, the global temporal gradient of luminance for a previous frame may be provided by the frame delay 727. The threshold values (TB MV and TS MV ) may be determined based on information similar to that which is included in Table 2 below.

[00118] The frame counter 718 is coupled with a fourth count comparator 723. The fourth count comparator 723 may be configured to compare the TGSFC n frame count with a threshold (T SAD ). In the implementation shown in Figure 7, the fourth count comparator 723 may compare based in part on Equation (28).

TGSFC n > T SAD (28)

[00119] The threshold (T SAD ) may be provided by a third temporal hysteresis threshold generator 724. The third temporal hysteresis threshold generator 724 may be configured to generate the threshold (T SAD ) based at least in part on Equation (29).

T SAD = TB SAD + (TS SAD - TB SAD ) * GloTG n-1 (29)

where TB SAD is a first threshold value for the sum of absolute differences,

TS SAD is a second threshold value for the sum of absolute differences, and

GloTG n-1 is a value indicating a global temporal gradient of luminance for a previous frame.

[00120] As shown in Figure 7, the global temporal gradient of luminance for a previous frame may be provided by a frame delay 727. The threshold values (TB SAD and TS SAD ) may be determined based on information similar to that which is included in Table 2 below.

[00121] In Equation (23), the threshold may be a single value. In contrast, the thresholds T LBC0 , T MV and T SAD in Equations (24), (26) and (28), respectively for the characteristic-0, the motion vector variation and the error SAD, may be expressed as double values: a big threshold value (TB) and a small threshold value (TS), with the threshold value selected by a temporal hysteresis (TH) device.
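All three thresholds share the same temporal-hysteresis form, so a single helper suffices. A minimal Python sketch of Equations (25), (27) and (29):

```python
def hysteresis_threshold(tb: float, ts: float, glo_tg_prev: bool) -> float:
    """Temporal-hysteresis threshold T = TB + (TS - TB) * GloTG(n-1):
    the small threshold TS once a global temporal gradient was flagged
    on the previous frame, the big threshold TB otherwise."""
    return tb + (ts - tb) * float(glo_tg_prev)
```

For example, T LBC0 may be computed as hysteresis_threshold(TB_LBC0, TS_LBC0, glo_tg_prev) before applying the count comparison of Equation (24).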

[00122] The following table summarizes the thresholds TB and TS which may be utilized in the previous equations:

         T LBC0             T MV               T SAD
TS       (17/64) * BISize   (16/64) * BISize   (15/64) * BISize

[00123] As shown in Figure 7, A, B and C may represent the respective outputs of the comparators 719, 721, and 723. D may represent the signal ATLVD n provided from the image based TLV detector 303.

[00124] A and B may be combined via an AND gate 725 to generate GloTG n , a value indicating the global temporal gradient of luminance for the current frame. The TLV or TG effect may be identified as global to a whole scene if the detected local block-based characteristic-0 (LBC0( u )) and the detected MV variation each comprise at least a quarter of the area of a scene. The OR gate 730 may combine GloTG n and FSD n to generate a value RFC n indicating the frame as a repeat frame for correction.

[00125] The frame-based delayed version of the global decision GloTG n (GloTG n-1 ) may be used for controlling the above described temporal hysteresis TH.

[00126] The four binary signals A, B, C, and D may be provided to a local temporal gradient of luminance generator 726. The local temporal gradient of luminance generator 726 may be configured to provide the image-based decision of local temporal gradient of luminance (LocTG n ) for correction. The local temporal gradient of luminance generator 726 may be configured to implement the logic given by the following look up table (LUT):

The LUT may be provisioned to the device and stored in memory for use by the local temporal gradient of luminance generator 726. In some implementations, the local temporal gradient of luminance generator 726 may calculate the gradient based on, for example, information associated with the video data. The signal LocTG n as well as ITLVD n may be provided to the pixel-based TLV localizer 305.

[00127] Figure 9 illustrates a functional block diagram of an example of a pixel-based TLV localizer. To locate a pixel-based TLV artifact, the pixel-based TLV localizer 305 may be configured to detect four conditions at a given pixel x: (a) big ME error (BigWER( x )); (b) big MV strength (BigMS( x )); (c) luminance variation in flat region (LVF( x )); and (d) big MV deviation (BigMVD( x )).

[00128] The first condition BigWER( x ) may be detected by adding, via a first adder 901, the forward and backward ME errors, WERMF( x ) and WERMB( x ) respectively. The result of the first adder 901 may be provided to a first comparator 902. The first comparator 902 may be configured to compare the provided result with a threshold. As shown in Figure 9, the first comparator 902 is configured to determine if the provided value is greater than or equal to a threshold (Th). The threshold (Th) may be predetermined (e.g., a value of 20 as shown), or dynamically generated (e.g., based on information associated with the image or device settings). The result of the comparison may be provided to a binary add remove filter 903. The add remove filter 903 may be configured to consolidate the results. In the example shown, the add remove filter is a 3x5 filter. The thresholds for the add remove filter 903 are shown as 5 and 8. However, in other implementations, the thresholds may be different (e.g., 4, 7, 10, 2). The consolidated result will be a value indicating the magnitude of the ME error (BigWER(x)).

[00129] To generate the second condition BigMS( x ), a selector 904 is included. The selector may be configured to select the maximum horizontal and vertical component strengths (MMh( x ) and MMv( x ), respectively) between the forward and backward motion vectors (Fs( x ) and Bs( x ), respectively). The value MMh for a given pixel ( x ) may be generated based at least in part on Equation (30).

MM h = max(|F sh |, |B sh |) (30)

where F sh is a value indicating the forward horizontal motion vector, and B sh is a value indicating the backward horizontal motion vector.

[00130] The value MMh( x ) may be provided to a second comparator 905. The second comparator 905 may be configured to compare the value MMh( x ) with a horizontal threshold. The threshold may be predetermined (e.g., 10, 12, 9) or dynamically generated (e.g., based on information associated with the image or device settings). The result of the comparison may be provided to an OR gate 907.

[00131] The value of MMv for a given pixel ( x ) may be generated based at least in part on Equation (31).

MM v = (19/16) * max(|F sv |, |B sv |) (31) where F sv is a value indicating the forward vertical motion vector, and B sv is a value indicating the backward vertical motion vector.

[00132] The value MMv( x ) may be provided to a third comparator 906. The third comparator 906 may be configured to compare the value MMv( x ) with a vertical threshold. The threshold may be predetermined (e.g., 8, 10, 7) or dynamically generated (e.g., based on information associated with the image or device settings). The result of the comparison may be provided to the OR gate 907.

[00133] The result value generated by the OR gate 907 may be consolidated via appropriate binary add-remove and add filters 908, 909, and 910 to generate a BigMS( x ) map. As shown, the dimensions of the logical filters 908, 909, and 910 are (3x5), (3x5), and (3x3), respectively. However, other dimensions for the logical filters may be applied depending on the implementation. Furthermore, the thresholds for the AR filter 908 are given in the figure as 5 and 8.
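A Python sketch of the BigMS( x ) condition of Equations (30) and (31) together with OR gate 907 follows, using the example thresholds from the text as defaults; the consolidating filters 908 through 910 are omitted, and the names are illustrative.

```python
import numpy as np

def big_mv_strength(fs_h, bs_h, fs_v, bs_v, th_h=10.0, th_v=8.0):
    """Equations (30)-(31) and OR gate 907: take the larger of the
    forward/backward component magnitudes (vertical scaled by 19/16) and
    flag pixels where either exceeds its threshold."""
    mm_h = np.maximum(np.abs(fs_h), np.abs(bs_h))                  # Eq (30)
    mm_v = (19.0 / 16.0) * np.maximum(np.abs(fs_v), np.abs(bs_v))  # Eq (31)
    return (mm_h > th_h) | (mm_v > th_v)                           # OR 907
```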

[00134] The third condition indicative of luminance variation in flat region (LVF( x )) can be generated for a given pixel ( x ). An adder 911 may be configured to generate the difference between the present image luminance for the given pixel (Y n ( x )) and the previous image luminance for the given pixel (Y n-1 ( x )). The result may be provided to a low-pass filter 912. The LP filter 912 may include the impulse response of Equation (32).

An absolute value circuit 913 may obtain the low-pass filtered result and generate a value indicating the magnitude of the low-passed image difference. The value generated by the absolute value circuit 913 may be filtered via MLP filter 914. The MLP filter 914 may include the impulse response of Equation (33).

[00135] The filtered result may be provided to a comparator 915. The comparator 915 may be configured to compare the filter result value generated by filter 914 to a threshold (TD). For example, the threshold may be preconfigured (e.g., 40) or dynamically generated (e.g., based on information associated with the image or device settings). As shown, the comparator 915 is configured to generate a value indicating whether the filtered value is greater than the threshold. The result of the comparison may be provided to an add-reduce filter 916 for consolidation. The AR filter 916 may be a binary (3x5) AR filter. The thresholds for the AR filter 916 shown in Figure 9 are 5 and 8. In some implementations, the thresholds may be other values (e.g., 2 and 3, 4 and 7, 5 and 9) and/or dynamically generated (e.g., based on information associated with the image or device settings). The filtered result will be a value indicating luminance variation in flat region (LVF(x)) for the pixel of interest ( x ).

[00136] To generate the fourth condition value indicating large motion vector deviation (BigMVD( x )) for a given pixel ( x ), adders 917 and 920 are provided. The adder 917 may be configured to generate the difference between the forward horizontal vector for the pixel (F s ( x )) and the backward horizontal vector for the pixel (B s ( x )). The result (MVh( x )) may be provided to a deviation generator 918. The deviation generator 918 may be configured to estimate the deviation of the horizontal motion vector. The deviation estimate may be provided to a comparator 919. The comparator 919 may be configured to compare the deviation estimate to a threshold. As shown, the comparator 919 is configured to determine whether the deviation estimate is greater than or equal to a threshold (MTh). The threshold may be a static value (e.g., 3, 10, or 20) or dynamically generated (e.g., based on information associated with the image or device settings). The result of the comparison may be provided to an OR gate 923.

[00137] The adder 920 may be configured to generate the difference between the forward vertical vector for the pixel (F sv ( x )) and the backward vertical vector for the pixel (B sv ( x )). The result (MVv( x )) may be provided to a deviation generator 921. The deviation generator 921 may be configured to estimate the deviation of the vertical motion vector. The deviation estimate may be provided to a comparator 922. The comparator 922 may be configured to compare the deviation estimate to a threshold. As shown, the comparator 922 is configured to determine whether the deviation estimate is greater than or equal to a threshold (MTv). The threshold may be a static value (e.g., 3, 10, or 20) or dynamically generated (e.g., based on information associated with the image or device settings). The result of the comparison may be provided to the OR gate 923.

[00138] If either the horizontal or vertical comparisons provide an indication of large motion vector deviation for the pixel, the OR gate 923 may be configured to provide a positive indication for this characteristic as the value for BigMVD( x ).

[00139] The results of the four conditions namely BigWER( x ), BigMS( x ), LVF( x ) and BigMVD( x ) may be combined together via an AND gate 924 to generate a preliminary detection of TLV artifact location. This detection may be somewhat isolated or discontinuous. In some implementations, the detection may be further processed to accommodate this result. As shown in Figure 9, an add-remove filter 925 is provided. The add-remove filter 925 may be a (3x5) AR filter. The thresholds for the AR filter 925 shown in Figure 9 are 5 and 8. In some implementations, the thresholds may be other values (e.g., 2 and 3, 4 and 7, 5 and 9) and/or dynamically generated (e.g., based on information associated with the image or device settings). A subsequent add filter 926 may also be included. The add filter, as shown in Figure 9, is a 3x3 add filter.

[00140] The consolidated result may be validated via an AND gate 927. The validation may include an enable decision signal (ITLVD n ) which may be provided by the image TLV detector 303.

[00141] In some implementations, the validation result (TEGDET( x )) may not be compact. In such implementations, additional processing may be included. For example, two add-inside filters 928 and 929 of respective dimension (3x1) and (1x5) may be used for the hole-filling process. The vertical add-inside (3 x 1) filter 928 may be configured to generate a signal similar to that generated by Equation (34).

The horizontal add-inside (1 x (2K+1)) filter 929 may be configured to generate a signal defined by Equation (35).

Let LC( x ) = ∪ n=1..K In(c - n, r) and RC( x ) = ∪ n=1..K In(c + n, r)

Out( x ) = 1, if In( x ) = 1 or (LC( x ) = 1 and RC( x ) = 1)
        = 0, otherwise (35)

In these Equations (34) and (35), In( x ) and Out( x ) denote, respectively, the filter input and output at the pixel x , which may, in some implementations, be expressed in terms of column and row location values.
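Under the reconstruction of Equation (35) above, the horizontal add-inside filter can be sketched as follows; k=2 corresponds to the (1x5) filter 929, the vertical (3x1) filter 928 of Equation (34) is the analogous operation along rows, and the border handling (missing neighbors treated as zero) is an assumption.

```python
import numpy as np

def add_inside_horizontal(inp: np.ndarray, k: int = 2) -> np.ndarray:
    """Equation (35): fill a hole at x when at least one set pixel lies
    within k columns on BOTH sides of x."""
    inp = inp.astype(bool)
    lc = np.zeros_like(inp)   # LC(x): union of the k pixels to the left
    rc = np.zeros_like(inp)   # RC(x): union of the k pixels to the right
    for n in range(1, k + 1):
        lc[:, n:] |= inp[:, :-n]
        rc[:, :-n] |= inp[:, n:]
    return inp | (lc & rc)
```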

[00142] The result (TEGD( x )) may be enabled via AND gate 930 in conjunction with the decision LocTG n provided previously from the block-based local-global TLV detector 304. The output of the AND gate 930 may be a value representing the TLVLOC( x ) map, which may be used to localize the TLV artifact to be corrected.

[00143] Figure 10 illustrates a functional block diagram of an example of a TLV correction circuit. The TLV correction circuit 208 may include two parts: a first part for TLV local correction 1050 and a second part for TLV global image correction 1075.

[00144] The TLV local correction part 1050 may provide a soft blending between the normal MC interpolated image I mc,a ( x HD ) generated by the HDME/HDMC 206 and a nonlinear interpolated image IN( x HD ) generated by an adder 1012. The nonlinear interpolated image IN( x HD ) may be generated based on the two existing input images I n ( x HD ) and I n-1 ( x HD ).

[00145] To generate the nonlinear interpolated image, in some implementations, if alpha (a) denotes the relative distance of the interpolated image (I mc,a ( x HD )) with respect to the position of the present image (I n ( x HD )), then instead of an interpolation that is a linear function of a, the interpolation may be performed at another position (a') in a nonlinear manner. A non-linear interpolation circuit 1001 may be configured to generate a' by the following equation:

a' = 2a², if a < 1/2
a' = 1 - 2(1 - a)², if a ≥ 1/2 (36)

Noting that Equation (36) is a non-linear relation that is a function of only the single variable a; in some implementations, the relationship may instead be a function of both alpha and a detected reliability provided from the image difference histogram. When the a' value is calculated, the image IN( x HD ) is generated, via adder 1010, multiplier 1011, and adder 1012, based on the following expression:

IN( x HD ) = I n ( x HD ) + a' * (I n-1 ( x HD ) - I n ( x HD )) (37)

To make the blending for local correction, the reduced resolution position map TLVLOC( x ) is up-sampled via an up-sampler 1003. The up-sampler 1003 may be configured to upsample the reduced resolution position map by a factor UxU for HD resolution, where U is an up-sampling quantity. The map generated by the up-sampler 1003 may be provided to a MeLP (3x3) filter 1002 to provide the mixing signal m( x HD ). The filter 1002 may include an impulse response such as given by Equation (38).

The TLV locally corrected image, denoted as I mc+1,a ( x HD ), may be generated as an output resulting from an adder 1013, a multiplier 1014, and an adder 1015 as illustrated by Figure 10. The TLV locally corrected image may be generated based on Equation (39):

I mc+1,a ( x HD ) = I mc,a ( x HD ) + m( x HD ) * (IN( x HD ) - I mc,a ( x HD )) (39)
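Equations (36), (37) and (39) combine into a short routine. A Python sketch follows, with hypothetical names: i_n and i_prev are the present and previous input images, i_mc is the MC interpolated image, and m is the up-sampled mixing map with values in [0, 1].

```python
def tlv_local_correction(i_n, i_prev, i_mc, m, alpha):
    """First part 1050: warp the interpolation position (Equation (36)),
    build the nonlinear interpolated image (Equation (37)), then soft
    blend it with the MC image under the mixing map m (Equation (39))."""
    if alpha < 0.5:
        alpha_p = 2.0 * alpha ** 2                 # a' for a < 1/2
    else:
        alpha_p = 1.0 - 2.0 * (1.0 - alpha) ** 2   # a' for a >= 1/2
    i_nl = i_n + alpha_p * (i_prev - i_n)          # Equation (37)
    return i_mc + m * (i_nl - i_mc)                # Equation (39)
```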

[00146] The second part 1075 may be configured to provide temporal hard mixing between the first corrected image I mc+1,a ( x HD ) generated by the adder 1015 and a repeated image IR( x HD ). The repeated image IR( x HD ) value may be generated based on the present or the previous input images I n ( x HD ) and I n-1 ( x HD ). The images may be provided to a first adder 1020. The first adder 1020 may be configured to generate a difference between the two images. This difference may be provided to a multiplier 1021. The multiplier 1021 may be configured to multiply the difference by a binary selection control signal a''.

[00147] The binary control signal may be generated by a comparator 1004. The comparator 1004 may generate the binary control signal based on the interpolation plan alpha (a). For example, as shown in Figure 10, alpha is compared to a fixed value. In the implementation shown, a determination is made as to whether alpha is less than a middle value (such as ½). As such, the binary selection control signal a'' may be set equal to 1 if a is greater than ½ and to 0 otherwise. One of the equations shown in Equation (40) may be implemented to generate the repeated image IR( x HD ), for example:

IR( x HD ) = I n ( x HD ) + a'' * (I n-1 ( x HD ) - I n ( x HD )) (40)

For example, in Figure 10, the first adder 1020, coupled with the multiplier 1021, coupled with a second adder 1022, may generate the repeated image IR( x HD ).

[00148] Similarly, an adder 1023, a multiplier 1024, and an adder 1025, as illustrated in Figure 10, may be configured to generate the TLV globally corrected image denoted as I mc+2,a ( x HD ). An example configuration may be expressed as shown in Equation (41).
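Equation (41) is not reproduced in this excerpt; the Python sketch below therefore assumes the final mixing step mirrors the adder, multiplier, adder structure of Equation (39), hard-switched by the binary repeat-frame decision RFC n .

```python
def tlv_global_correction(i_n, i_prev, i_mc1, rfc, alpha):
    """Second part 1075: build the repeated image from the nearer input
    (comparator 1004 and Equation (40)), then hard-switch to it when the
    repeat-frame decision rfc is set (assumed form of Equation (41))."""
    a_pp = 1.0 if alpha > 0.5 else 0.0           # binary control a''
    i_rep = i_n + a_pp * (i_prev - i_n)          # Equation (40)
    return i_mc1 + float(rfc) * (i_rep - i_mc1)  # assumed Equation (41)
```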

[00149] Accordingly, the TLV globally corrected image may be provided for presentation or further processing.

[00150] Figure 11 illustrates a process flow diagram for an example of a method of correcting temporal luminance variation artifacts during frame rate conversion. The method of Figure 11 may be implemented in whole or in part by one or more of the devices described herein.

[00151] The process begins at block 1102 where TLV is detected between a first image and a second image. The detection may be based on edge information for the images, TLV characteristics of the images, and motion estimation between the images. At block 1104, the location of TLV artifacts in an interpolated image between the first image and the second image is determined. At block 1106, the interpolated image may be modified based on the determination. For example, the modification may include local and/or global TLV correction as described above.

[00152] Figure 12 illustrates a functional block diagram of an example apparatus for correcting the temporal luminance variation (TLV) artifacts during frame rate conversion. Those skilled in the art will appreciate that a TLV correction apparatus may have more components than the simplified apparatus 1200 shown in Figure 12. The apparatus 1200 shown includes only those components useful for describing some prominent features of implementations within the scope of the claims. The apparatus 1200 includes a TLV detector 1202, a TLV localizer 1204, and a TLV correcting circuit 1206.

[00153] The TLV detector 1202 may be configured to detect TLV between a first image and a second image based on edge information, TLV characteristics, and motion estimation. The TLV detector 1202 may include one or more of a comparator, a filter, a processor, a logic gate, a memory, and an arithmetic unit. In some implementations, the means for TLV detection may include the TLV detector 1202.

[00154] The TLV localizer 1204 may be configured to determine the location of TLV artifacts in an interpolated image between the first image and the second image. The TLV localizer 1204 may include one or more of a comparator, a filter, a processor, a logic gate, a memory, and an arithmetic unit. In some implementations, the means for TLV localizing may include the TLV localizer 1204.

[00155] The TLV correcting circuit 1206 may be configured to modify the interpolated image based on the TLV determination. The TLV correcting circuit 1206 may include one or more of a comparator, a filter, a processor, a logic gate, a memory, and an arithmetic unit. In some implementations, the means for modifying an image may include the TLV correcting circuit 1206.

[00156] As used herein, the terms "determine" or "determining" encompass a wide variety of actions. For example, "determining" may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, "determining" may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, "determining" may include resolving, selecting, choosing, establishing and the like.

[00157] As used herein, the terms "provide" or "providing" encompass a wide variety of actions. For example, "providing" may include storing a value in a location for subsequent retrieval, transmitting a value directly to the recipient, transmitting or storing a reference to a value, and the like. "Providing" may also include encoding, decoding, encrypting, decrypting, validating, verifying, and the like.

[00158] As used herein, a phrase referring to "at least one of" a list of items refers to any combination of those items, including single members. As an example, "at least one of: a, b, or c" is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.

[00159] The various operations of methods described above may be performed by any suitable means capable of performing the operations, such as various hardware and/or software component(s), circuits, and/or module(s). Generally, any operations illustrated in the Figures may be performed by corresponding functional means capable of performing the operations.

[00160] The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.

[00161] In one or more aspects, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects computer readable medium may comprise non-transitory computer readable medium (e.g., tangible media). In addition, in some aspects computer readable medium may comprise transitory computer readable medium (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.

[00162] The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.

[00163] The functions described may be implemented in hardware, software, firmware or any combination thereof. If implemented in software, the functions may be stored as one or more instructions on a computer-readable medium. A storage media may be any available media that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.

[00164] Thus, certain aspects may comprise a computer program product for performing the operations presented herein. For example, such a computer program product may comprise a computer readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described herein. For certain aspects, the computer program product may include packaging material.

[00165] Software or instructions may also be transmitted over a transmission medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of transmission medium.

[00166] Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described herein can be downloaded and/or otherwise obtained by an encoding device and/or decoding device as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described herein. Alternatively, various methods described herein can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described herein to a device can be utilized.

[00167] It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.

[00168] While the foregoing is directed to aspects of the present disclosure, other and further aspects of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.