

Title:
AN IMAGE PROCESSING METHOD AND A CORRESPONDING DEVICE
Document Type and Number:
WIPO Patent Application WO/2015/144566
Kind Code:
A1
Abstract:
An image processing method for estimating a color transform function between first image data and second image data is disclosed. The color transform function is composed of at least a first color mapping function and a second color mapping function. Estimating the color transform function comprises, for at least an iteration k, k being an integer: a) estimating (S10) the second color mapping function and an inverse of the second color mapping function from the second image data and from the first image data transformed by the first color mapping function estimated at iteration k-1; b) estimating (S12) the first color mapping function from the first image data and from the second image data transformed by the inverse of the second color mapping function estimated at step a).

Inventors:
BORDES PHILIPPE (FR)
LASSERRE SÉBASTIEN (FR)
ANDRIVON PIERRE (FR)
Application Number:
PCT/EP2015/055834
Publication Date:
October 01, 2015
Filing Date:
March 19, 2015
Assignee:
THOMSON LICENSING (FR)
International Classes:
H04N1/60; H04N9/67
Foreign References:
EP1729257A2 (2006-12-06)
US20060082843A1 (2006-04-20)
Attorney, Agent or Firm:
LORETTE, Anne et al. (Issy-Les-Moulineaux, FR)
Claims:

1. An image processing method for estimating a color transform function between first image data and second image data, said color transform function being composed of at least a first color mapping function and a second color mapping function, wherein estimating said color transform function comprises, for at least an iteration k, k being an integer:

a) estimating (S10) said second color mapping function and an inverse of said second color mapping function from said second image data and from said first image data transformed by said first color mapping function estimated at iteration k-1;

b) estimating (S12) said first color mapping function from said first image data and from said second image data transformed by said inverse of said second color mapping function estimated at step a).

2. The method of claim 1, wherein the color transform function is composed of a 1D Look-Up Table, a Matrix and a 1D LUT.

3. A computer program product comprising program code instructions to execute the steps of the image processing method according to claim 1 or 2 when this program is executed on a computer.

4. A processor readable medium having stored therein instructions for causing a processor to perform at least the steps of the image processing method according to claim 1 or 2.

5. An image processing device comprising at least one processor configured to estimate a color transform function between first image data and second image data, said color transform function being composed of at least a first color mapping function and a second color mapping function, estimating said color transform function comprising for at least an iteration k, k being an integer:

a) estimating said second color mapping function and an inverse of said second color mapping function from said second image data and from said first image data transformed by said first color mapping function estimated at iteration k-1 ;

b) estimating said first color mapping function from said first image data and from said second image data transformed by said inverse of said second color mapping function estimated at step a).

6. The device of claim 5, wherein the color transform function is composed of a 1D Look-Up Table, a Matrix and a 1D LUT.

Description:
AN IMAGE PROCESSING METHOD AND A CORRESPONDING DEVICE

1. FIELD OF THE INVENTION

The invention relates to an image processing method for estimating a color transform function, also known as a color mapping function, between first image data and second image data, the color transform function being composed of at least a first color mapping function and a second color mapping function.

2. BACKGROUND OF THE INVENTION

Color images and videos are usually captured using tri-chromatic cameras into RGB raw data composed of 3 images (Red, Green, Blue). The RGB signal values depend on the tri-chromatic characteristics (color primaries) of the sensor. Given the particular properties of the Human Visual System, these data are transformed into a YUV signal to facilitate the encoding, where Y is the main component and U and V (chromaticity) are secondary components to which the human eye is less sensitive. Several YUV formats are used in the industry. For example, ITU-R Rec.601 defines studio encoding parameters of Standard Digital Television for standard 4:3 and wide-screen 16:9 aspect ratios. ITU-R Rec.709 defines parameters for High Definition Television (HDTV) and ITU-R BT.2020 defines parameter values for Ultra-High Definition Television systems (UHDTV).
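For concreteness, the RGB-to-YUV transformation referenced above can be sketched as a non-constant-luminance conversion using the Rec.709 luma coefficients. This is an illustrative sketch only; the chroma scaling convention shown is one common choice and is not taken from this document:

```python
# Sketch: non-constant-luminance R'G'B' -> Y'CbCr conversion with the
# ITU-R Rec.709 luma coefficients, for normalized values in [0, 1].

KR, KG, KB = 0.2126, 0.7152, 0.0722  # Rec.709 luma coefficients

def rgb_to_yuv709(r, g, b):
    """Convert gamma-encoded R'G'B' to (Y', Cb, Cr), all normalized."""
    y = KR * r + KG * g + KB * b
    cb = (b - y) / (2 * (1 - KB))   # scaled so cb, cr lie in [-0.5, 0.5]
    cr = (r - y) / (2 * (1 - KR))
    return y, cb, cr
```

White maps to (1, 0, 0) and pure red reaches the chroma extreme cr = 0.5, as expected for this scaling.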

All YUV formats are characterized by Gamma and Color primaries parameters that define the RGB-to-YUV and YUV-to-RGB conversions. This transformation can be applied in two ways: Constant Luminance (CL) or Non-Constant Luminance (NCL), as depicted with thin lines on Figure 1. A display then transforms the RsGsBs signal (standardized color primaries) into an RDisplayGDisplayBDisplay signal corresponding to the color primaries of the display, as depicted with bold lines on Figure 1.

In order to support non-standardized YUV signal representations, or to support conversions between two (standardized) YUV formats (YUV1-to-YUV2) that are color graded differently, and to preserve artistic intent, it is known to explicitly signal the YUV1-to-RGB2 or the YUV1-to-YUV2 (color mapping) transform to the display so that the display is able to apply the appropriate signal conversion.

There is thus a need for a method for estimating a color transform (also known as a color mapping function, CMF) given the input and output video instances (i.e. Y1U1V1 and Y2U2V2 as depicted on figure 2, or Y1U1V1 and R2G2B2).

3. BRIEF SUMMARY OF THE INVENTION

An image processing method for estimating a color transform function between first image data and second image data is disclosed. The color transform function is composed of at least a first color function and a second color function. Estimating the color transform function comprises, for at least an iteration k, k being an integer:

a) estimating the second color function and an inverse of the second color function from the second image data and from the first image data transformed by the first color function estimated at iteration k-1;

b) estimating the first color function from the first image data and from the second image data transformed by the inverse of the second color function estimated at step a).

According to a specific characteristic of the invention, the color transform function is composed of a 1D Look-Up Table, a Matrix and a 1D LUT.

The image processing method allows for determining the parameters of a complex color transform function composed of a combination of several multidimensional functions. It allows achieving a robust estimation of the model parameters.

A computer program product is also disclosed that comprises program code instructions to execute the steps of the image processing method according to claim 1 or 2 when this program is executed on a computer.

A processor readable medium is disclosed that has stored therein instructions for causing a processor to perform at least the steps of the image processing method.

An image processing device comprising at least one processor configured to estimate a color transform function between first image data and second image data is disclosed. The color transform function is composed of at least a first color function and a second color function. Estimating the color transform function comprises, for at least an iteration k, k being an integer:

a) estimating the second color function and an inverse of the second color function from the second image data and from the first image data transformed by the first color function estimated at iteration k-1;

b) estimating the first color function from the first image data and from the second image data transformed by the inverse of the second color function estimated at step a).

4. BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings, an embodiment of the present invention is illustrated. It shows:

- Figure 1 depicts YUV to RGB conversion using Constant Luminance or Non-Constant Luminance according to the state of the art;

- Figures 2 and 3 depict color mapping transformation models according to the state of the art;

- Figure 4 represents a piece-wise linear function;

- Figure 5 represents a flowchart of an image processing method for estimating a color transform between first image data and second image data according to a specific and non-limitative embodiment of the invention; and

- Figure 6 represents an exemplary architecture of an image processing device according to a specific and non-limitative embodiment of the invention.

5. DETAILED DESCRIPTION OF THE INVENTION

In the following, the expressions "color transform", "color mapping model" and "color mapping function" are used interchangeably. In order to avoid developing new (hardware) capability and in order to re-use existing display systems, the Color Mapping Function (CMF) is often modeled as a combination of one-dimensional non-linear mapping functions (possibly implemented via a 1-dimensional piecewise linear function with a 1D LUT) and a 3x3 matrix M, as depicted on figure 2. One typical use case is a video content that has been color graded twice: once (I1) with a standardized format std1 and a second time (I2) with a standardized format std2. In the following, I1 and I2 represent either two videos or two images. As depicted on figure 3, I1, color graded with std1, is distributed (broadcasting, broadband, DVD, Blu-Ray...) to an end display device 30 that only supports the std2 format. Consequently, on a transmitter side, a CMF is estimated by a module 10 between the first color graded version I1 and the second color graded version I2. Color mapping metadata representative of the estimated CMF and the first color graded version I1 are made available to the end display device 30 on a receiver side. The first color graded version may be encoded in an HEVC or an AVC bitstream while the color mapping metadata are for example encoded as metadata in a header of the bitstream or out-of-band. From the first color graded version of the video content I1, or from an estimation of it (e.g. its decoded version Î1), and from the color mapping metadata, the end display device 30 is able to generate another version Î2 that approximates the second color graded version I2. Specifically, the end display device 30 applies the CMF on the first color graded version I1, or on its decoded version Î1, to obtain the approximated second color graded version Î2. In such a case, there is no need to transmit both the first and second color graded versions to the end user display.
This solution makes it possible to save bandwidth. There is thus a need to determine on the transmitter side a CMF from the first and second color graded versions. More generally, there is a need to determine on the transmitter side the CMF from first and second image data, namely 11 and I2.

Figure 4 represents a flowchart of an image processing method for estimating a color mapping function CMF between first image data I1 and second image data I2 according to a specific and non-limitative embodiment of the invention. The color mapping function CMF is composed of at least a first color mapping function F1 and of a second color mapping function F2: CMF = F1 o F2, where o is the composition operator. The functions F1 and F2 are initialized at an iteration k=0, for example by an identity function, i.e. F1,0(x)=x and F2,0(x)=x.

In step S10, at an iteration k, the second color mapping function F2,k and an inverse of the second color mapping function F2,k^-1 are estimated from the second image data I2 and from the first image data transformed by the first color mapping function estimated at iteration k-1, i.e. F1,k-1(I1).

In step S12, the first color mapping function F1,k is estimated from the first image data I1 and from the second image data transformed by the inverse of the second color mapping function estimated at step S10, i.e. from F2,k^-1(I2). At each step, only one function is determined. The whole process can itself be iterated several times until a stop criterion is reached. The stop criterion may be a number of iterations. In this case, the steps are iterated until the number of iterations is equal to K, K being an integer, e.g. K=10. In a variant, the steps are iterated until the absolute variation of the color mapping function parameters (e.g. between two (or more than two) consecutive iterations) is below a threshold value. The color mapping functions F1 and F2 may be 3x3 matrices or 1D Look-Up Tables. Figure 5 represents a color mapping function according to a specific and non-limiting embodiment of the invention. The color mapping function is composed of 3 color mapping functions F1, M and F2. Specifically, F1 and F2 are sets of three mapping functions fZi, where Zi ∈ {Y1, U1, V1, Y2, U2, V2}. Said otherwise, the color mapping function is composed of 7 color mapping functions (fY1, fU1, fV1, M, fY2, fU2, fV2). The functions fZi are 1-dimensional piecewise linear (e.g. 1D LUT), and the function M is a 3x3 linear matrix. This specific color transform is based on the combination of existing functions implemented in many screens, displays and TVs. They could be used to implement any kind of color transform, e.g. in the case where the color grading is color space dependent.
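The alternating scheme of steps S10 and S12 can be sketched as follows. This is a non-authoritative sketch: the estimator `fit` is a hypothetical stand-in (here a per-channel affine least-squares fit) for the LSM estimation of a 1D LUT or matrix described later, and the function names are not from the patent:

```python
# Sketch of the alternating estimation of the CMF (steps S10/S12),
# where I2 is approximated by F2(F1(I1)).

import numpy as np

def fit(src, dst):
    """Fit dst ~ a*src + b (stand-in for a 1D LUT / matrix LSM fit)."""
    a, b = np.polyfit(src.ravel(), dst.ravel(), 1)
    return lambda x, a=a, b=b: a * x + b

def fit_with_inverse(src, dst):
    f = fit(src, dst)
    f_inv = fit(dst, src)   # dual model: the inverse is estimated directly
    return f, f_inv

def estimate_cmf(I1, I2, K=10):
    F1 = lambda x: x        # k = 0: identity initialization
    for _ in range(K):
        # step S10: estimate F2 (and its inverse) from F1(I1) -> I2
        F2, F2_inv = fit_with_inverse(F1(I1), I2)
        # step S12: estimate F1 from I1 -> F2^-1(I2)
        F1 = fit(I1, F2_inv(I2))
    return F1, F2
```

With the stop criterion chosen here as a fixed number of iterations K, each pass re-estimates exactly one function from data transformed by the other, as in the text.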

The functions fZi: S → Y, from the set E1 to the set E2 and from the set E3 to the set E4, are described and implemented using 1D LUTs, corresponding to piecewise linear functions as depicted on Figure 6. The function M, from the set E2 to the set E3, is described and implemented using a 3x3 matrix. Figure 7 represents a flowchart of an image processing method for estimating a color transform between first image data and second image data according to a specific and non-limitative embodiment of the invention. In this embodiment the color mapping function CMF is composed of the color mapping functions depicted on Figure 5. Determining the global color transform (fY1, fU1, fV1, M, fY2, fU2, fV2) is not straightforward because the values of the samples (Y, U, V) in the sets E2 and E3 are not available. Consequently, the method solves each problem separately with an iterative approach. More precisely, at each step only one function is estimated. On this figure, M is estimated first. However, the invention is independent of the order in which the functions are estimated. Exemplarily, fY2, fU2, fV2 can be estimated first, followed by M and fY1, fU1, fV1.

In an initialization step S20, the color mapping function (fY1, fU1, fV1, M, fY2, fU2, fV2) is initialized with a-priori values. Exemplarily, the functions fZi are initialized with linear monotonic functions and the matrix M is initialized with the identity matrix.

In step S22, at an iteration k, for each sample (Y1,U1,V1) from the set E1, one can determine its image in E2 using current values of the color mapping function F1,k-1. In addition, for each sample (Y2,U2,V2) from the set E4, one can determine its image in E3 using current values of the inverse color mapping function F2,k-1^-1. The determination of Mk can be done using the LSM method, solving 3 systems of 3 equations.

Yi = mi(X0, X1, X2) = gi,0·X0 + gi,1·X1 + gi,2·X2   (4)

For a set of samples ((X0, X1, X2), Yi), the quadratic error is Erri = (Yi - mi(X0, X1, X2))^2. The LSM method consists in solving a system of 9 equations built from the partial derivatives of mi() with respect to gi,j, with i=0,1,2 and j=0,1,2. The inverse matrix Mk^-1 at iteration k can be determined directly from Mk in the case where Mk is invertible, or Mk^-1 can be determined in the same way as Mk using the LSM method.
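Equation (4) and the 9 normal equations it generates are exactly what a standard linear least-squares solver computes. A minimal sketch, assuming the color samples are stacked as rows (the function name `estimate_matrix` is illustrative, not from the patent):

```python
# Sketch: least-squares estimation of the 3x3 matrix M of equation (4),
# i.e. Yi = gi,0*X0 + gi,1*X1 + gi,2*X2, from paired color samples.
# X has shape (n, 3) (samples mapped into E2), Y has shape (n, 3)
# (samples mapped into E3); the rows of M are the gi,j of the text.

import numpy as np

def estimate_matrix(X, Y):
    """Solve min_M sum ||X @ M.T - Y||^2 (the 9 normal equations of the LSM)."""
    M_T, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return M_T.T   # (3, 3): Y ~ X @ M.T  <=>  y = M @ x per sample
```

The inverse matrix can then be obtained with `np.linalg.inv` when M is invertible, or by calling `estimate_matrix(Y, X)`, mirroring the dual-model estimation described in the text.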

In step S24, at an iteration k, for each sample (Y2,U2,V2) from the set E4, one can determine its image in E2 using current values of the color mapping function, i.e. F2,k-1^-1 and Mk^-1 determined at step S22. Next, (fY1, fU1, fV1) at iteration k, i.e. F1,k, can be determined, for example, using the method disclosed in the paper from Cantoni entitled "Optimal Curve Fitting With Piecewise Linear Functions," IEEE Transactions on Computers, Vol. C-20, No. 1, January 1971, which is hereby incorporated by reference.

In step S26, at an iteration k, for each sample (Y1,U1,V1) from the set E1, one can determine its image in E3 using current values of the color mapping function, i.e. F1,k determined at step S24 and Mk determined at step S22. Next, (fY2, fU2, fV2) at iteration k, i.e. F2,k, can be determined, for example, using the method of Cantoni. The inverse functions (fY2^-1, fU2^-1, fV2^-1), i.e. F2,k^-1, are determined at iteration k using, for example, the same method (LSM) as for estimating (fY2, fU2, fV2). In the case where (fY2, fU2, fV2) are invertible, (fY2^-1, fU2^-1, fV2^-1), i.e. F2,k^-1, are determined directly from (fY2, fU2, fV2) estimated at iteration k. However, these functions may not be invertible if they are not strictly monotonic. Even when they are strictly monotonic, one can face numerical precision issues while implementing such an inverse function from a piecewise linear representation.
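For the invertible case, a strictly monotonic piecewise-linear function can be inverted simply by swapping the roles of its abscissae and ordinates. A small sketch (the node values below are arbitrary placeholders, not data from the patent):

```python
# Sketch: inverting a strictly increasing piecewise-linear 1D LUT by
# swapping its abscissae (X) and ordinates (L). np.interp evaluates both
# the function and its inverse by linear interpolation.

import numpy as np

X = np.array([0.0, 0.25, 0.5, 0.75, 1.0])   # node abscissae Xi
L = np.array([0.0, 0.1, 0.4, 0.8, 1.0])     # node values Li = f(Xi)

def f(s):
    return np.interp(s, X, L)

def f_inv(y):
    # Valid only because L is strictly increasing; otherwise the inverse
    # must itself be re-estimated by least squares, as the text notes.
    return np.interp(y, L, X)
```

Round-tripping f_inv(f(s)) recovers s exactly at machine precision here; with quantized LUT entries the numerical precision issues mentioned in the text appear.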

The iterative principle comprises determining step by step each function, with possible iteration on all or part of the process, as depicted for example on Figure 7. In addition, it comprises estimating a dual model composed of the inverse functions (fY2^-1, fU2^-1, fV2^-1) determined at iteration k using, for example, the same method (LSM) as for estimating (fY2, fU2, fV2). In the same way, a dual model composed of the inverse of M is also estimated. Alternatively, if M is invertible, an algorithm for inverting matrices can be used directly. These dual models are used to determine the other functions ((fY1, fU1, fV1) or M). In that way, one can determine each function iteratively. At each step, one determines one function only. The whole process can itself be iterated several times until a stop criterion is reached. The stop criterion may be a number of iterations. In this case, the steps are iterated until the number of iterations is equal to K, K being an integer, e.g. K=10. In a variant, the steps are iterated until the absolute variation of the function parameters (e.g. the values L(Xi) or the matrix coefficients) between two (or more than two) consecutive iterations is below a threshold value.

The invention disclosed with 3 functions can be applied with 2 functions, e.g. a 1D LUT and a 3x3 matrix, and also with 3 or more than 3 functions, e.g. a 1D LUT followed by a 3x3 matrix followed by another 1D LUT, or a matrix followed by a 1D LUT followed by a matrix followed by another 1D LUT. Functions other than 1D LUTs and matrices can be used, such as for example polynomial transforms: (X,Y,Z) = f(x, y, z, x^2, y^2, z^2).
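To illustrate how the LUT/matrix/LUT chain is applied once estimated, the following sketch transforms one pixel per channel; the LUT contents and the identity matrix are placeholders, not values from the patent:

```python
# Sketch: applying a composed color transform "1D LUTs -> 3x3 matrix ->
# 1D LUTs" to a pixel, matching the LUT/matrix/LUT structure above.

import numpy as np

nodes = np.linspace(0.0, 1.0, 17)                # node abscissae Xi
lut_in  = [nodes**0.9, nodes**1.0, nodes**1.1]   # stand-ins for fY1, fU1, fV1
lut_out = [nodes**1.1, nodes**1.0, nodes**0.9]   # stand-ins for fY2, fU2, fV2
M = np.eye(3)                                    # placeholder 3x3 matrix

def apply_cmf(pixel):
    """pixel: length-3 array; returns the mapped length-3 array."""
    v = np.array([np.interp(pixel[c], nodes, lut_in[c]) for c in range(3)])
    v = M @ v
    return np.array([np.interp(v[c], nodes, lut_out[c]) for c in range(3)])
```

Each per-channel 1D LUT is evaluated by piecewise-linear interpolation, and the matrix mixes the three channels in between, so the same three hardware primitives cover all the variants listed above.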

The estimation of the functions fZi: S → Y is detailed below with respect to figure 6. For a given point with abscissa S ∈ [Xi; Xi+1], the image of S under fZi is Y such that:

Y = fZi(S) = Li + (Li+1 - Li)·(S - Xi)/(Xi+1 - Xi)

One has to find the optimal values for the L(Xi) that minimize the quadratic error over all pixels k: L = argminL(Σ Erri) with Erri = Σ_{Sk ∈ [Xi;Xi+1]} (Yk - f(Sk))^2 for the set of (Sk, Yk) sample values, with Sk ∈ [Xi; Xi+1] of YUV1 and Yk ∈ YUV2, for each interval [Xi; Xi+1], i=0,...,N-1. The Least Square Minimization (LSM) method comprises solving the set of equations of the partial derivatives of Erri, which is equivalent to (N+1 equations):

Z0 = Σ_{Sk∈[X0;X1]} (sk^2 - 2·sk + 1)·L0 + Σ_{Sk∈[X0;X1]} (-sk^2 + sk)·L1   (1)

Zi = Σ_{Sk∈[Xi-1;Xi]} (-sk^2 + sk)·Li-1 + [Σ_{Sk∈[Xi;Xi+1]} (sk^2 - 2·sk + 1) + Σ_{Sk∈[Xi-1;Xi]} sk^2]·Li + Σ_{Sk∈[Xi;Xi+1]} (-sk^2 + sk)·Li+1   (2)

ZN = Σ_{Sk∈[XN-1;XN]} (-sk^2 + sk)·LN-1 + Σ_{Sk∈[XN-1;XN]} sk^2·LN   (3)

where sk = (Sk - Xi)/(Xi+1 - Xi) for Sk ∈ [Xi; Xi+1], Z0 = Σ_{Sk∈[X0;X1]} (1 - sk)·Yk, Zi = Σ_{Sk∈[Xi-1;Xi]} sk·Yk + Σ_{Sk∈[Xi;Xi+1]} (1 - sk)·Yk and ZN = Σ_{Sk∈[XN-1;XN]} sk·Yk.

Equivalent to (N+1 equations):

Z0 = a0,0·L0 + a0,1·L1   (1)

Zi = ai,i-1·Li-1 + ai,i·Li + ai,i+1·Li+1   (2)

ZN = aN,N-1·LN-1 + aN,N·LN   (3)

Equivalent, in matrix form, to Z = A·L (Z and L are (N+1)x1 vectors, A is an (N+1)x(N+1) tridiagonal matrix).

N is an integer. The 1D LUT comprises (N+1) elements: L0 to LN.
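The tridiagonal system Z = A·L of equations (1)-(3) can be assembled and solved directly. A sketch under the assumption that every interval [Xi; Xi+1] contains at least one sample (otherwise A is singular); the function name `fit_lut` is illustrative:

```python
# Sketch: fitting the (N+1) LUT values L0..LN by least squares, i.e.
# building and solving the tridiagonal system Z = A*L of equations
# (1)-(3), with sk = (Sk - Xi)/(Xi+1 - Xi) for Sk in [Xi, Xi+1].

import numpy as np

def fit_lut(S, Y, X):
    """S, Y: paired samples; X: node abscissae X0..XN. Returns L0..LN."""
    N = len(X) - 1
    A = np.zeros((N + 1, N + 1))
    Z = np.zeros(N + 1)
    for i in range(N):
        in_bin = (S >= X[i]) & (S <= X[i + 1]) if i == N - 1 else \
                 (S >= X[i]) & (S < X[i + 1])
        s = (S[in_bin] - X[i]) / (X[i + 1] - X[i])
        y = Y[in_bin]
        # f(Sk) = (1 - s)*Li + s*Li+1; accumulate the normal equations,
        # noting (1 - s)^2 = s^2 - 2s + 1 and (1 - s)*s = -s^2 + s
        A[i, i]         += np.sum((1 - s) ** 2)
        A[i, i + 1]     += np.sum((1 - s) * s)
        A[i + 1, i]     += np.sum(s * (1 - s))
        A[i + 1, i + 1] += np.sum(s ** 2)
        Z[i]     += np.sum((1 - s) * y)
        Z[i + 1] += np.sum(s * y)
    return np.linalg.solve(A, Z)
```

When the data actually lie on a piecewise-linear function over the same nodes, the solver recovers the node values exactly.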

Figure 8 represents an exemplary architecture of an image processing device 100 configured to estimate a color transform function from first image data and second image data according to a specific and non-limitative embodiment of the invention. The processing device 100 comprises one or more processor(s) 110, which is (are), for example, a CPU, a GPU and/or a DSP (Digital Signal Processor), along with internal memory 120 (e.g. RAM, ROM, EPROM). The processing device 100 comprises one or several Input/Output interface(s) 130 adapted to display output information and/or allow a user to enter commands and/or data (e.g. a keyboard, a mouse, a touchpad, a webcam); and a power source 140 which may be external to the processing device 100. The device 100 may also comprise network interface(s) (not shown). The first image data and second image data may be obtained from a source. According to different embodiments of the invention, the source belongs to a set comprising:

- a local memory, e.g. a video memory, a RAM, a flash memory, a hard disk;

- a storage interface, e.g. an interface with a mass storage, a ROM, an optical disc or a magnetic support;

- a communication interface, e.g. a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth interface); and

- an image capturing circuit (e.g. a sensor such as, for example, a CCD (or Charge-Coupled Device) or CMOS (or Complementary Metal-Oxide-Semiconductor)).

According to different embodiments of the invention, the color transform may be sent to a destination. As an example, the color transform is stored in a remote or in a local memory, e.g. a video memory or a RAM, a hard disk. In a variant, the color transform is sent to a storage interface, e.g. an interface with a mass storage, a ROM, a flash memory, an optical disc or a magnetic support and/or transmitted over a communication interface, e.g. an interface to a point to point link, a communication bus, a point to multipoint link or a broadcast network.

According to an exemplary and non-limitative embodiment of the invention, the processing device 100 further comprises a computer program stored in the memory 120. The computer program comprises instructions which, when executed by the processing device 100, in particular by the processor 110, make the processing device 100 carry out the method described with reference to figure 4 or 7. According to a variant, the computer program is stored externally to the processing device 100 on a non-transitory digital data support, e.g. on an external storage medium such as an HDD, a CD-ROM, a DVD, a read-only and/or DVD drive and/or a DVD Read/Write drive, all known in the art. The processing device 100 thus comprises an interface to read the computer program. Further, the processing device 100 could access one or more Universal Serial Bus (USB)-type storage devices (e.g., "memory sticks") through corresponding USB ports (not shown).

According to exemplary and non-limitative embodiments, the processing device 100 is a device, which belongs to a set comprising:

- a mobile device;

- a communication device;

- a game device;

- a tablet (or tablet computer);

- a laptop;

- a still image camera;

- a video camera;

- an encoding chip;

- a still image server; and

- a video server (e.g. a broadcast server, a video-on-demand server or a web server).

The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.

Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.

Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM"). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.

As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.