

Title:
ADJUSTING PARAMETERS OF LIGHT EFFECTS SPECIFIED IN A LIGHT SCRIPT
Document Type and Number:
WIPO Patent Application WO/2019/233800
Kind Code:
A1
Abstract:
A system (1) is configured to obtain a first light script (13) for at least part of a first version (11) of a video item, obtain first image information extracted from the first version of the video item, and obtain second image information extracted from at least part of a second version (17) of the video item. The system is further configured to determine image differences by comparing the first image information with the second image information and determine a second light script (19) for at least part of the second version of the video item by adjusting parameters of a plurality of light effects specified in the first light script based on the determined image differences.

Inventors:
RYCROFT SIMON (NL)
THURSFIELD PAUL (NL)
ALIAKSEYEU DZMITRY (NL)
MASON JONATHAN (NL)
Application Number:
PCT/EP2019/063629
Publication Date:
December 12, 2019
Filing Date:
May 27, 2019
Assignee:
SIGNIFY HOLDING BV (NL)
International Classes:
H05B37/02; H04N9/64; H04N9/73
Domestic Patent References:
WO2007036890A22007-04-05
Foreign References:
US20100061405A12010-03-11
EP3331325A12018-06-06
US8904469B22014-12-02
Attorney, Agent or Firm:
VERWEIJ, Petronella, Danielle et al. (NL)
CLAIMS:

1. A system (1) comprising at least one processor (3) configured to:

- obtain a first light script (13) for at least part of a first version (11) of a video item, the first light script specifying a plurality of light effects that match the content of the video item,

- obtain first image information extracted from said first version (11) of said video item,

- obtain second image information extracted from at least part of a second version (17) of said video item,

- determine image differences by comparing said first image information with said second image information, wherein said image differences comprise color differences, and

- determine a second light script (19) based on said determined image differences for at least part of said second version (17) of said video item by adjusting color parameters of the plurality of light effects specified in said first light script (13) based on said determined image differences.

2. A system (1) as claimed in claim 1, wherein said at least one processor (3) is configured to determine said color differences for each of said plurality of light effects by:

- determining one or more pixels in a frame of said first version of said video item having a first color value set (35), said first color value set (35) being similar or identical to a color value set specified in relation to said light effect,

- determining a second color value set (45) from one or more color value sets of corresponding one or more pixels in a corresponding frame of said second version of said video item, and

- determining a difference between said first color value set (35) and said second color value set (45).

3. A system (1) as claimed in claim 1, wherein said at least one processor (3) is configured to determine said color differences for each of said plurality of light effects by:

- determining a first color value set from a plurality of color value sets of a plurality of pixels in a frame of said first version of said video item,

- determining a second color value set from a plurality of color value sets of a plurality of pixels in a corresponding frame of said second version of said video item, and

- determining a difference between said first color value set and said second color value set.

4. A system (1) as claimed in claim 1, wherein said at least one processor (3) is configured to determine said color differences for each of said plurality of light effects and adjust a color parameter of each of said plurality of light effects by:

- determining for each color value specified in said color parameter of said light effect a difference (65) between a quantity (61) of pixels having said color value in a frame of said first version of said video item and a quantity (63) of pixels having said color value in a corresponding frame of said second version of said video item,

- determining an adjustment value for each of said color values based on said determined quantity difference, and

- adjusting said color parameter of said light effect based on said determined adjustment values.

5. A system (1) as claimed in claim 1, wherein said at least one processor (3) is configured to:

- generate a subset of said plurality of light effects for said first light script automatically from said first version of said video item using content analysis,

- generate a corresponding plurality of light effects for said second light script automatically from said second version of said video item using said content analysis,

- determine color differences between color parameters of said subset of said plurality of light effects and color parameters of said corresponding plurality of light effects, and

- adjust parameters of manually specified ones of said plurality of light effects specified in said first light script based on said determined color differences.

6. A system (1) as claimed in any one of claims 1 to 5, wherein said at least one processor (3) is configured to determine at least some of said color differences by comparing color information determined from a plurality of frames of said first version of said video item with color information determined from a corresponding plurality of frames of said second version of said video item.

7. A system (1) as claimed in any one of claims 1 to 6, wherein said at least one processor (3) is configured to determine a region (39) of said video item for at least one of said light effects from said first light script and determine an adjustment of at least one of said parameters of said at least one light effect based on color differences in only said determined region of said video item.

8. A system (1) as claimed in any one of claims 1 to 7, wherein said at least one processor (3) is configured to disable or omit one or more of said plurality of light effects in said second light script if a determined adjustment of a color parameter of said one or more light effects would result in an undesirable color value or an undesirable set of color values, wherein the undesirable color value or the undesirable set of color values are specified by an author of the first light script or wherein the undesirable color value or an undesirable set of color values are user-configurable.

9. A system (1) as claimed in any one of the preceding claims, wherein said image differences comprise differences in image resolution, differences in shot duration and/or differences in the number of represented dimensions.

10. A system (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to obtain said first image information directly by extracting said first image information from said first version of said video item.

11. A system (1) as claimed in any one of the preceding claims, wherein said first light script specifies origin information indicating for each of said plurality of light effects how one or more parameters of said light effect were determined from said first version of said video item and said at least one processor (3) is configured to obtain said first image information from said light effect parameters and said origin information.

12. A system (1) as claimed in any one of the preceding claims, wherein said at least one processor (3) is configured to identify light effects in said first light script whose parameters should be maintained and include said identified light effects in said second light script without adjustments to their parameters.

13. A method of determining a light script, comprising:

- obtaining (101) a first light script for at least part of a first version of a video item, the first light script specifying a plurality of light effects that match the content of the video item;

- obtaining (103) first image information extracted from said first version of said video item;

- obtaining (105) second image information extracted from at least part of a second version of said video item;

- determining (107) image differences by comparing said first image information with said second image information, wherein said image differences comprise color differences; and

- determining (109) a second light script based on said determined image differences for at least part of said second version of said video item by adjusting color parameters of the plurality of light effects specified in said first light script based on said determined image differences.

14. A computer program or suite of computer programs comprising at least one software code portion or a computer program product storing at least one software code portion, the software code portion, when run on a computer system, being configured for enabling the method of claim 13 to be performed.

Description:
ADJUSTING PARAMETERS OF LIGHT EFFECTS SPECIFIED IN A LIGHT SCRIPT

FIELD OF THE INVENTION

The invention relates to an electronic device for determining a light script.

The invention further relates to a method of determining a light script.

The invention also relates to a computer program product enabling a computer system to perform such a method.

BACKGROUND OF THE INVENTION

Home entertainment lighting systems have proven to add a great deal to the experience of games, movies and music. For example, the use of light effects that match with and support content can significantly enhance the content. Such light effects can be specified in a light script.

US 2010/061405 Al discloses a method of operating a system of devices comprising receiving content to be rendered, receiving augmentation data for controlling one or more effects devices, rendering the content at a first device, controlling the effects devices according to the augmentation data in synchronization with the content rendering, monitoring the synchronization of the augmentation data, and adjusting the controlling of one or more effects devices if the synchronization of the augmentation data is determined to be in doubt.

WO 2007/036890A2 discloses a method of fully automatically creating a light script from a video item. However, the best experience, i.e. high-quality light scripts, can be realized by involving a human in the creation of a light script, e.g. by employing fully manual light script creation or manual editing of an automatically generated light script.

Color grading is the process of altering and enhancing the color of a movie during post-production. Color grading encompasses both color correction and the generation of artistic color effects. Sometimes, different palettes are created depending on the version of the movie (e.g. 2D or 3D). Light scripts can be created either before color grading or after. If they are created before the grading using the original color palette, then this can create discrepancies between the movie content and the light script. In order to realize the best experience, these light scripts would then still need to be re-adjusted manually after the color grading to match the color-graded content. Since color grading is typically applied toward the end of the post-production process, if the light scripts are manually created or manually adjusted after color grading, the issue is that by this stage the people necessary for the creation of light scripts may already have moved on to new projects. Furthermore, multiple scripts would need to be at least partly manually created, one for each color grading.

SUMMARY OF THE INVENTION

It is a first object of the invention to provide a system for determining a light script, which can be used to determine a high-quality light script for a video item with limited human involvement.

It is a second object of the invention to provide a method of determining a light script, which can be used to determine a high-quality light script for a video item with limited human involvement.

In a first aspect of the invention, the system comprises at least one processor configured to obtain a first light script for at least part of a first version of a video item, obtain first image information extracted from said first version of said video item, obtain second image information extracted from at least part of a second version of said video item, determine image differences by comparing said first image information with said second image information, and determine a second light script for at least part of said second version of said video item by adjusting parameters of a plurality of light effects specified in said first light script based on said determined image differences. A light script determines when to create which light effect (e.g. which chromaticity and brightness settings on which light source) and is used to control the light sources.

The inventors have recognized that it is best to determine a first light script at least partly manually (thereby ensuring a high quality) before processing like color grading takes place, because at this stage the people necessary for the creation of light scripts have not moved on to new projects yet, but to fully automatically determine one or more further light scripts after the processing has taken place in order to limit human involvement in the light script creation process. In other words, the second light script is determined fully automatically from the first light script by comparing image information extracted from the pre-processed version (also referred to as the “master” version) of the video item with image information extracted from a post-processed version of the video item. The invention may be used in relation to processes of film post production other than color grading. For example, a production company (or a third party that is expert in this area, as is often seen with audio) might create a first light script for a first version of a video item in which compositing of CGI elements is not final.

Said image differences may comprise color differences and said at least one processor may be configured to adjust said parameters of said plurality of light effects by adjusting color parameters based on said determined color differences. Color differences may include chromaticity differences and/or brightness differences, for example. Alternatively or additionally, said image differences may comprise differences in image resolution (like differences in aspect ratio, e.g. due to 21:9 to 16:9 conversion), differences in shot duration (e.g. due to differences in frame rate) and/or differences in the number of represented dimensions (e.g. due to 2D to 3D or 3D to 2D conversion). The inventors have recognized that other types of video processing may also create discrepancies between the video content and the light script.

Said at least one processor may be configured to determine said color differences for each of said plurality of light effects by determining one or more pixels in a frame of said first version of said video item having a first color value set, said first color value set being similar or identical to a color value set specified in relation to said light effect, determining a second color value set from one or more color value sets of corresponding one or more pixels in a corresponding frame of said second version of said video item, and determining a difference between said first color value set and said second color value set. A color set may comprise a set of values representing red, green and blue respectively (RGB color space) or representing lightness and green-red and blue-yellow color components (CIELAB color space), for example. This first manner of determining color differences is beneficial if such a first color value set exists in the video frame(s) (to which the light effect relates) for each of the light effects defined in a light script, and is especially beneficial if at least one pixel can be found in the video frame(s) that has exactly the same color value set as specified in relation to a light effect.
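
By way of illustration only, a minimal Python sketch of this first manner follows; the NumPy frame representation, the per-channel tolerance and the averaging over matched pixel positions are assumptions, not taken from the patent text:

```python
import numpy as np

def color_shift_for_effect(frame_a, frame_b, effect_rgb, tolerance=8):
    """Estimate how a light effect's color changed between two versions.

    frame_a, frame_b: corresponding frames as HxWx3 uint8 RGB arrays.
    effect_rgb: the color value set specified in relation to the effect.
    tolerance: max per-channel distance for a pixel to count as similar
    (an assumption; identical pixels match with tolerance=0).
    Returns the per-channel shift, or None if no similar pixel exists.
    """
    target = np.array(effect_rgb, dtype=np.int16)
    close = np.all(np.abs(frame_a.astype(np.int16) - target) <= tolerance,
                   axis=-1)
    if not close.any():
        return None
    first_set = frame_a[close].mean(axis=0)    # first color value set
    second_set = frame_b[close].mean(axis=0)   # same pixels, second version
    return second_set - first_set              # difference to apply
```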

Said at least one processor may be configured to determine said color differences for each of said plurality of light effects by determining a first color value set from a plurality of color value sets of a plurality of pixels in a frame of said first version of said video item, determining a second color value set from a plurality of color value sets of a plurality of pixels in a corresponding frame of said second version of said video item, and determining a difference between said first color value set and said second color value set. This second manner of determining color differences is beneficial if each color (color value set) in the frame of the first version of the video has been changed in the same way or in a similar way.
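
A sketch of this second manner under the same assumptions, using the per-frame mean color as the color value set (the description later also mentions a (tri-)mean or median as alternatives):

```python
import numpy as np

def global_color_shift(frame_a, frame_b):
    """One color difference per frame pair, assuming every color in the
    first version was changed in the same or a similar way. The mean RGB
    of all pixels serves as the color value set here."""
    first_set = frame_a.reshape(-1, 3).mean(axis=0)
    second_set = frame_b.reshape(-1, 3).mean(axis=0)
    return second_set - first_set
```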

Said at least one processor may be configured to determine said color differences for each of said plurality of light effects and adjust a color parameter of each of said plurality of light effects by determining for each color value specified in said color parameter of said light effect a difference between a quantity of pixels having said color value in a frame of said first version of said video item and a quantity of pixels having said color value in a corresponding frame of said second version of said video item, determining an adjustment value for each of said color values based on said determined quantity difference, and adjusting said color parameter of said light effect based on said determined adjustment values.

This can be implemented using histograms, for example. Preferably, a light effect specifies three color values in RGB color space, i.e. an R value, a G value and a B value. This third manner of determining color differences assumes that a change to a certain color value in the frame of the first version of the video is reflected by a change in the quantity of pixels having that certain color value and works especially well if this is the case. For example, if the number of pixels having a certain amount of redness in a frame is reduced by 4%, a light effect specifying this amount of redness may be made 4% less red.
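
As a hedged sketch, the per-channel histograms and their deltas could be computed as follows; the 256-bin 8-bit layout matches the Fig. 8 example discussed later, while the function name and frame representation are illustrative:

```python
import numpy as np

def delta_histograms(frame_a, frame_b):
    """Per-channel delta histograms: for each 8-bit color value, the
    change in the quantity of pixels having that value between the
    corresponding frames of the two versions."""
    deltas = []
    for c in range(3):  # R, G, B channels
        hist_a, _ = np.histogram(frame_a[..., c], bins=256, range=(0, 256))
        hist_b, _ = np.histogram(frame_b[..., c], bins=256, range=(0, 256))
        deltas.append(hist_b.astype(int) - hist_a.astype(int))
    return deltas  # deltas[c][v] = quantity difference for value v
```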

Said at least one processor may be configured to generate a subset of said plurality of light effects for said first light script automatically from said first version of said video item using content analysis, generate a corresponding plurality of light effects for said second light script automatically from said second version of said video item using said content analysis, determine color differences between color parameters of said subset of said plurality of light effects and color parameters of said corresponding plurality of light effects, and adjust parameters of manually specified ones of said plurality of light effects specified in said first light script based on said determined color differences.

This fourth manner of determining color differences is beneficial if the creation of initial light scripts is partly automated. By performing the same automated method on both versions of the video item, the differences between the two resulting light scripts can be used to adjust manually specified light effects. The subset comprises fewer light effects than the plurality of light effects and thus fewer light effects than the first light script, but comprises at least one light effect, i.e. at least one light effect is generated automatically.
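
A rough sketch of this fourth manner, assuming a hypothetical script format in which each effect is a dict with a start time and an RGB color, and using the temporally nearest automatically generated effect to pick the delta (one heuristic among many; the text does not prescribe a mapping):

```python
def adjust_manual_effects(auto_first, auto_second, manual_effects):
    """Transfer the color deltas observed between two automatically
    generated scripts to the manually specified light effects.

    auto_first and auto_second are parallel, non-empty lists produced by
    the same content analysis on the two versions of the video item.
    """
    adjusted = []
    for eff in manual_effects:
        # Pick the automatically generated effect nearest in time.
        i = min(range(len(auto_first)),
                key=lambda k: abs(auto_first[k]["start"] - eff["start"]))
        delta = [b - a for a, b in zip(auto_first[i]["rgb"],
                                       auto_second[i]["rgb"])]
        new_rgb = tuple(min(255, max(0, v + d))
                        for v, d in zip(eff["rgb"], delta))
        adjusted.append({**eff, "rgb": new_rgb})
    return adjusted
```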

Said at least one processor may be configured to determine at least some of said color differences by comparing color information determined from a plurality of frames of said first version of said video item with color information determined from a corresponding plurality of frames of said second version of said video item. Instead of comparing color information determined from a single frame, color information may be determined from a plurality of frames. This is beneficial if a light effect has a duration that spans multiple frames and may, for example, be used for only those light effects that have a duration that spans multiple frames.

Said at least one processor may be configured to determine a region of said video item for at least one of said light effects from said first light script and determine an adjustment of at least one of said parameters of said at least one light effect based on color differences in only said determined region of said video item. This is beneficial if it is known to which (spatial) region of the video item a light effect relates, e.g. from which region’s color values the light effect’s parameters were derived. Whether a region or the entire frame is used or, if only a region is used, which region is used, may differ per light effect.
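
Restricting the comparison to a region could look like the following sketch; the (top, left, height, width) tuple is an assumed encoding of the region, e.g. read from the script's origin information:

```python
import numpy as np

def region_color_shift(frame_a, frame_b, region):
    """Color difference computed only within the region a light effect
    relates to; region = (top, left, height, width) in pixels."""
    top, left, h, w = region
    crop_a = frame_a[top:top + h, left:left + w].reshape(-1, 3)
    crop_b = frame_b[top:top + h, left:left + w].reshape(-1, 3)
    return crop_b.mean(axis=0) - crop_a.mean(axis=0)
```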

Said at least one processor may be configured to disable or omit one or more of said plurality of light effects in said second light script if a determined adjustment of a color parameter of said one or more light effects would result in an undesirable color value or an undesirable set of color values, wherein the undesirable color value or the undesirable set of color values may be specified by an author of the first light script or wherein the undesirable color value or an undesirable set of color values may be user-configurable. This may be used together with any of the previously described four manners of determining color differences, for example. This may be used to help ensure that the automatic process of adjusting the parameters of the light effects does not create undesired light effects.

Said at least one processor may be configured to obtain said first image information directly by extracting said first image information from said first version of said video item. Alternatively, said first light script may specify origin information indicating for each of said plurality of light effects how one or more parameters of said light effect were determined from said first version of said video item and said at least one processor may be configured to obtain said first image information from said light effect parameters and said origin information. The latter may be used when the first version (e.g. the “master” version) of the video item is not available. The origin information specifies how a light effect was created (i.e. how its parameters were derived), e.g. from which region(s), from which frame(s) and/or in which way. The origin information may comprise key image/frame information such as color palette. For light effects that were created from a certain region, this origin information also makes it possible to analyze only this region.

Said at least one processor may be configured to identify light effects in said first light script whose parameters should be maintained and include said identified light effects in said second light script without adjustments to their parameters. This allows the author of the first light script to exclude automatic adjustment of certain light effects that are less likely to be automatically adjustable (light effects that would no longer be in line with the author’s design when automatically adjusted), e.g. special light effects.
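
For concreteness, a light effect entry carrying origin information and a maintain flag might look like the following; all field names are hypothetical, as the patent does not define a script syntax:

```python
# A hypothetical light effect entry; every field name here is illustrative.
effect = {
    "start": 94.2,              # seconds into the video item
    "duration": 1.6,
    "rgb": (112, 108, 108),     # color parameter of the light effect
    "maintain": False,          # True: copy into the second light script
                                # without adjusting its parameters
    "origin": {                 # how the parameters were derived
        "frames": [2355, 2356],           # source frame(s)
        "region": (40, 120, 180, 90),     # top, left, height, width
        "palette": [(112, 108, 108)],     # key image/frame color info
    },
}
```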

In a second aspect of the invention, the method comprises obtaining a first light script for at least part of a first version of a video item, obtaining first image information extracted from said first version of said video item, obtaining second image information extracted from at least part of a second version of said video item, determining image differences by comparing said first image information with said second image information, and determining a second light script for at least part of said second version of said video item by adjusting parameters of a plurality of light effects specified in said first light script based on said determined image differences. Said method may be performed by software running on a programmable device. This software may be provided as a computer program product.

Moreover, a computer program for carrying out the methods described herein, as well as a non-transitory computer-readable storage medium storing the computer program are provided. A computer program may, for example, be downloaded by or uploaded to an existing device or be stored upon manufacturing of these systems.

A non-transitory computer-readable storage medium stores at least one software code portion, the software code portion, when executed or processed by a computer, being configured to perform executable operations comprising: obtaining a first light script for at least part of a first version of a video item, obtaining first image information extracted from said first version of said video item, obtaining second image information extracted from at least part of a second version of said video item, determining image differences by comparing said first image information with said second image information, and determining a second light script for at least part of said second version of said video item by adjusting parameters of a plurality of light effects specified in said first light script based on said determined image differences.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a device, a method or a computer program product.

Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit", "module" or "system." Functions described in this disclosure may be implemented as an algorithm executed by a processor/microprocessor of a computer. Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied, e.g., stored, thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a computer readable storage medium may include, but are not limited to, the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of the present invention, a computer readable storage medium may be any tangible medium that can contain, or store, a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber, cable, RF, etc., or any suitable combination of the foregoing. Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java(TM), Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the present invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor, in particular a microprocessor or a central processing unit (CPU), of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer, other programmable data processing apparatus, or other devices create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of devices, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects of the invention are apparent from and will be further elucidated, by way of example, with reference to the drawings, in which:

Fig. 1 is a block diagram of an embodiment of the system of the invention;

Fig. 2 is a flow diagram of a first embodiment of the method of the invention;

Fig. 3 represents an example image before color grading;

Fig. 4 represents the example image of Fig. 3 after color grading;

Fig. 5 is a flow diagram of a second embodiment of the method of the invention;

Fig. 6 is a flow diagram of a third embodiment of the method of the invention;

Fig. 7 is a flow diagram of a fourth embodiment of the method of the invention;

Fig. 8 shows an example of a histogram of deltas between a pre-grade color histogram and a post-grade color histogram;

Fig. 9 is a flow diagram of a fifth embodiment of the method of the invention; and

Fig. 10 is a block diagram of an exemplary data processing system for performing the method of the invention.

Corresponding elements in the drawings are denoted by the same reference numeral.

DETAILED DESCRIPTION OF THE EMBODIMENTS

Fig.1 shows a first embodiment of the system of the invention. In the embodiment of Fig.1, the system comprises a single electronic device, a computer 1. In an alternative embodiment, the system comprises multiple electronic devices. The computer 1 comprises a processor 3, a transceiver 5 and storage means 7. The processor 3 is configured to obtain a first light script 13 for at least part of a first version 11 of a video item, e.g. to receive the first light script using transceiver 5 and store it in storage means 7, and obtain first image information extracted from the first version 11 of the video item. The video item may be a movie or TV program, for example.

The processor 3 may be configured to obtain the first image information directly by extracting the first image information from the first version 11 of the video item. Alternatively, the processor 3 may be configured to obtain the first image information from the light effect parameters and origin information specified in the first light script 13. The latter may be used when the first version 11 of the video item is not available at the computer 1. The origin information should indicate for each of the plurality of light effects how one or more parameters of the light effect were determined from the first version 11 of the video item, e.g. indicate a region from which the parameters of the light effect were derived and/or comprise key image/frame information such as color palette.

The processor 3 is further configured to obtain second image information extracted from at least part of a second version 17 of the video item and determine image differences by comparing the first image information with the second image information.

The image differences may comprise differences in color, differences in image resolution, differences in shot duration and/or differences in the number of represented dimensions, for example. The processor 3 may be configured to use the transceiver 5 to receive the second version 17 of the video item or the second image information extracted from it and store this video item or the second image information in storage means 7.

The processor 3 is further configured to determine a second light script 19 for at least part of the second version 17 of the video item by adjusting parameters of a plurality of light effects specified in the first light script 13 based on the determined image differences. The processor 3 may be configured to store the second light script 19 on storage means 7.

The processor 3 may be configured to use the transceiver 5 to transmit the second light script 19 to another system or to transfer it to a removable storage medium. This allows the second light script 19 to be made available to light rendering systems and applications.

In the embodiment of the computer 1 shown in Fig.1, the computer 1 comprises one processor 3. In an alternative embodiment, the computer 1 comprises multiple processors. The processor 3 of the computer 1 may be a general-purpose processor, e.g. from Intel or AMD, or an application-specific processor. The processor 3 of the computer 1 may run a Windows or Unix-based operating system, for example. The storage means 7 may comprise one or more memory units. The storage means 7 may comprise one or more hard disks and/or solid-state memory, for example. The storage means 7 may be used to store an operating system, applications, application data and content (e.g. the light scripts and video item versions), for example.

The transceiver 5 may use one or more wired and/or wireless communication technologies to communicate with other systems in a local area network or over the Internet, for example. In an alternative embodiment, multiple transceivers are used instead of a single transceiver. In the embodiment shown in Fig.1, a receiver and a transmitter have been combined into a transceiver 5. In an alternative embodiment, one or more separate receiver components and one or more separate transmitter components are used. The computer 1 may comprise other components typical for a computer such as a power connector and a display. The invention may be implemented using a computer program running on one or more processors.

A first embodiment of the method of the invention is shown in Fig.2. A step 101 comprises obtaining a first light script for at least part of a first version of a video item.

A step 103 comprises obtaining first image information extracted from the first version of the video item. In this embodiment, step 103 comprises extracting the first image information from the first version of the video item. In an alternative embodiment, step 103 comprises receiving the first image information from another system. The first image information may be stored temporarily or permanently in a memory. The first image information may comprise an average color or more complex color profile per frame, for example.
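
A sketch of such extraction using OpenCV, computing one average RGB color per frame (a more complex color profile could be stored instead); the function name is illustrative:

```python
import cv2  # pip install opencv-python
import numpy as np

def extract_image_information(path):
    """Return one average color per frame of the given video file."""
    cap = cv2.VideoCapture(path)
    info = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes frames as BGR; reverse the channel means to RGB.
        info.append(frame.reshape(-1, 3).mean(axis=0)[::-1])
    cap.release()
    return np.array(info)  # shape: (number_of_frames, 3)
```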

A step 105 comprises obtaining second image information extracted from at least part of a second version of the video item. In this first embodiment, step 105 comprises extracting the second image information from the second version of the video item in the same way the first image information was extracted from the first version of the video item in step 103. In an alternative embodiment, step 105 comprises receiving the second image information from another system. The second image information may be stored temporarily or permanently in a memory. The second image information may comprise an average color or more complex color profile per frame, for example. In this first embodiment, step 103 is performed after step 101 and step 105 is performed after step 103. In an alternative embodiment, these three steps are performed in a different order (e.g. first step 105, then step 103 and then step 101) or some or all of these steps are performed in parallel.

A step 107 comprises determining image differences by comparing the first image information with the second image information. Frames as captured by the camera may be uniquely identified. This could be used to ensure that the right frames are compared. In this embodiment, step 107 comprises reading first image information stored in step 103 for all frames and second image information stored in step 105 for all frames from one or more memories and storing the determined image differences in the one or more memories. In an alternative embodiment, steps 103, 105 and 107 are performed per frame and this is repeated for all frames, for example. For instance, first image information and second image information may be extracted and compared for a first frame, then for a second frame, and so forth.

A step 109 comprises determining a second light script for at least part of the second version of the video item by adjusting parameters of a plurality of light effects specified in the first light script based on the determined image differences. In this embodiment, step 109 comprises reading the first light script from a memory, adjusting the parameters and storing the second light script in the same or to a different memory. In case of the former, the first light script may be overwritten by the second light script. In an alternative embodiment, the second light script is determined on the fly, i.e. while the video item is playing and light effects from the second light script are being rendered. In this case, the second script is only “virtually” created in a memory and light effects of the second light script are rendered immediately. This is especially beneficial for rendering devices with sufficient processing power to do the comparison and immediate playback in real-time.

The image differences may comprise color differences. Color differences may not only be the result of color grading, but also be the result of conversion from an HDR (High Dynamic Range, typically 10 bits per RGB color) version to a non-HDR version (typically 8 bits per RGB color). Figs.3 and 4 are used to help explain embodiments of the method of the invention in which color parameters are adjusted based on determined color differences.

Fig.3 represents an example image 31 before color grading. The image (frame) 31 shows a lighthouse with a first part having a color 35 and a second part having a color 36. The lighthouse emits a light beam with a color 34. The lighthouse is shown in region 39 of the image 31. The image further shows a cloud having a color 33 and a land area having a color 37. Fig.4 represents the image (frame) 41 that is the result of applying a certain color grading to image 31. Colors 33-37 have been changed into colors 43-47, respectively. Colors 43-47 may be bluer than colors 33-37 (i.e. the color grading involved a shift to the blue spectrum), for example.

A second embodiment of the method of the invention is shown in Fig.5. In this second embodiment, step 103 comprises a step 111, step 105 comprises a step 113, step 107 comprises a step 115, and step 109 comprises a step 117. Step 111 comprises determining one or more pixels in a frame of the first version of the video item having a first color value set, e.g. color 36 in Fig.3. The first color value set (e.g. R, G and B values in RGB color space) is specified in relation to a first light effect. For example, the first light effect may specify that the color (112, 108, 108) is rendered for a certain duration.

Step 113 comprises determining a second color value set, e.g. color 46 in Fig.4, from one or more color value sets of corresponding one or more pixels in a corresponding frame of the second version of the video item. Step 115 comprises determining a difference between the first color value set and the second color value set. Step 117 comprises applying the determined color difference to the color parameter of the first light effect. Steps 111-117 are repeated for other light effects in the first light script.

A third embodiment of the method of the invention is shown in Fig.6. In this third embodiment, step 103 comprises a step 121, step 105 comprises a step 123, step 107 comprises a step 125, and step 109 comprises a step 127. Step 121 comprises determining a first color value set (i.e. part of the first image information) from a plurality of color value sets of a plurality of pixels in a first frame of the first version of the video item. This plurality of pixels may comprise all pixels in the first frame, for example. Step 123 comprises determining a second color value set (i.e. the second image information) from a plurality of color value sets of a plurality of pixels in a corresponding frame of the second version of the video item.

Both pluralities of pixels are preferably the same, i.e. all pixels in the frame or pixels in a certain region of the frame. The first and second color value sets may each comprise R, G and B values (i.e. three values) in RGB color space, for example. The first color value set and the second color value set may be, for example, the average or (tri-)mean or median color for one or multiple frames (or a region thereof) of the first version and the second version, respectively. Step 125 comprises determining a difference between the first color value set and the second color value set. Step 127 comprises applying this difference to the color parameters of all light effects relating to this frame. Steps 121-127 are repeated for other frames (or other sets of frames), e.g. other frames to which light effects are synchronized.

A fourth embodiment of the method of the invention is shown in Fig.7. In this fourth embodiment, step 103 comprises a step 131, step 105 comprises a step 133, step 107 comprises steps 135 and 137 and step 109 comprises a step 139. Step 131 comprises determining a quantity of pixels having the color value in a frame of the first version of the video item. Step 133 comprises determining a quantity of pixels having the color value in a corresponding frame of the second version of the video item.

Step 135 comprises determining for each color value specified in the color parameter of a first light effect a difference between the quantity of pixels having the color value in a frame of the first version of the video item and the quantity of pixels having the color value in a corresponding frame of the second version of the video item. Step 137 comprises determining an adjustment value for each of the color values based on the determined quantity difference. Step 139 comprises adjusting the color parameter of the first light effect based on the determined adjustment values.

Steps 131-139 are repeated for other light effects in the first light script. In an alternative embodiment, steps 131 and 133 are performed first for all light effects and then steps 135-139 are performed for all light effects, for example. In an alternative embodiment, steps 131-135 are performed first for all light effects and then steps 137 and 139 are performed for all light effects, for example.

This fourth embodiment is illustrated with the help of Fig.8. In the example of Fig.8, histograms are determined per RGB color channel: color histograms 71 for the Red channel, color histograms 73 for the Green channel and color histograms 75 for the Blue channel. Color histogram 61 is the pre-grade histogram for the Red channel. Color histogram 63 is the post-grade histogram for the Red channel. Color histogram 65 is the delta histogram for the Red channel. Color histogram 66 is the delta histogram for the Green channel. Color histogram 67 is the delta histogram for the Blue channel. Each channel can have a value from 0 to 255 (8-bit color). Each histogram indicates a number of pixels per color value. For example, a frame may comprise 2 million pixels and 10,000 pixels may have the color value 20 for Red, i.e. in the Red channel.

The light effect in the first light script may specify that a color (112, 108, 108) should be rendered for a certain duration. From the delta histogram 65, it can be seen that for a color value 112 for Red, 10 fewer pixels have this color in the second version of the video item than in the first version of the video item. From the delta histogram 66, it can be seen that for a color value 108 for Green, 22 more pixels have this color in the second version of the video item than in the first version of the video item.

From the delta histogram 67, it can be seen that for a color value 108 for Blue, 7 fewer pixels have this color in the second version of the video item than in the first version of the video item. Therefore, the color (112, 108, 108) may be changed into (112-10, 108+22, 108-7), i.e. (102, 130, 101). Thus, the delta histograms 65-67 are used as lookup tables for color modification. In the example of Fig.8, the delta is absolute (i.e. 10 fewer pixels results in the color value being reduced by 10), but using a proportionate delta may lead to better results, e.g. 2% fewer pixels could result in the color value being reduced by 2%.
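
The lookup-table use of the delta histograms, including the proportionate variant suggested above, might be sketched as follows; with proportionate=False and the Fig.8 counts it reproduces the (112, 108, 108) to (102, 130, 101) example:

```python
def adjust_color(rgb, hist_first, hist_second, proportionate=False):
    """Adjust one light effect color using per-channel delta histograms
    as lookup tables.

    hist_first[c][v], hist_second[c][v]: number of pixels with value v
    (0-255) in channel c of the corresponding frames of the first and
    second version of the video item.
    """
    adjusted = []
    for c, v in enumerate(rgb):
        delta = int(hist_second[c][v]) - int(hist_first[c][v])
        if proportionate and hist_first[c][v] > 0:
            # e.g. 2% fewer pixels with this value -> value reduced by 2%
            new_v = v * (1 + delta / hist_first[c][v])
        else:
            new_v = v + delta  # absolute delta, as in the Fig. 8 example
        adjusted.append(int(round(min(255, max(0, new_v)))))
    return tuple(adjusted)
```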

A fifth embodiment of the method of the invention is shown in Fig.9. In this fifth embodiment, the steps used for creating the first light script are shown as well. A step 151 comprises generating a subset of the plurality of light effects for the first light script (i.e. not all of the light effects in the first light script, but at least one) automatically from the first version of the video item using content analysis. A user may specify parameters which this automatic process uses, e.g. regions to be analyzed or not to be analyzed, or these parameters may be determined automatically.

Step 153 comprises allowing a user to edit automatically created light effects and/or allowing a user to add light effects, e.g. special light effects. While ambience light effects represent the ambience of a scene or shot and are usually based on colors of multiple frames or of a region in multiple frames, special light effects are associated with an event, which may take place inside or outside the image. A special effect may be derived from one or more frames, a region in one or more frames or may be determined manually, e.g. based on audio.

Next, step 101 as shown in Fig.2 is performed. In step 101, the light script created in steps 151 and 153, i.e. the first light script, is obtained. Step 154 comprises extracting color parameters of the subset of the plurality of light effects from the first light script. The first light script may identify this subset, i.e. may identify which light effects have been generated automatically and/or which light effects have been generated or edited manually. In an alternative embodiment, these color parameters are obtained by analyzing the first version of the video item again in the same way as the first version of the video item was analyzed in step 151.

Step 105 comprises a step 155 and a step 156. Step 155 comprises generating a corresponding plurality of light effects for the second light script automatically from the second version of the video item using the same content analysis (i.e. in the same way as in step 151). Step 156 comprises identifying the color parameters of these corresponding plurality of light effects. Step 107 comprises a step 157. Step 157 comprises determining color differences between the color parameters of the subset of the plurality of light effects and the color parameters of the corresponding plurality of light effects.

Step 109 comprises a step 159. Step 159 comprises adjusting parameters of manually specified ones of the plurality of light effects specified in the first light script based on the determined color differences. As a result, the manually specified parameters are adjusted similarly as the automatically determined parameters. The second light script combines the results of steps 155 and 159.

Although the previously described embodiments primarily describe comparing color information from single frames, at least some of the color differences may be determined by comparing color information determined from a plurality of frames of the first version of the video item with color information determined from a corresponding plurality of frames of the second version of the video item.

Although the previously described embodiments primarily describe comparing color information from entire frames, a region of the video item may be determined for at least one of the light effects from the first light script and an adjustment of at least one of the parameters of the at least one light effect may be determined based on color differences in only the determined region of the video item. An example of such a region is region 39 of Figs.3 and 4. The region may be determined from origin information specified in the first light script, for example. Furthermore, for special (e.g. non-ambience) light effects, per-frame analysis can be done to identify which specific areas of the frame were used to extract color values; the same regions in the post-grading frames can then be analyzed and the colors in the light effect replaced accordingly.

The above-described adjustments may be made for each light effect specified in the first light script. However, not all light effects should necessarily be adjusted. As a first example, one or more of the plurality of light effects may be omitted or disabled in the second light script if a determined adjustment of a color parameter of the one or more light effects would result in an undesirable color value or an undesirable set of color values. Which color values are undesirable may be specified by the author of the first light script or may be user-configurable. As a second example, light effects in the first light script whose parameters should be maintained (e.g. as desired by the author of the first light script) may be identified (e.g. in the first light script) and the identified light effects may be included in the second light script without adjustments to their parameters. This is especially useful for special effects that were not determined based on color (e.g. were determined based on audio).
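
The first example could be sketched as follows; the per-channel distance threshold and the list-of-colors encoding of the undesirable set are assumptions:

```python
def filter_undesirable(effects, undesirable, threshold=16):
    """Omit light effects whose adjusted color comes too close to an
    undesirable color. The undesirable colors may be specified by the
    author of the first light script or be user-configurable; the
    distance threshold is an illustrative choice."""
    def is_undesirable(rgb):
        return any(max(abs(a - b) for a, b in zip(rgb, u)) < threshold
                   for u in undesirable)
    return [e for e in effects if not is_undesirable(e["rgb"])]
```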

Fig.10 depicts a block diagram illustrating an exemplary data processing system that may perform the method as described with reference to Figs. 2, 5, 6, 7 and 9.

As shown in Fig.10, the data processing system 300 may include at least one processor 302 coupled to memory elements 304 through a system bus 306. As such, the data processing system may store program code within memory elements 304. Further, the processor 302 may execute the program code accessed from the memory elements 304 via a system bus 306. In one aspect, the data processing system may be implemented as a computer that is suitable for storing and/or executing program code. It should be appreciated, however, that the data processing system 300 may be implemented in the form of any system including a processor and a memory that is capable of performing the functions described within this specification.

The memory elements 304 may include one or more physical memory devices such as, for example, local memory 308 and one or more bulk storage devices 310. The local memory may refer to random access memory or other non-persistent memory device(s) generally used during actual execution of the program code. A bulk storage device may be implemented as a hard drive or other persistent data storage device. The processing system 300 may also include one or more cache memories (not shown) that provide temporary storage of at least some program code in order to reduce the number of times program code must be retrieved from the bulk storage device 310 during execution. The processing system 300 may also be able to use memory elements of another processing system, e.g. if the processing system 300 is part of a cloud-computing platform.

Input/output (I/O) devices depicted as an input device 312 and an output device 314 optionally can be coupled to the data processing system. Examples of input devices may include, but are not limited to, a keyboard, a pointing device such as a mouse, a microphone (e.g. for voice and/or speech recognition), or the like. Examples of output devices may include, but are not limited to, a monitor or a display, speakers, or the like.

Input and/or output devices may be coupled to the data processing system either directly or through intervening I/O controllers.

In an embodiment, the input and the output devices may be implemented as a combined input/output device (illustrated in Fig.10 with a dashed line surrounding the input device 312 and the output device 314). An example of such a combined device is a touch sensitive display, also sometimes referred to as a “touch screen display” or simply “touch screen”. In such an embodiment, input to the device may be provided by a movement of a physical object, such as e.g. a stylus or a finger of a user, on or near the touch screen display.

A network adapter 316 may also be coupled to the data processing system to enable it to become coupled to other systems, computer systems, remote network devices, and/or remote storage devices through intervening private or public networks. The network adapter may comprise a data receiver for receiving data that is transmitted by said systems, devices and/or networks to the data processing system 300, and a data transmitter for transmitting data from the data processing system 300 to said systems, devices and/or networks. Modems, cable modems, and Ethernet cards are examples of different types of network adapter that may be used with the data processing system 300.

As pictured in Fig.10, the memory elements 304 may store an application 318. In various embodiments, the application 318 may be stored in the local memory 308, the one or more bulk storage devices 310, or separate from the local memory and the bulk storage devices. It should be appreciated that the data processing system 300 may further execute an operating system (not shown in Fig.10) that can facilitate execution of the application 318. The application 318, being implemented in the form of executable program code, can be executed by the data processing system 300, e.g., by the processor 302. Responsive to executing the application, the data processing system 300 may be configured to perform one or more operations or method steps described herein.

Various embodiments of the invention may be implemented as a program product for use with a computer system, where the program(s) of the program product define functions of the embodiments (including the methods described herein). In one embodiment, the program(s) can be contained on a variety of non-transitory computer-readable storage media, where, as used herein, the expression “non-transitory computer readable storage media” comprises all computer-readable media, with the sole exception being a transitory, propagating signal. In another embodiment, the program(s) can be contained on a variety of transitory computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., flash memory, floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. The computer program may be run on the processor 302 described herein.

The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.

The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of embodiments of the present invention has been presented for purposes of illustration, but is not intended to be exhaustive or limited to the implementations in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the present invention. The embodiments were chosen and described in order to best explain the principles and some practical applications of the present invention, and to enable others of ordinary skill in the art to understand the present invention for various embodiments with various modifications as are suited to the particular use contemplated.