Title:
METHOD AND DEVICE FOR PROCESSING AN IMAGE ACCORDING TO LIGHTING INFORMATION
Document Type and Number:
WIPO Patent Application WO/2018/234195
Kind Code:
A1
Abstract:
A method and device for processing an image. To reach that aim, first information representative of at least a part of the lighting of a scene is obtained from each device of a plurality of devices (20, 101, 102); second information representative of a spatial model of the lighting of at least an area encompassing a display (11) located in the scene is determined according to the obtained first information and according to a pose of the display (11) with regard to the plurality of devices (20, 101, 102); and an image to be displayed on the display (11) is processed according to the determined second information.

Inventors:
ROBERT PHILIPPE (FR)
DUCHENE SYLVAIN (FR)
STAUDER JURGEN (FR)
Application Number:
PCT/EP2018/066017
Publication Date:
December 27, 2018
Filing Date:
June 15, 2018
Assignee:
INTERDIGITAL CE PATENT HOLDINGS (FR)
International Classes:
G09G5/00
Foreign References:
US20100079426A12010-04-01
US20120133790A12012-05-31
US20130321618A12013-12-05
Other References:
None
Attorney, Agent or Firm:
MERLET, Hugues et al. (FR)
Claims:
CLAIMS

1. A method of processing an image, the method comprising:

- obtaining (61) first information representative of at least a part of the lighting of a scene (1) from each device of a set of at least one device (20; 101 to 103), said set comprising at least a light source, the obtaining comprising receiving lighting information from said at least a light source;

- determining (62) second information representative of a spatial model of the lighting of at least an area encompassing a display (11) located in said scene (1) according to the obtained first information and according to a pose of said display (11) with regard to each device of said set (20; 101 to 103);

- processing (63) an image to be displayed on said display (11) according to the determined second information.

2. The method according to claim 1, wherein the determining of the second information is further according to a distance between said display (11) and each device of said set.

3. The method according to claim 1 or 2, wherein said determining of the second information is further according to third information representative of at least a type associated with each device (20; 101 to 103) of said set, the at least a type belonging to a group of types comprising:

- light emitter;

- light receiver; and

- light sensor device.

4. The method according to one of claims 1 to 3, wherein the determining of said second information is further according to the pose of said display (11) with regard to a determined point of view (1001, 1002).

5. The method according to one of claims 1 to 4, wherein said processing comprises modifying at least a parameter of spatial areas of said image according to the location of said spatial areas in said image.

6. The method according to claim 5, wherein said at least a parameter belongs to a group of parameters comprising:

- luminance;

- chrominance;

- saturation;

- contrast; and

- hue.

7. The method according to one of claims 1 to 6, wherein said display (11) corresponds to a device of said set.

8. An apparatus (7) configured to process an image, the apparatus comprising a memory (77, 721) associated with a processor (71, 720) configured to:

- obtain first information representative of at least a part of the lighting of a scene from each device of a set of at least one device, said set comprising at least a light source, the obtaining comprising receiving lighting information from said at least a light source;

- determine second information representative of a spatial model of the lighting of at least an area encompassing a display located in said scene according to the obtained first information and according to a pose of said display with regard to each device of said set;

- process an image to be displayed on said display according to the determined second information.

9. The apparatus according to claim 8, wherein the processor (71, 720) is further configured to determine the second information according to a distance between said display and each device of said set.

10. The apparatus according to claim 8 or 9, wherein the processor (71, 720) is further configured to determine the second information according to third information representative of at least a type associated with each device of said set, the at least a type belonging to a group of types comprising:

- light emitter;

- light receiver; and

- light sensor device.

11. The apparatus according to one of claims 8 to 10, wherein the processor (71, 720) is further configured to determine said second information according to the pose of said display with regard to a determined point of view.

12. The apparatus according to one of claims 8 to 11, wherein the processor (71, 720) is further configured to process said image by modifying at least a parameter of spatial areas of said image according to the location of said areas in said image.

13. The apparatus according to claim 12, wherein said at least a parameter belongs to a group of parameters comprising:

- luminance;

- chrominance;

- saturation;

- contrast; and

- hue.

14. The apparatus according to one of claims 8 to 13, wherein said display corresponds to a device of said set.

15. A non-transitory processor readable medium having stored therein instructions for causing a processor to perform the method according to one of claims 1 to 7.

Description:
METHOD AND DEVICE FOR PROCESSING AN IMAGE ACCORDING TO LIGHTING INFORMATION

1. Technical field

The present disclosure relates to the domain of image processing, for example in the context of adapting an image displayed on a display device to the lighting environment of the display device, or in the context of augmented-reality content displayed on a display device.

2. Background

This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, these statements are to be read in this light, and not as admissions of prior art.

Scenes with artificial illumination, such as indoor environments, may have more than one source of illumination, which results in a complex lighting environment. This lighting environment may disturb a user watching a screen surface, such as that of a television set or a tablet, due to the screen's interaction with the lighting environment. Typically, light interactions with object surfaces include diffuse reflections, specular reflections and cast shadows.

Diffuse reflections occur at surfaces where the incoming light is reflected equally in all directions. Specular reflections, as opposed to diffuse reflections, occur at glossy surfaces of objects, where substantial amounts of light are reflected toward the user. Specular reflections cause the human visual system to lower its sensitivity, so that details of an image displayed on a screen surface are less visible to the user. Similarly, in mixed lighting conditions the hues of the light sources might differ, and diffuse reflections will then produce reflections of different hues on the screen surface that degrade the user experience. In both cases, the visual quality is reduced.

Cast shadows occur if a first object hinders light of a light source from reaching a second object, e.g. the screen of a display device. Cast shadows on a screen surface are often much darker than the surrounding areas, leading to a fading out of image contrast outside the cast shadow while preserving contrast within the cast shadow.

3. Summary

References in the specification to "one embodiment", "an embodiment", "an example embodiment" or "a particular embodiment" indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.

The present disclosure relates to a method of processing an image, the method comprising:

- obtaining first information representative of at least a part of the lighting of a scene from each device of a plurality of devices;

- determining second information representative of a spatial model of the lighting of at least an area encompassing a display located in the scene according to the obtained first information and according to a pose of the display with regard to the plurality of devices;

- processing an image to be displayed on the display according to the determined second information.

According to a characteristic, the determining of the second information is further according to a distance between the display and each device of the plurality of devices.

According to a specific characteristic, the determining of the second information is further according to third information representative of at least a type associated with each device, the at least a type belonging to a group of types comprising:

- light emitter;

- light receiver; and

- light sensor device.

According to another characteristic, the determining of the second information is further according to the pose of the display with regard to a determined point of view.

According to a particular characteristic, the processing comprises modifying at least a parameter of spatial areas of the image according to the location of the spatial areas in the image.

According to another characteristic, the at least a parameter belongs to a group of parameters comprising:

- luminance;

- chrominance;

- saturation;

- contrast; and

- hue.

According to a specific characteristic, the display corresponds to a device of the plurality of devices.

The present disclosure also relates to a device configured to perform the abovementioned method of processing an image. To reach that aim, the device comprises a memory associated with a processor configured to:

- obtain first information representative of at least a part of the lighting of a scene from each device of a plurality of devices;

- determine second information representative of a spatial model of the lighting of at least an area encompassing a display located in the scene according to the obtained first information and according to a pose of the display with regard to the plurality of devices;

- process an image to be displayed on the display according to the determined second information.

The present disclosure also relates to a device for processing an image, the device comprising:

- means for obtaining first information representative of at least a part of the lighting of a scene from each device of a plurality of devices;

- means for determining second information representative of a spatial model of the lighting of at least an area encompassing a display located in the scene according to the obtained first information and according to a pose of the display with regard to the plurality of devices;

- means for processing an image to be displayed on the display according to the determined second information.

The present disclosure also relates to a computer program product comprising instructions of program code for executing, by at least one processor, the abovementioned method of processing an image, when the program is executed on a computer.

The present disclosure also relates to a (non-transitory) processor readable medium having stored therein instructions for causing a processor to perform at least the abovementioned method of processing an image.

4. List of figures

The present disclosure will be better understood, and other specific features and advantages will emerge upon reading the following description, the description making reference to the annexed drawings wherein:

- figure 1 shows an example of a scene with a display device and a plurality of light sources, in accordance with an example of the present principles;

- figure 2 shows the exchange of lighting information between the display device and at least a part of the light sources of the scene of figure 1, in accordance with an example of the present principles;

- figure 3 shows the exchange of lighting information between the display device and at least a part of the light sources of the scene of figure 1 via a remote apparatus, in accordance with an example of the present principles;

- figure 4 shows a process to determine a model of at least a part of the lighting environment of the scene of figure 1, in accordance with an example of the present principles;

- figure 5 shows a diagram of a light source of figure 1, in accordance with an example of the present principles;

- figure 6 shows a method of processing an image displayed on the display device of the scene of figure 1, in accordance with an example of the present principles; and

- figure 7 diagrammatically shows the structure of an apparatus adapted to implement the method of figure 6 and/or the process of figure 4, in accordance with an example of the present principles.

5. Detailed description of embodiments

The subject matter is now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. It may be evident, however, that such embodiments can be practiced without these specific details.

The present principles will be described in reference to a particular embodiment of a method for processing an image (and the apparatus configured/adapted to implement the method). In an environment comprising one or more light sources, first information representative of the lighting of the environment is obtained, for example received, from each light source of at least a part of the light sources. The first information corresponds for example to the intensity of the light emitted by a light source and/or the location of the light source and/or the color of the light emitted by the light source. Second information, representative of the spatial modelling of the lighting of an area surrounding and/or comprising a display screen of the environment, is determined from the first information and from a pose of the display screen with regard to the light source(s). An image displayed on the display screen may then be processed using the second information.

Establishing a model of the lighting of an area that surrounds and/or comprises a display screen, and processing the image(s) displayed on the display screen, makes it possible to increase the quality of the displayed image(s), for example by taking into account the reflections induced by the light sources lighting the display screen, the shadows cast on the display screen, or the variation of lighting over the surface of the display screen. Processing the image(s) displayed on the display screen may also take display-screen-technology-dependent parameters into account. For example, if the display screen technology is Liquid Crystal Display (LCD) with localized backlighting, the processing may include computing both the control of the LCD panel and the control of the localized backlighting layer.
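
As an illustration only, such a backlight/panel decomposition might be sketched as follows in Python with NumPy; the zone size, the per-zone maximum heuristic and the function name are assumptions made for this sketch, not elements of the disclosure.

import numpy as np

def split_backlight_and_lcd(image, block=32, eps=1e-4):
    """Toy decomposition for an LCD with localized backlighting:
    drive each backlight zone with the maximum luminance of its
    block, then set the LCD transmittance so that backlight * lcd
    approximates the target image. Assumes the image height and
    width are multiples of `block`."""
    luma = image.mean(axis=2) if image.ndim == 3 else image
    h, w = luma.shape
    zones = luma.reshape(h // block, block, w // block, block).max(axis=(1, 3))
    # upsample the zone values back to pixel resolution (nearest neighbour)
    backlight = np.repeat(np.repeat(zones, block, axis=0), block, axis=1)
    if image.ndim == 3:
        backlight = backlight[..., None]
    lcd = np.clip(image / (backlight + eps), 0.0, 1.0)
    return backlight, lcd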

Figure 1 shows a scene 1 comprising a plurality of display devices 10, 11 and 12 and a plurality of light sources 101 to 103, according to a particular and non-limiting embodiment of the present principles.

The scene 1 is an indoor environment according to the example of figure 1. According to this example, the light sources 101, 102 and 103 are of different natures. The light sources 101 correspond to point light sources, for example spots. The light sources 102 correspond to area light sources, for example neon or fluorescent lights. The light source 103 also corresponds to an area light source, but with lighting that is more diffuse than the lighting of the light sources 102. The display devices 10 to 12 may also be considered as light sources as they emit light while displaying image(s). While not shown, the scene 1 may comprise further light sources, for example windows bringing outdoor light (e.g. from the sun or street lamp(s)) or doors.

The display devices 10 to 12 are of different natures in the example of figure 1. The display device 11 corresponds for example to a tablet, and the display devices 10 and 12 each correspond to a screen such as a television screen, for example an LCD (Liquid Crystal Display) screen, an OLED (Organic Light-Emitting Diode) screen or a QLED (Quantum-dot Light-Emitting Diode) screen. The display device 11 lies on a table, and the screens 10 and 12 are each arranged on a different wall of the room of the scene 1.

The scene 1 further comprises objects, such as a table, chairs 1001, 1002 or cups, that may be considered as indirect light sources (as they may reflect a part of the light they receive from the light sources 101 to 103) and that may cast shadows on part(s) of the display devices 10 to 12. The chairs 1001 and 1002 provide two examples of different points of view for the image(s) or video content(s) displayed on the display devices 10 to 12.

The number and the nature of the light sources are not limited to the example of figure 1, nor are the number and the nature of the display devices. For example, the scene 1 may comprise only one display device, or any number of display devices of any nature, for example a screen onto which an image is projected by a video projector.

Figure 2 shows the obtaining of lighting information, called first information, by a display device corresponding to the tablet 11 of the scene 1, according to a particular and non-limiting embodiment of the present principles.

According to the example of figure 2, the display device 11 receives the first information from different light sources 101 and 102 and from a device 20 corresponding for example to a webcam or to a light sensor. The light sensor corresponds to a photosensor or to an array of photosensors, a photosensor being for example a photoresistor, a photodiode or a phototransistor, possibly with a color filter in front in order to be spectrally selective. A webcam or any image acquisition device comprises a photosensor and a color filter array that acquire information about at least a part of the lighting of an environment, e.g. the scene 1. Each light source 101, 102 may be a wireless connected device that transmits light information (the so-called first information) wirelessly to the tablet 11. The transmission of the first information may be based on WiFi® (IEEE 802.11-2016, for example), on the ZigBee Light Link protocol that is part of the ZigBee 3.0 standard, or on the Z-Wave protocol, for example. The device 20 may also be a wireless connected device that transmits light information to the tablet, the light information being determined by the photosensor(s) of the device. The device may for example be used to measure outdoor light received by the environment of the scene 1 through windows, or to measure the ambient light of the environment of the scene 1. According to a variant, the device 20 is connected to the tablet via a wire, for example using USB (Universal Serial Bus). According to another variant, the device 20 is embedded into the tablet and corresponds for example to the light sensor of the tablet 11 (used to determine the ambient lighting and to automatically adjust the brightness of the screen of the tablet) or to the front and/or rear camera of the tablet 11.

The first information transmitted by the light source(s) 101, 102 and/or by the device 20 (and received by the tablet) comprises for example one or more lighting parameters that belong to a group of parameters comprising:

- parameter representative of light intensity or luminous intensity, e.g. in candela;

- parameter representative of the color of the light, e.g. wavelength, band of wavelengths or color temperature;

- parameter representative of the type of the light source, e.g. direct (or point-like), diffuse or ambient;

- parameter representative of the direction of the light beams, e.g. a main direction of a main light beam and the size of its solid angle, or a geometrical light distribution function defined for spherical or half-spherical directions;

- parameter representative of the brightness, e.g. in lumen;

- parameter representative of the color rendering index (CRI), for example represented by a number on a scale from 0 to 100, with 0 being "poor" and 100 being "excellent"; the lower the number, the more distorted a color will look under the light source;

- parameter representative of the correlated color temperature; and

- parameters representative of the 3D shape of each light source, if not modelled as a point light source: assuming a representation with planar patches, each patch may be described by a polygon as a set of vertices.

The first information corresponds to photometric information, optionally with shape information.

In addition to the first information, each light source 101, 102 and/or the device 20 may transmit information on its location in the scene, for example its coordinates (x, y and z) in the coordinate frame of the scene 1 and/or its orientation.
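
By way of illustration, the first information and the optional location information might be gathered in a message structure such as the following Python sketch; the field names and units are hypothetical, as the disclosure does not prescribe any particular message format.

from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FirstInformation:
    """Hypothetical container for the first information a connected
    light source or sensor might transmit; all names are illustrative."""
    luminous_intensity_cd: Optional[float] = None          # candela
    color: Optional[str] = None                            # wavelength band or color temperature
    source_type: str = "direct"                            # direct (point-like), diffuse or ambient
    main_direction: Optional[Tuple[float, float, float]] = None
    solid_angle_sr: Optional[float] = None
    brightness_lm: Optional[float] = None                  # lumen
    cri: Optional[int] = None                              # 0 ("poor") to 100 ("excellent")
    correlated_color_temperature_k: Optional[float] = None
    shape_vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    position: Optional[Tuple[float, float, float]] = None  # (x, y, z) in the scene frame
    orientation: Optional[Tuple[float, float, float]] = None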

Figure 5 shows a diagram of a connected device, specifically a connected light source 101, according to a particular and non-limiting embodiment of the present principles.

The connected light source 101 is for example an LED light source. The LED light source 101 comprises a housing 51, an LED driving power circuit 52, an RF (Radio-Frequency) circuit 53 (e.g. a ZigBee module, a WiFi® module, a Bluetooth module or a Z-Wave module), an LED lamp panel 54 and a lamp shade 55. The RF circuit 53 is adapted to transmit and/or receive data. The RF circuit 53 is for example configured to transmit the first information on the lighting characteristics of the light source 101 and/or information representative of the location of the light source 101 within the scene and/or information regarding the type of the light source 101. The RF circuit 53 is for example also configured to receive control parameters to control the operation of the light source 101, for example to control the intensity of the lighting and/or the duration of the lighting and/or the color of the lighting.

Figure 3 shows the obtaining of lighting information, called first information, by a display device corresponding to the tablet 11 of the scene 1, according to a particular and non-limiting embodiment of the present principles.

The example of figure 3 builds on the elements of the example of figure 2 by inserting a remote device 30 between the light sources 101, 102 and the device 20 on the one hand and the display device 11 on the other hand. According to the example of figure 3, the display device 11 receives the first information from the remote device 30, which received light information from the different light sources 101 and 102 and from the device 20. The remote device 30 may for example be a set-top box, a gateway, a computer, a server, a storage unit or any apparatus adapted to receive the first information from the light sources 101, 102 and the device 20 and to transmit said first information to the display device 11. According to a variant, the remote device 30 processes the received first information to determine second information representative of the modelling of the lighting around the display device. According to this variant, the remote device may transmit the second information to the display device 11. The remote device 30 may receive the first information wirelessly from the light sources 101, 102 and/or the device 20, or via a wired connection (e.g. via USB or Ethernet). The remote device 30 may transmit the first information and/or the second information wirelessly (e.g. using any wireless transmission protocol such as WiFi®, ZigBee, Z-Wave or Bluetooth) or via a wired connection (e.g. via USB or Ethernet) to the display device 11.

Figure 4 shows a process to determine a model of at least a part of the lighting environment of the scene of figure 1, according to a particular and non-limiting embodiment of the present principles. The process of figure 4 will be described with the part of the scene 1 that comprises the display device 11, to determine how this area of the scene 1 reflects light received from the different light sources of the environment of the scene 1. Said part of the scene 1 comprises the surface formed by the screen of the display device, optionally with an area surrounding the display device, for example an area having a determined width around the display device, e.g. 20 cm, 50 cm or 1 m. As an example, the Phong reflection model, which is a local illumination model, is used to determine the spatial model of light associated with said part of the scene 1.

For each light source 101 to 103 and for the lighting information retrieved from the light sensor 20 of the scene 1, components i_s and i_d are defined as the intensities (e.g. as RGB values) of the specular and diffuse components of each light source. A single term i_a may control the ambient lighting, which may for example be computed as a sum of contributions from all light sources that create neither shadows nor specular effects. In the example of figure 4, only one light source 101 is considered for clarity of illustration. It is naturally understood that the same process may be applied to all light sources 101 to 103 or to a part of them, for example to decrease the computation costs. The following parameters are defined, the parameters depending on the characteristics of the surface of said area of the scene 1, for example on the material of the surface:

k_s, which is a specular reflection constant defining the relative amount of specular reflection caused by the incoming light,

k_d, which is a diffuse reflection constant defining the relative amount of diffuse reflection caused by the incoming light (Lambertian reflectance),

k_a, which is an ambient reflection constant, the amount of reflection of the ambient lighting, and

α, which is a shininess constant for this material, which is larger for surfaces that are smoother and more mirror-like. When this constant is large, the specular highlight has a small spatial footprint.

Further terms used in the equations are defined as follows:

lights, which is the set of all light sources,

L_m, which is the direction vector from the point x 41 on the surface 11 toward the light source (m identifying the light source, which may be the light source 101),

N, which is the surface normal at point x 41 on the surface,

R_m, which is the direction that a perfectly reflected ray of light would take from this point x 41 on the surface, and

V, which is the direction pointing towards the viewer 42 (such as a virtual camera or a viewer sitting in the chair 1001 or 1002, for example).

Then the Phong reflection model provides an equation for computing the illumination I_r(x) of each surface point x, for example the surface point x 41:

$$I_r(x) = k_a \, i_a + \sum_{m \in \mathrm{lights}} \left( k_d \, (\hat{L}_m \cdot \hat{N}) \, i_{m,d} + k_s \, (\hat{R}_m \cdot \hat{V})^{\alpha} \, i_{m,s} \right)$$

where the direction vector $\hat{R}_m$ is calculated as the reflection of $\hat{L}_m$ on the surface characterized by the surface normal N using

$$\hat{R}_m = 2 \, (\hat{L}_m \cdot \hat{N}) \, \hat{N} - \hat{L}_m$$

the hats denoting normalized vectors. The diffuse term is not affected by the viewer direction V. The specular term is large only when the viewer direction V is aligned with the reflection direction R_m. Their alignment is measured by the α power of the cosine of the angle between them; the cosine of the angle between the normalized vectors R_m and V is equal to their dot product. When α is large, in the case of a nearly mirror-like reflection, the specular highlight will be small, because any viewpoint not aligned with the reflection will have a cosine less than one that rapidly approaches zero when raised to a high power. Although the above formulation is the common way of presenting the Phong reflection model, each term should only be included if its dot product is positive. (Additionally, the specular term should only be included if the dot product of the diffuse term is positive.) When the color is represented as RGB values, this equation is typically evaluated separately for the R, G and B intensities, allowing different reflection constants k_a, k_d and k_s for the plurality of color channels.
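
As an illustration, the per-point Phong computation described above might be sketched as follows in Python with NumPy; the layout of the light-source records (a 'pos' position and 'i_d'/'i_s' intensities) and all parameter names are assumptions made for this sketch.

import numpy as np

def phong_illumination(x, normal, viewer_pos, lights, k_a, k_d, k_s, alpha, i_a):
    """Illumination I_r(x) at a surface point x under the Phong model."""
    N = normal / np.linalg.norm(normal)
    V = viewer_pos - x
    V = V / np.linalg.norm(V)
    I_r = k_a * i_a                          # ambient contribution
    for light in lights:
        L = light['pos'] - x                 # direction toward the light source
        L = L / np.linalg.norm(L)
        R = 2.0 * np.dot(L, N) * N - L       # mirror reflection of L about N
        diffuse = np.dot(L, N)
        if diffuse > 0.0:                    # include terms only when their dot product is positive
            I_r = I_r + k_d * diffuse * light['i_d']
            specular = np.dot(R, V)
            if specular > 0.0:
                I_r = I_r + k_s * specular ** alpha * light['i_s']
    return I_r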

When considering a screen surface S of the screen of the display device, in addition to reflecting light I_r(x) at any point x, the screen emits light I_e(x). For each point x belonging to S, the observer receives:

$$I_i(x) = I_r(x) + I_e(x)$$

The display is a connected object and the intrinsic photometric parameters of its screen are supposed to be known: in the above Phong model, these are the shininess α and the reflectance constants k_a, k_d and k_s. In addition, the interconnected network has information about the locations and orientations of the different objects (emitter, receiver and observer devices) in a world coordinate system. So, the directions N, L_m, R_m and V, and hence I_r(x), at each point x of the screen S may be computed. The connected system can continuously update these parameters as the scene changes. In this case, updated first information is transmitted and updated second information is determined and transmitted. Therefore, the light reflected by the screen can change over time (we note it I_r(x, t), the time being noted t).

If we consider that a video I_e(x, t) is emitted by the screen display and that the light I_r(x, t) reflected by the screen is changing over time, the video can be modified so that the observer receives the intensity I_0(x, t) corresponding to the video:

$$I_0(x, t) = I_i(x, t) - I_r(x, t)$$
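
A minimal sketch of this temporal compensation, assuming per-pixel estimates of the reflected light I_r(x, t) are available as an array (names and array layout are illustrative), might read:

import numpy as np

def compensate_video_frame(frame, reflected, peak=1.0):
    """Emit I_0 = I_i - I_r so that, once the screen reflection adds
    back, the observer perceives the intended frame. `frame` and
    `reflected` are HxWx3 arrays in linear light; the clipping hedges
    the fact that a display can neither emit negative light nor
    exceed its peak luminance."""
    return np.clip(frame - reflected, 0.0, peak)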

Naturally, other methods may be used to determine the incident and/or reflected illumination at points of said surface, the different values of illumination for the different points of the surface forming the spatial modelling of the lighting of the surface encompassing the display device 11, optionally with an area surrounding the display device 11. For example, well-known global illumination modelling methods may be used, such as ray tracing, which makes it possible to take into account the propagation of light from the light sources of the scene, including reflections and multiple reflections on the surfaces of the object(s) of the scene 1. A global illumination modelling method may also be combined with a local illumination model. For example, light received and emitted by the surfaces of some of the objects might be modelled using ray tracing, while for the areas of the scene 1 comprising display devices, a local modelling method such as the Phong reflection model may be used to model the illumination of the screen surfaces of the display devices of the scene 1.

Figure 6 shows a method of processing an image displayed on a display device of the scene 1, for example on the display device 11, according to a particular and non-limiting embodiment of the present principles.

In a first operation 61, first information representative of the lighting of a scene 1, or of a part of the scene 1, is obtained, i.e. received and/or determined, from a plurality of devices of the scene. The plurality of devices comprises for example one or more light sources and/or one or more display devices and/or one or more light sensor devices. The first information may for example be received from each light source of at least a part of the light sources of the scene 1 and/or determined from one or more light sensors (e.g. a camera comprising a photosensor and color filter array and/or one or more light sensors of the display device 11). The first information may comprise one parameter from the following list, or any combination of two or more parameters of the following list:

- parameter representative of light intensity or luminous intensity, e.g. in candela;

- parameter representative of the color of the light, e.g. wavelength, band of wavelengths or color temperature;

- parameter representative of the type of the light source, e.g. direct (or point-like), diffuse or ambient;

- parameter representative of the direction of the light beams, e.g. a main direction of a main light beam and the size of its solid angle, or a geometrical light distribution function defined for spherical or half-spherical directions;

- parameter representative of the brightness, e.g. in lumen;

- parameter representative of the color rendering index (CRI), for example represented by a number on a scale from 0 to 100, with 0 being "poor" and 100 being "excellent"; the lower the number, the more distorted a color will look under the light source;

- parameter representative of the correlated color temperature; and

- parameters representative of the 3D shape of each light source, if not modelled as a point light source: assuming a representation with planar patches, each patch may be described by a polygon as a set of vertices.

The first information is for example received wirelessly or via a wired connection.

In a second operation 62, second information representative of a spatial model of the lighting of at least an area of the scene 1 encompassing the display device 11 is determined. The second information is determined according to the first information obtained in the first operation 61 and according to pose information of the display device 11 with regard to the other devices of the scene, the other devices providing for example the first information on the lighting of the environment. The pose information makes it possible, for example, to determine the incidence angle of the light emitted by the light source(s) and reaching at least a part of the display device 11.

The screen can be simply described by the 3D location of its four corners. The area of the screen lit directly by light sources can be identified.

To deal with specular reflections and other light-material effects, the 3D surface of the screen can advantageously be modeled. If planar, the model can correspond to the 3D location of the four corners. If not planar, a more complex model can be used to describe the curvature of the screen; a possible representation of the screen is a 3D mesh. Additionally, a microstructure can be modelled based on parameters received in the first information; the microstructure may model effects such as surface roughness, surface pigments, partial transparency, fluorescence and polarization.
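
As an illustration of the simple four-corner screen model, the following Python sketch samples the screen surface into a grid of 3D points and performs a front-side test with respect to a light source; occlusion testing, which would be needed to find actual cast shadows, is deliberately omitted, and the function names are assumptions.

import numpy as np

def screen_points(corners, rows, cols):
    """Sample a planar screen, described by its four 3D corners
    (top-left, top-right, bottom-left, bottom-right), into a
    rows x cols grid of surface points by bilinear interpolation."""
    tl, tr, bl, br = (np.asarray(c, dtype=float) for c in corners)
    u = np.linspace(0.0, 1.0, cols)[None, :, None]   # across the width
    v = np.linspace(0.0, 1.0, rows)[:, None, None]   # down the height
    top = tl + u * (tr - tl)
    bottom = bl + u * (br - bl)
    return top + v * (bottom - top)                  # shape (rows, cols, 3)

def is_directly_lit(point, normal, light_pos):
    """A screen point is directly lit when the light source lies on
    the front side of the screen (positive incidence)."""
    L = light_pos - point
    return float(np.dot(L / np.linalg.norm(L), normal)) > 0.0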

According to an optional variant, the second information is determined by considering the distance between the display device 11 and the light sources providing the first information. The distance information may for example be used to determine the attenuation of the light along the path between the considered light source and the display device, or the dispersion of the light between the considered light source and the display device.
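
For instance, under a hypothetical constant/linear/quadratic falloff model (a common rendering convention, not mandated by the disclosure), the attenuation might be computed as:

def distance_attenuation(distance, k_c=1.0, k_l=0.0, k_q=1.0):
    """Classic attenuation factor; the coefficients are illustrative
    (a pure inverse-square law for a point source corresponds to
    k_c = k_l = 0 and k_q = 1)."""
    return 1.0 / (k_c + k_l * distance + k_q * distance * distance)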

According to a further optional variant, the determining of the second information is further according to third information representative of at least a type associated with each device, the at least a type belonging to a group of types comprising light emitter, light receiver and light sensor device. A device of the scene may have two or three types associated with it; for example, a display device may be both a light receiver and a light emitter, as it receives light from emitting devices and creates light when displaying one or more images. When the display device embeds a webcam and/or a light sensor, the display device may further be of the type 'light sensor device'. The parameters to be considered when determining the second information may thus depend on the type(s) associated with each device. For example, light receiver objects, such as furniture or consumer electronic devices, may share information about their position, orientation, size, color, texture, transparency and surface reflection properties to determine at least a part of the spatial model. The type information associated with a device may be assigned at the manufacturing stage of the device (and for example stored in a memory of the device) or may be assigned later, for example when building the network of interconnected devices of the scene 1. The third information may determine the order of processing of the first information when determining the second information. For example, first information from objects of type "light emitter" is gathered in order to establish a first list of objects emitting light; the first information from these objects is then processed in order to obtain the second information. The third information may also determine the priority of processing of the first information. For example, to save time and processing capacity, only first information from objects of type "light emitter" is gathered and processed, together with first information from a display device 11 (optionally with an area surrounding the display device 11), in order to determine the spatial modelling of the lighting of the surface encompassing this display, optionally with a surrounding area.

According to a further optional variant, the second information is determined by further considering the pose of the display device 11 with respect to a determined point of view, for example the point of view of a viewer watching the image(s) displayed on the display device 11. The pose may be used to determine the viewing direction, which may in turn be used to determine the illumination at points of the surface of the screen of the display device for this viewer specifically, as in the example of figure 4. By considering a point of view, the spatial model of the lighting may be restricted and streamlined to model only light going toward this point of view. Another possibility is to model light going toward the point of view with higher accuracy than light going in other directions.

In a third operation 63, the image to be displayed on the display device 11 is processed according to the second information determined at operation 62. The processing may be done, for example, to remove, compensate or reduce unwanted visual effects of the scene lighting on the screen of the display device 11. For example, for a given point of view, highlights on the screen caused by specular reflections indicated by the second information are compensated by increasing the image luminance everywhere but in the region of the highlights. In another example, the unwanted visual effect of a cast shadow on the screen indicated by the second information is compensated by increasing the contrast of the image everywhere but in the region corresponding to the cast shadow; a sketch of such region-wise processing is given after the list below. The processing may comprise modifying at least a parameter of spatial areas of the image(s) to be displayed according to the location of these areas in the image. For example, the processing applied to a part of the image may differ from the processing applied to one or more other part(s) of the image. The processing may for example correspond to a color balance modifying the at least a parameter. The at least a parameter that may be modified belongs to a group of parameters comprising:

- luminance;

- chrominance;

- saturation;

- contrast; and

- hue.
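
A minimal sketch of the highlight compensation mentioned above, assuming the second information has been reduced to a boolean mask of the highlight region on the image (an assumption of this sketch, as are the names and the gain value):

import numpy as np

def compensate_highlight(image, highlight_mask, gain=1.2):
    """Region-wise processing: raise the luminance everywhere except
    inside the specular-highlight region indicated by the second
    information. `highlight_mask` is a boolean HxW array."""
    out = image.astype(float).copy()
    out[~highlight_mask] *= gain     # brighten the non-highlight areas
    return np.clip(out, 0.0, 1.0)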

To deal with specular reflections, the other required information is the 3D location of the observer. This information, combined with the relative position and orientation of the screen and the relative position and orientation of the light sources, makes it possible to identify the spatial location of highlights caused by specular reflections of the light sources on the screen as seen from the observer's point of view. For example, the location on the screen of the projection of the vertices of each light source model is computed if visible on the screen from the observer's point of view.

The operations 61 to 63 may be reiterated for each image of a sequence of images, as lighting conditions may vary over time.

Figure 7 diagrammatically shows a hardware embodiment of an apparatus 7 configured to process and/or transmit an image (e.g. to a display device). The apparatus 7 is also configured for the creation of display signals of one or several images. The apparatus 7 corresponds for example to a tablet, a smartphone, a games console, a computer, a laptop or a set-top box. The apparatus may for example correspond to the tablet 11, may be embedded in a television set 10, or may be comprised in the remote device 30.

The apparatus 7 comprises the following elements, connected to each other by a bus 75 of addresses and data that also transports a clock signal:

- a microprocessor 71 (or CPU),

- a graphics card 72 comprising:

• several Graphical Processor Units (or GPUs) 720,

• a Graphical Random Access Memory (GRAM) 721,

- a non-volatile memory of ROM (Read Only Memory) type 76,

- a Random Access Memory or RAM 77,

- a transmitter 78 configured to transmit data representative of the image, for example to a display device,

- a receiver 79 configured to receive the first information, for example from the light sources and/or the light sensor(s), and/or any other data from remote devices;

- one or several I/O (Input/Output) devices 74 such as, for example, a tactile interface, a mouse, a webcam, etc., and

- a power source 79.

The apparatus 7 may also comprise one or more display devices 73 of display screen type directly connected to the graphics card 72 to display images calculated in the graphics card, for example live. The use of a dedicated bus to connect the display device 73 to the graphics card 72 offers the advantage of much greater data transmission bitrates, thus reducing the latency time for the displaying of images composed by the graphics card. According to a variant, a display device is external to the apparatus 7 and is connected to the apparatus 7 by a cable or wirelessly for transmitting the display signals. The apparatus 7, for example the graphics card 72, comprises an interface for transmission or connection (not shown in figure 7) adapted to transmit a display signal to an external display means such as, for example, an HMD (Head-Mounted Display), an LCD or plasma screen or a video projector.

It is noted that the word "register" used in the description of memories 721, 76 and 77 designates, in each of the memories mentioned, both a memory zone of low capacity (some binary data) and a memory zone of large capacity (enabling a whole program to be stored, or all or part of the data representative of data calculated or to be displayed).

When switched on, the microprocessor 71 loads and executes the instructions of the program contained in the RAM 77.

The random-access memory 77 notably comprises:

- in a register 770, the operating program of the microprocessor 71 responsible for switching on the apparatus 7,

- data 771 representative of the image(s) to be processed and displayed,

- pose information 772,

- first information 773.

The algorithms implementing the steps of the method(s) specific to the present disclosure (e.g. the method of processing an image) are stored in the memory GRAM 721 of the graphics card 72 associated with the apparatus 7 implementing these steps. When switched on and once the data 771 and the information 772 are loaded into the RAM 77, the graphics processors 720 of the graphics card 72 load these parameters into the GRAM 721 and execute the instructions of these algorithms in the form of microprograms of "shader" type, using for example the HLSL (High Level Shading Language) or GLSL (OpenGL Shading Language) languages.

The random-access memory GRAM 721 notably comprises:

- in a register, data representative of the images;

- in a register, data representative of pose information;

- in a register, data representative of the first information.

According to another variant, a part of the RAM 77 is assigned by the CPU 71 for storage of the identifiers and the distances if the memory storage space available in the GRAM 721 is insufficient. This variant however causes greater latency time in the composition of an image comprising a representation of the environment composed from microprograms contained in the GPUs, as the data must be transmitted from the graphics card to the random-access memory 77 through the bus 75, whose transmission capacities are generally inferior to those available in the graphics card for transmitting data from the GPUs to the GRAM and vice versa.

According to another variant, the power supply 79 is external to the apparatus 7. In an alternate embodiment, the apparatus 7 does not include any ROM but only RAM, the algorithms implementing the steps of the method specific to the present disclosure and described with regard to figures 4 and 6 being stored in the RAM. According to another variant, the apparatus 7 comprises an SSD (Solid-State Drive) memory instead of the ROM and/or the RAM. According to a further variant, the apparatus 7 does not comprise any GPU but only one or more CPUs.

Naturally, the present disclosure is not limited to the embodiments previously described.

In particular, the present disclosure is not limited to a method of processing an image but also extends to a method for displaying the processed image. The present disclosure also extends to a method and device for modelling the lighting of a scene or of a part of the scene.

The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, Smartphones, tablets, computers, mobile phones, portable/personal digital assistants ("PDAs"), and other devices that facilitate communication of information between end-users.

Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding, data decoding, view generation, texture processing, and other processing of images and related texture information and/or depth information. Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.

Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette ("CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory ("RAM"), or a read-only memory ("ROM"). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.

As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax-values written by a described embodiment. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.