Title:
METHOD AND DEVICE FOR INVERSE TONE MAPPING
Document Type and Number:
WIPO Patent Application WO/2019/097062
Kind Code:
A1
Abstract:
A method for processing a current view extracted from a current immersive image following at least one previous view, comprising tone mapping the current view into a processed current view at a level which is a non-decreasing function of a luminance difference between said current view and said at least one previous view and weighted by a distance between said current view and said at least one previous view. Thanks to this processing of consecutive views, blinding and eye strain of the viewer's eyes are avoided.

Inventors:
POULI TANIA (FR)
KERVEC JONATHAN (FR)
MORVAN PATRICK (FR)
GUERMOUD HASSANE (FR)
Application Number:
PCT/EP2018/081810
Publication Date:
May 23, 2019
Filing Date:
November 19, 2018
Assignee:
INTERDIGITAL CE PATENT HOLDINGS (FR)
International Classes:
G02B27/01; G06T5/00; G06F3/01
Domestic Patent References:
WO2015096955A12015-07-02
Foreign References:
US20170161881A12017-06-08
EP3139345A12017-03-08
Other References:
MAI ZICONG ET AL: "Visually Favorable Tone-Mapping With High Compression Performance in Bit-Depth Scalable Video Coding", IEEE TRANSACTIONS ON MULTIMEDIA, IEEE SERVICE CENTER, PISCATAWAY, NJ, US, vol. 15, no. 7, 1 November 2013 (2013-11-01), pages 1503 - 1518, XP011529386, ISSN: 1520-9210, [retrieved on 20131011], DOI: 10.1109/TMM.2013.2266633
G. EILERTSEN ET AL: "A comparative review of tone-mapping algorithms for high dynamic range video", COMPUTER GRAPHICS FORUM, vol. 36, no. 2, 1 May 2017 (2017-05-01), GB, pages 565 - 592, XP055442404, ISSN: 0167-7055, DOI: 10.1111/cgf.13148
Attorney, Agent or Firm:
HUCHET, Anne et al. (FR)
Claims:
CLAIMS

1) A method for processing a current view (v_t) extracted according to a current field of view from a current immersive image (I_t) following at least one previous view (v_{t-1}) extracted from the same current immersive image (I_t) or from a previous immersive image (I_{t-1}), wherein said current view is extracted within an immersive image (I_t) or (I_{t-1}) larger than the current view, the method comprising:

- tone mapping the current view (v_t) into a processed current view (v'_t) using a compression of luminance according to a non-decreasing function of a luminance difference (δL_t) between said current view (v_t) and said at least one previous processed view (v'_{t-1}), weighted by a distance between said current view and said at least one previous processed view (v'_{t-1}).

2) The method according to claim 1, wherein said compression of luminance is further weighted according to a coefficient (C_mcm) representative of a maximal compression value during movements.

3) The method for processing according to claim 1 or 2, wherein:

- if said luminance difference (δL_t) is positive, said tone mapping is performed such that said processed current view (v'_t) has a luminance evaluation inferior to the luminance evaluation of said current view (v_t);

- if said luminance difference (δL_t) is not positive, no tone mapping is performed.

4) The method for processing according to any of claims 1 to 3, wherein each of said luminance evaluations is computed for its view as an average of luminance values of colors of pixels of this view.

5) The method for processing according to claim 4, wherein said luminance values used for computing said average do not include a given percentile of pixels of said view having the highest luminance values.

6) The method for processing according to any one of claims 1 to 5, wherein said tone mapping is performed by applying to luminance values (Y_t) of pixels of this current view v_t a tone mapping function f(Y_t, c_t) parametrized with a tone mapping parameter (c_t), which is a non-decreasing function of said luminance value (Y_t) and of said tone mapping parameter (c_t);

wherein said tone mapping parameter (c_t) is a non-decreasing function of said luminance difference (δL_t).

7) The method for processing according to claim 6 when depending on claim 4 or 5, further comprising computing (S5) said luminance difference (δL_t) from a difference between said luminance evaluation (L_t) of the current view v_t and said luminance evaluation(s) (L'_{t-1}) of a processed version(s) (v'_{t-1}) of the at least one previous view (v_{t-1}), wherein each processed version (v'_{t-1}) of a previous view (v_{t-1}) is obtained by applying said tone mapping function f(Y_{t-1}, c_{t-1}) to luminance values (Y_{t-1}) of pixels of this previous view (v_{t-1}).

8) The method for processing according to claim 6 or 7, wherein the tone mapping parameter (c_t) is defined as a weighted sum of a non-decreasing function of the luminance difference (δL_t) and of a luminance stabilization coefficient (w_stab) representative of the stabilization of luminance evaluations over a plurality of consecutive previous views.

9) The method for processing according to any one of claims 6 to 8, wherein the tone mapping function f(Y_t, c_t) is defined such that Y'_t = Y_t^(c_t).

10) An image processing unit comprising at least one processor configured to process a current view (v_t) extracted according to a current field of view from a current immersive image (I_t) following at least one previous view (v_{t-1}) extracted from the same current immersive image (I_t) or from a previous immersive image (I_{t-1}), wherein said current view is extracted within an immersive image (I_t) or (I_{t-1}) larger than the current view, wherein the current view (v_t) is tone mapped (S7) into a processed current view (v'_t) using a compression or expansion of luminance according to a non-decreasing function of a luminance difference (δL_t) between said current view (v_t) and said at least one previous processed view (v'_{t-1}), weighted by a distance between said current view and said at least one previous processed view (v'_{t-1}).

11) The image processing unit of claim 10, wherein said compression of luminance is further weighted according to a coefficient (C_mcm) representative of a maximal compression value during movements.

12) The image processing unit of claim 10 or 11, wherein:

- if said luminance difference (δL_t) is positive, said tone mapping is performed such that said processed current view (v'_t) has a luminance evaluation inferior to the luminance evaluation of said current view (v_t);

- if said luminance difference (δL_t) is not positive, no tone mapping is performed.

13) The image processing unit according to any of claims 10 to 12, wherein each of said luminance evaluations is computed for its view as an average of luminance values of colors of pixels of this view.

14) A head-mounted device comprising the image processing unit according to any of claims 10 to 13, a virtual image provider configured to deliver a sequence of immersive images to said image processing unit, and a display subsystem configured to display the processed current view (v'_t) as processed by said image processing unit.

15) A non-transitory storage medium carrying instructions of program code for executing the image processing method of any one of claims 1 to 9, when said program is executed on a computing device.

Description:
METHOD AND DEVICE FOR INVERSE TONE MAPPING

1. Technical Field

The present invention relates generally to the field of tone mapping adaptation.

2. Background Art

Over the past years, High Dynamic Range (or HDR) displays have become more and more popular. They offer a new user experience as they can show images and videos with high brightness (up to 4,000 nits) compared to Standard Dynamic Range (or SDR) displays (150-400 nits). HDR devices are able to display videos with more detail in black levels and with a higher global contrast. High-brightness head-mounted displays (or HMDs) are now also used to display video content with large temporal and/or spatial brightness variations.

Today, in the context of virtual reality (or VR), existing head-mounted display devices reach up to 200 cd/m², offering a peak luminance more than 4 times larger than in a usual cinema; given the lack of ambient lighting, this may cause visual discomfort over prolonged viewing.

In the context of immersive display, a viewer wearing an HMD is positioned within an immersive scene at the center of a 360° environment, within which he can freely rotate and move to look at a selected portion of this immersive scene, i.e. a specific view of this immersive scene. It has been commonly noticed that HMD viewers can be blinded by views presenting high peaks of brightness. More specifically, transitions from a dark view to a bright view generate flashes that cause eye strain for the viewers. Such transitions may happen when moving from a dark view of an immersive image to a bright view of the same or of a subsequent immersive image, due to a change of immersive image or to a change of selected view generated by a movement of the viewer's head. Eye discomfort increases with the frequency of alternation between dark and bright views.

Although for at least some HMDs the displayed brightness could be controlled by the viewer and set to a more comfortable level, this is not an ideal solution, since the "wow effect" that could be achieved with the higher brightness of such HMDs with High Dynamic Range capabilities is lost.

3. Summary of Invention

In order to avoid blinding the viewer without losing the "wow effect", it is proposed to tone map each current view at a level which is a non-decreasing function of a luminance difference between this current view and a previous one. Thanks to this tone mapping, large, abrupt changes in brightness are avoided or reduced in strength, while peak brightness is preserved, at least after these abrupt changes.

According to the method below, the luminance of each current view is tone mapped as the viewer's head moves, depending on the luminance of the previous view(s) and optionally on the movement of the viewer within the VR scene. To achieve this, the method proposes first determining how luminance changes between consecutive views, and then computing a luminance compression (or, optionally, expansion) factor for processing the luminance distribution of the current view.

In a first aspect, the disclosure is directed to a method for processing a current view (v_t) extracted according to a current field of view from a current immersive image (I_t) following at least one previous view (v_{t-1}) extracted from the same current immersive image (I_t) or from a previous immersive image (I_{t-1}), wherein said current view is extracted within an immersive image larger than the current view, the method comprising tone mapping (S7) the current view (v_t) into a processed current view (v'_t) using a compression of luminance according to a non-decreasing function of a luminance difference (δL_t) between said current view (v_t) and said at least one previous processed view (v'_{t-1}), weighted by a distance between said current view and said at least one previous processed view (v'_{t-1}).

In a variant of the first aspect, said compression of luminance is further weighted according to a coefficient (C_mcm) representative of a maximal compression value during movements. In a variant of the first aspect, if said luminance difference (δL_t) is positive, said tone mapping is performed such that said processed current view (v'_t) has a luminance evaluation inferior to the luminance evaluation of said current view (v_t), and if said luminance difference (δL_t) is not positive, no tone mapping is performed. In a variant of the first aspect, each of said luminance evaluations is computed for its view as an average of luminance values of colors of pixels of this view. In a variant of the former variant, said luminance values used for computing said average do not include a given percentile of pixels of said view having the highest luminance values. In a variant of the first aspect, said tone mapping (S7) is performed by applying to luminance values (Y_t) of pixels of this current view v_t a tone mapping function f(Y_t, c_t) parametrized with a tone mapping parameter (c_t), which is a non-decreasing function of said luminance value (Y_t) and of said tone mapping parameter (c_t), wherein said tone mapping parameter (c_t) is a non-decreasing function of said luminance difference (δL_t). In a variant of the first aspect, said luminance difference (δL_t) is computed from a difference between said luminance evaluation (L_t) of the current view v_t and said luminance evaluation(s) (L'_{t-1}) of a processed version(s) (v'_{t-1}) of the at least one previous view (v_{t-1}), wherein each processed version (v'_{t-1}) of a previous view (v_{t-1}) is obtained by applying said tone mapping function to luminance values (Y_{t-1}) of pixels of this previous view (v_{t-1}). In a variant of the first aspect, the tone mapping parameter (c_t) is defined as a weighted sum of a non-decreasing function of the luminance difference (δL_t) and of a luminance stabilization coefficient (w_stab) representative of the stabilization of luminance evaluations over a plurality of consecutive previous views. In a variant of the first aspect, the tone mapping function f(Y_t, c_t) is defined such that Y'_t = Y_t^(c_t).

In a second aspect, the disclosure is directed to an image processing unit comprising at least one processor configured to process a current view (v_t) extracted according to a current field of view from a current immersive image (I_t) following at least one previous view (v_{t-1}) extracted from the same current immersive image (I_t) or from a previous immersive image (I_{t-1}), wherein said current view is extracted within an immersive image (I_t) or (I_{t-1}) larger than the current view, the processing comprising tone mapping (S7) the current view (v_t) into a processed current view (v'_t) using a compression of luminance according to a non-decreasing function of a luminance difference (δL_t) between said current view (v_t) and said at least one previous processed view (v'_{t-1}), weighted by a distance between said current view and said at least one previous processed view (v'_{t-1}).

In a variant of the second aspect, said compression of luminance is further weighted according to a coefficient (C_mcm) representative of a maximal compression value during movements. In a variant of the second aspect, if said luminance difference (δL_t) is positive, said tone mapping is performed such that said processed current view (v'_t) has a luminance evaluation inferior to the luminance evaluation of said current view (v_t), and if said luminance difference (δL_t) is not positive, no tone mapping is performed. In a variant of the second aspect, each of said luminance evaluations is computed for its view as an average of luminance values of colors of pixels of this view. In a variant of the former variant, said luminance values used for computing said average do not include a given percentile of pixels of said view having the highest luminance values. In a variant of the second aspect, said tone mapping (S7) is performed by applying to luminance values (Y_t) of pixels of this current view v_t a tone mapping function f(Y_t, c_t) parametrized with a tone mapping parameter (c_t), which is a non-decreasing function of said luminance value (Y_t) and of said tone mapping parameter (c_t), wherein said tone mapping parameter (c_t) is a non-decreasing function of said luminance difference (δL_t). In a variant of the second aspect, said luminance difference (δL_t) is computed from a difference between said luminance evaluation (L_t) of the current view v_t and said luminance evaluation(s) (L'_{t-1}) of a processed version(s) (v'_{t-1}) of the at least one previous view (v_{t-1}), wherein each processed version (v'_{t-1}) of a previous view (v_{t-1}) is obtained by applying said tone mapping function f(Y_{t-1}, c_{t-1}) to luminance values (Y_{t-1}) of pixels of this previous view (v_{t-1}). In a variant of the second aspect, the tone mapping parameter (c_t) is defined as a weighted sum of a non-decreasing function of the luminance difference (δL_t) and of a luminance stabilization coefficient (w_stab) representative of the stabilization of luminance evaluations over a plurality of consecutive previous views. In a variant of the second aspect, the tone mapping function f(Y_t, c_t) is defined such that Y'_t = Y_t^(c_t).

In a third aspect, the disclosure is directed to a head-mounted device comprising the image processing unit according to the second aspect, a virtual image provider configured to deliver a sequence of immersive images to said image processing unit, and a display subsystem configured to display the processed current view (v'_t) as processed by said image processing unit.

In a fourth aspect, the disclosure is directed to a non-transitory storage medium carrying instructions of program code for executing the image processing method above, when said program is executed on a computing device.

In the context of this application, the terms "immersive image" and "omnidirectional image" are considered as synonyms.

4. Brief description of the drawings

The invention can be better understood with reference to the following description and drawings, given by way of example and not limiting the scope of protection, and in which:

Fig. 1 discloses a main embodiment of processing a current view v_t extracted from a current immersive image I_t following a previous view v_{t-1} into a processed view v'_t;

Fig. 2 discloses the orientation and position of a previous view v_{t-1}, a current view v_t and a following view v_{t+1} on the surface of a sphere of an immersive image, respectively at times t-1, t, and t+1, as parts extracted from this immersive image, with their respective luminance evaluation measures;

Fig. 3 illustrates a first variant of a tone mapping parameter function, where "luminance change" corresponds to the luminance difference δL_t;

Fig. 4 illustrates a second variant of a tone mapping parameter function, where "luminance change" corresponds to the luminance difference δL_t.

5. Description of embodiments

While example embodiments are capable of various modifications and alternative forms, embodiments thereof are described below in detail by way of examples. It should be understood, however, that there is no intent to limit example embodiments to the particular forms disclosed; on the contrary, example embodiments are to cover all modifications, equivalents, and alternatives falling within the scope of the claims.

Although the description and drawings describe the different steps of the method as sequential, many of the steps may be performed in parallel, concurrently or simultaneously. In addition, the order of operations may be re-arranged. The processes may be terminated when their operations, including iterations, are completed, but may also have additional steps not included in the description.

The different steps of the method discussed below, with their variants, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. A processor(s) may perform the necessary tasks. Specific structural and functional details disclosed herein are merely representative for purposes of describing example embodiments of the present invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.

A main embodiment of the invention will now be described in the context of displaying a sequence of immersive images with a head-mounted display device (or HMD) worn by a viewer who can change his orientation within the virtual 360° environment of each immersive image of the sequence such as to select a specific view of this immersive image. The head-mounted display device comprises:

A display subsystem to render views to be viewed by the viewer;

An image processing unit having at least one processor configured in a manner known per se to implement at least the image processing method embodied below;

An immersive image provider configured to deliver immersive images in sequence to the image processing unit; this immersive image provider can be for instance a VR video memory or a VR video receptor; this video receptor may receive, for instance over a WiFi connection, immersive images from an emitter included, for instance, in a gateway;

Orientation sensors configured in a manner known per se to determine the orientation of the viewer's head within a current immersive image (which is here a virtual 360° environment) and to deliver this orientation to the image processing unit. This orientation is generally represented by spherical coordinates (φ, λ), i.e. a latitude angle φ and a longitude angle λ, defining the orientation of the viewer's head within the immersive image.

Here, immersive images are spread over 360°. Immersive images spread over smaller angles may be considered too.

Each view v_t displayed by the display subsystem and viewed by the viewer at a time t is a portion of the full 360° current immersive image, centered at the viewpoint of the viewer's eyes on this current immersive image.

As shown on figure 2:

- The current view v_t is centered within a current immersive image I_t at spherical coordinates (φ_t, λ_t) and is displayed at time t;

- The previous view v_{t-1} is centered within an immersive image I_{t-1} at spherical coordinates (φ_{t-1}, λ_{t-1}) and is displayed at time t-1;

- The following view v_{t+1} is centered within an immersive image I_{t+1} at spherical coordinates (φ_{t+1}, λ_{t+1}) and is displayed at time t+1.

The immersive images I_{t-1} and I_{t+1} may be different from the current immersive image I_t or identical to this current immersive image I_t. The immersive image I_t could notably be identical to the previous immersive image I_{t-1} if, for example, the content does not change between time t-1 and time t (i.e. static content). The view v_t could notably be identical to the previous view v_{t-1} if, for example, the content does not change AND the viewer does not move his head between time t-1 and time t.

A main embodiment of the processing of a current view v_t extracted from a current immersive image I_t following a previous view v_{t-1} extracted from the same current immersive image I_t or from a previous immersive image I_{t-1} will now be described in reference to the flowchart of figure 1. Preferably, at least each time a view needs to be refreshed on the display subsystem, the below steps S1 to S8 or S2 to S8 are implemented. A view notably needs to be refreshed when a change of orientation of the viewer's head is detected by the orientation sensors and/or when a new immersive image is provided to the image processing unit.

In a first optional step S1 of the image processing method, a new current immersive image I_t is received by the image processing unit from the immersive image provider. If no new immersive image I_t is available in the timing sequence of immersive images, then the process skips directly to the second step S2.

In a second step S2 of the image processing method, the orientation of the viewer's head within the current immersive image I_t is received by the image processing unit from the orientation sensors. This orientation is represented by spherical coordinates (φ_t, λ_t). A current view v_t is extracted in a manner known per se from the current immersive image I_t based on the position of a viewpoint of the viewer on this current immersive image I_t. This position is generally determined by the intersection of the received orientation with the current immersive image I_t. For such an extraction, known projection formulas for projecting between the equirectangular format and the gnomonic format are generally used.
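By way of illustration only (this sketch is not part of the patent; the function name, parameters and sampling choices are ours), such an extraction can be sketched in Python using the standard inverse gnomonic projection:

```python
import numpy as np

def extract_view(equirect, phi0, lam0, fov=np.pi / 2, size=512):
    """Sample a size x size gnomonic (rectilinear) view centered at
    latitude phi0 / longitude lam0 from an equirectangular image."""
    H, W = equirect.shape[:2]
    half = np.tan(fov / 2)
    x, y = np.meshgrid(np.linspace(-half, half, size),
                       np.linspace(half, -half, size))
    rho = np.hypot(x, y)
    c = np.arctan(rho)                       # angular distance from view center
    cos_c, sin_c = np.cos(c), np.sin(c)
    with np.errstate(invalid="ignore"):
        phi = np.arcsin(cos_c * np.sin(phi0)
                        + np.where(rho > 0, y * sin_c * np.cos(phi0) / rho, 0.0))
    lam = lam0 + np.arctan2(x * sin_c,
                            rho * np.cos(phi0) * cos_c - y * np.sin(phi0) * sin_c)
    # Map sphere coordinates back to equirectangular pixel indices.
    u = ((lam / (2 * np.pi) + 0.5) % 1.0) * (W - 1)
    v = (0.5 - phi / np.pi) * (H - 1)
    return equirect[v.round().astype(int), u.round().astype(int)]
```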

When immersive images are spherical images, views are preferably extracted as gnomonic or rectilinear views. As known from the prior art, gnomonic or rectilinear refers to the projection used to extract a (smaller) view from a spherical immersive image, i.e. the projection from the spherical surface of the image to a plane tangent to this spherical surface.

In a third step S3, a luminance evaluation L_t of this extracted current view v_t is computed from colors of pixels of this extracted current view v_t.

This evaluation is notably based on the luminance Y_t(i) of the different pixels i of this current view v_t. The luminance Y_t(i) of a pixel i is computed for instance as a weighted combination of the color components representing the color of this pixel i in the color space of the HMD, such as to perform a color space conversion from this color space to, for instance, the Yuv color space. Luminance values of these pixels are preferably normalized in the interval [0, 1]. As a whole, this luminance evaluation L_t is then computed from colors of pixels of the current view v_t.

In a first variant, the luminance evaluation L_t of the extracted current view v_t is computed as the average luminance over all pixels of the current view v_t. This average luminance can be computed as:

L_t = (1/N) × Σ_{i=1..N} Y_t(i)     (1)

where N is the number of pixels in the view v_t.

In order to save computing resources, in a second variant, this luminance evaluation L_t is computed on a lower resolution version of the view v_t, such that not all pixels of the full resolution version are considered for this evaluation, but only a lower number of pixels homogeneously distributed over the whole view v_t.

The first and second variants above may not be considered as a good evaluation of how bright the brightest features of the view v_t are, since a view with very bright and very dark areas may lead to the same average luminance as a view with just midtones. It should be kept in mind that the bright parts in the first case would influence the comfort of the viewer even though the average luminance remains the same.

When dealing with HDR images/views, it often occurs that their luminance distribution exhibits a long tail, covering only a small number of pixels but ones that have very high values. If these pixels are considered in the calculation of the luminance evaluation of a view, they often bias the results. Therefore, in a third variant, the luminance values used for computing the average do not include a given percentile of pixels having the highest luminance values. Preferably, this given percentile is above zero but inferior or equal to 5%. This avoids the luminance evaluation of a view being overly biased by small highlights/specularities: if a small specularity is present, it may be preferred not to compress the whole view just to avoid having a very small bright part. If, on the contrary, there is a significant number of bright pixels in the image, i.e. because there are large bright areas in the view, high luminance values will still be captured in the average even after ignoring the given top-luminance percentile. In this third variant, only the nth percentile (n < 100) of the luminance distribution of the current view v_t is considered to compute the average luminance representing the luminance evaluation of this view. For example, if n = 95, the average luminance L_t would be set to a luminance value such that 95% of the image pixels have a lower luminance. This approach leads to a more accurate representation of what happens in the brighter part of the view, while being relatively robust against very small highlight areas in the view.
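By way of illustration, the following Python sketch combines the second and third variants above; the Rec.709 luma weights, the subsampling stride and the 95th percentile are illustrative assumptions, not values prescribed by the text:

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])  # standard Rec.709 luma weights

def luminance_evaluation(view_rgb, percentile=95, stride=4):
    """Luminance evaluation L_t of a view (H x W x 3 array, RGB in [0, 1])."""
    # Second variant: subsample homogeneously to save computing resources.
    Y = view_rgb[::stride, ::stride] @ REC709
    # Third variant: exclude the brightest (100 - percentile)% of pixels
    # so that small specular highlights do not bias the average.
    cutoff = np.percentile(Y, percentile)
    return float(Y[Y <= cutoff].mean())
```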

The computing of the luminance evaluation L_t of the extracted current view v_t is not limited to the above methods, and other methods of computing the luminance evaluation L_t could be used instead.

When luminance values of pixels are normalized in the interval [0, 1], luminance evaluations are also in the same range [0, 1].

Similarly, in a fourth step S4, a luminance evaluation L'_{t-1} of a processed version v'_{t-1} of the previous view v_{t-1} is computed from luminance values of pixels of this processed previous view v'_{t-1}. This computation of the luminance evaluation L'_{t-1} is performed using the same method as in step S3. The processed version v'_{t-1} of the previous view v_{t-1} is obtained, generally in steps S6 and S7 (see below) of a previous iteration of the method, by applying the tone mapping function of steps S6 and S7 to luminance values Y_{t-1} of pixels of the previous view v_{t-1}.

In a fifth step S5, a luminance difference δL_t is computed from a difference between the luminance evaluation L_t of the current view v_t and the luminance evaluation L'_{t-1} of the processed previous view v'_{t-1}. Preferably, this computing is performed according to:

δL_t = L_t - L'_{t-1}     (2)

When the luminance evaluations L_t and L'_{t-1} are normalized in the range [0, 1], the luminance change δL_t is then in the range [-1, 1].

In a sixth step S6, a tone mapping parameter c_t is computed as a function of the luminance difference δL_t computed at step S5, such as to parametrize a tone mapping function f(Y_t, c_t) to be applied (see step S7 below) to the luminance Y_t of each pixel i of the current view v_t to get a tone mapped luminance Y'_t of the corresponding pixel, such as to build a tone mapped current view v'_t. In this embodiment, the expression "tone mapping" may concern a luminance compression or a luminance expansion; it means that "tone mapping" includes "inverse tone mapping".

Preferably, the tone mapping parameter c_t is defined as a non-decreasing function of the luminance difference δL_t. Preferably, this non-decreasing function is a monotonically increasing function, having then a first derivative over the luminance difference δL_t which is continuous and positive or null over its range.

The tone mapping function f(Y_t, c_t) may be an exponential tone mapping function such that Y'_t = Y_t^(c_t). Any other known parametrized tone mapping function can be used instead. Several examples of definitions of the tone mapping parameter c_t will be given below in the specific context of an exponential tone mapping function Y'_t = Y_t^(c_t).

In a first variant, the tone mapping parameter c_t is defined as the following piecewise linear function of the luminance difference δL_t:

c_t = 1                          if δL_t ≤ 0
c_t = 1 + δL_t × (c_max - 1)     if δL_t > 0     (3)

where the parameter c_max defines the strongest allowed compression factor desired, which can for instance be set to a value higher than 1, e.g. c_max = 2. Higher values of c_max lead to a more aggressive compression of luminance. The tone mapping parameter function defining c_t in this variant is illustrated on figure 3 for different values of the luminance change δL_t.
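A minimal sketch of this first variant (the linear ramp reconstructs equation 3 from the description above and from figure 3, and is therefore an assumption):

```python
def tmo_parameter_linear(dL, c_max=2.0):
    """Piecewise linear tone mapping parameter c_t (first variant).

    No compression (c_t = 1) when the luminance decreases or is stable;
    c_t ramps linearly up to c_max as dL approaches 1.
    """
    if dL <= 0.0:
        return 1.0
    return 1.0 + dL * (c_max - 1.0)
```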

In a second variant, the tone mapping parameter c_t is defined as the following sigmoid function of the luminance difference δL_t:

c_t = 1 + a_max × (1 / (1 + e^(b_max × |δL_t|)) - 1/2)     if δL_t ≥ 0
c_t = 1 + a_min × (1 / (1 + e^(b_min × |δL_t|)) - 1/2)     if δL_t < 0     (4)

where a_max = 2 × (c_max - 1) and a_min = 2 × (c_min - 1) are parameters controlling the maximum and minimum allowed luminance compression/expansion. In this variant, a separate parameter is preferred for defining separately the maximum allowed compression and the maximum expansion of luminance. In this situation, the sigmoid function is not symmetric, but rather consists of two connected half-sigmoid functions, where the parameters b_min and b_max control the slope of each half-sigmoid function. A value of -15 for both of these parameters was found to give satisfactory results, but they could be set to different values for different VR HMDs or depending on the preferences of the user.

The tone mapping parameter function defining c_t in this variant is illustrated on figure 4 for different values of the luminance change δL_t.
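An illustrative sketch of this second variant; the exact composition of the two half-sigmoids is our reconstruction of equation 4, with the parameter values quoted above as defaults:

```python
import math

def tmo_parameter_sigmoid(dL, c_max=2.0, c_min=0.5, b_max=-15.0, b_min=-15.0):
    """Sigmoid tone mapping parameter c_t (second variant).

    Two connected half-sigmoids: compression up to c_max for dL > 0,
    expansion down to c_min for dL < 0; c_t = 1 at dL = 0.
    """
    a = 2.0 * (c_max - 1.0) if dL >= 0 else 2.0 * (c_min - 1.0)
    b = b_max if dL >= 0 else b_min
    s = 1.0 / (1.0 + math.exp(b * abs(dL)))  # 1/2 at dL = 0, ~1 at |dL| = 1
    return 1.0 + a * (s - 0.5)
```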

In both variants above, the tone mapping parameter c_t is actually a non-decreasing function of the luminance difference δL_t.

In a third variant, the tone mapping parameter c_t is defined as a weighted sum of a non-decreasing function of the luminance difference δL_t and of a luminance stabilization coefficient w_stab representative of the stabilization of luminance evaluations, as computed above, over a plurality k of consecutive previous views including the previous view v_{t-1}. The non-decreasing function of the luminance difference δL_t is for instance computed according to the first or second variant above (equations 3 and 4 respectively).

Preferably, this tone mapping parameter c_t is then computed according to:

c_t = (1 - w_stab) × c'_t + w_stab     (5)

where c'_t corresponds to c_t in equations 3 or 4, with the values of w_stab normalized in the interval [0, 1]. For instance, the luminance stabilization coefficient w_stab is computed according to:

w_stab = min(k / n_views, 1)

where n_views represents the number of consecutive views at which it is considered that luminance evaluations are fully stabilized (=> w_stab = 1) if they have not changed above a luminance difference threshold τ_δL; n_views can be set to a small value, e.g. 3; and where k is a count of consecutive views, including the previous view v_{t-1}, over which luminance evaluations have not changed above the luminance difference threshold τ_δL.

This count k is implemented in a manner known per se, for instance by incrementation when the change of luminance evaluations between two consecutive views is not above the luminance difference threshold τ_δL, and by a reset to 0 when the change of luminance evaluations between two consecutive views is above the luminance difference threshold τ_δL.
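A sketch of this third variant; the threshold value τ_δL = 0.05 is an illustrative assumption, the patent only requiring some threshold:

```python
def update_stabilization(k, dL, tau=0.05, n_views=3):
    """Update the stable-view count k and return (k, w_stab)."""
    k = k + 1 if abs(dL) <= tau else 0       # reset on a large luminance change
    return k, min(k / n_views, 1.0)          # w_stab = 1 once fully stabilized

def tmo_parameter_stabilized(c_prime, w_stab):
    """Equation (5): blend c'_t toward 1 (no tone mapping) as luminance stabilizes."""
    return (1.0 - w_stab) * c_prime + w_stab
```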

Advantageously, this third variant of computing the tone mapping parameter c_t yields a tone mapped current view v'_t which is less tone mapped (and possibly not tone mapped at all) when the luminance evaluation stabilizes over consecutive views. Advantageously, this third variant allows the compression or expansion applied to consecutive views to be progressively reduced during the stabilization of the luminance evaluation, until finally the original current view is kept (i.e. until the tone mapping function f(Y_t, c_t) applied in step S7 is the identity, with c_t = 1) when the luminance evaluation is fully stabilized. This may be desirable especially for content that has been color graded or prepared with a particular artistic intent. In such cases, tone mapping becomes a tradeoff between ensuring visual comfort and preserving the original artistic intent.

In a fourth variant, the tone mapping parameter c_t is defined as a non-decreasing function of the luminance difference δL_t weighted by a coefficient C_mcm that determines the amount of maximal compression desired during movements. The coefficient C_mcm can be defined by the manufacturer or by the user according to his preferences. The non-decreasing function of the luminance difference δL_t is for instance computed according to any of the first to third variants above (equations 3, 4 and 5 respectively).

Preferably, this tone mapping parameter c_t is then computed according to:

c_t = 1 + (c'_t - 1) × C_mcm × s_t / s_max

where c'_t corresponds to c_t in equations 3, 4 or 5, where s_t represents the distance travelled on the VR sphere between the current view v_t and the previous view v_{t-1}, where s_max represents a maximum distance travelled on the VR sphere between two successive views (s_max can be defined by the HMD device manufacturer), and where C_mcm > 0, preferably 0 < C_mcm ≤ 0.5.

This fourth variant of computing the tone mapping parameter c_t yields a tone mapped current view v'_t which is more tone mapped when the transition from the previous view v_{t-1} to the current view v_t results from a quick movement of the viewer's head than when this transition results from travelling across the immersive image by moving the viewer's head slowly. If the viewer moves fast, the intermediate views that pass while the viewer's head is in motion are not viewed in detail by the viewer and will be more tone mapped, because the tone mapping parameter c_t will be higher. In contrast, if the viewer moves slowly, the intermediate views that pass while the viewer's head is in motion are likely to be observed in more detail and will be less tone mapped, because the tone mapping parameter c_t will be lower. In the first case, since the intermediate views are not the focus of the viewer, a stronger tone mapping can be applied, better ensuring viewing comfort and avoiding rapid changes in luminance. In the second case, the tone mapping should remain more conservative to ensure that the artistic intent of each view is better preserved, even if visual comfort is somewhat sacrificed.

The person skilled in the art understands that the considerations regarding the movement speed directly translate to the distance between successive views. Indeed, the frame rate is conventionally fixed, so that these two parameters can be considered equivalent: a large distance between two consecutive views is equivalent to a fast movement, while a small distance between two consecutive views is equivalent to a slow movement.
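For illustration, a natural choice for the distance s_t (the text does not fix the metric, so this is an assumption) is the great-circle distance between the centres of two successive views:

```python
import math

def view_distance(phi1, lam1, phi2, lam2):
    """Angular great-circle distance (radians) between two view centres
    (latitude phi, longitude lam) on the unit VR sphere."""
    cos_d = (math.sin(phi1) * math.sin(phi2)
             + math.cos(phi1) * math.cos(phi2) * math.cos(lam1 - lam2))
    return math.acos(max(-1.0, min(1.0, cos_d)))  # clamp for rounding errors
```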

In a seventh step S7, the current view v_t is processed into a processed current view v'_t by applying to the luminance values Y_t of pixels of this current view v_t the tone mapping function f(Y_t, c_t) parametrized with the value of the tone mapping parameter c_t computed at the previous step S6; therefore, the luminance values Y'_t of pixels of this processed current view v'_t are such that Y'_t = f(Y_t, c_t).

Preferably, the tone mapping function f(Y_t, c_t) is defined as a non-decreasing function of the luminance Y_t and of the tone mapping parameter c_t. Preferably, this non-decreasing function is a monotonically increasing function, having then a first derivative over the luminance Y_t and over the tone mapping parameter c_t which is continuous and positive or null over its range.

Preferably, the variation of the tone mapping parameter c_t as a function of the luminance difference δL_t and the definition of the tone mapping function f(Y_t, c_t) are set according to the following requirements:

- If δL_t > 0, i.e. when the luminance evaluation L_t of the current view v_t has increased relative to the luminance evaluation L'_{t-1} of the processed previous view v'_{t-1}: the current view v_t is processed into a processed current view v'_t having a luminance evaluation L'_t inferior to the luminance evaluation L_t of said current view v_t;

- If δL_t ≤ 0, i.e. when the luminance evaluation L_t of the current view v_t has decreased or has not changed relative to the luminance evaluation L'_{t-1} of the processed previous view v'_{t-1}: the current view v_t is not changed and v'_t = v_t.

Preferably, the tone mapping function is exponential, according to:

Y'_t = Y_t^(c_t)     (6)
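As an illustration, this exponential tone mapping can be applied to a normalized luminance channel as follows (a sketch; the vectorized form and the clipping are ours):

```python
import numpy as np

def apply_tone_mapping(Y, c_t):
    """Exponential TMO Y' = Y**c_t on luminance normalized to [0, 1].

    For c_t > 1 the luminance is compressed (view darkened), for c_t < 1 it
    is expanded (view brightened), and c_t = 1 leaves the view unchanged.
    """
    return np.power(np.clip(Y, 0.0, 1.0), c_t)
```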

In the first variant of step S6 above, when the luminance decreases and generates a negative luminance difference (i.e. δL_t ≤ 0), no mapping of luminance is performed (i.e. c_t = 1). In the second variant of step S6 above, the luminance values of pixels of the current view are expanded (i.e. c_t < 1, down to c_min = 0.5) if the luminance evaluation has decreased relative to the previous view (i.e. δL_t ≤ 0). This second variant still prevents large changes in luminance between consecutive views but advantageously keeps the overall luminance levels of the content higher.

In an eighth step S8, the processed current view v'_t is sent in a manner known per se to the display subsystem in order to display it. Before sending it, when required by the display subsystem, the colors of this processed current view v'_t are converted back from the Yuv color space into the RGB color space of the HMD. This color space conversion is the inverse of the color space conversion performed at step S3 above.

Then, each time a view needs to be refreshed on the display subsystem, the above steps S1 to S8 or S2 to S8 are implemented. A view notably needs to be refreshed when a change of orientation of the viewer's head is detected by the orientation sensors and/or when a new immersive image is provided to the image processing unit.
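Putting steps S3 to S7 together, the following self-contained sketch illustrates one refresh iteration; the function names, the Rec.709 weights, the first-variant parameter and the chromaticity-preserving way of applying the luminance tone mapping to RGB are our assumptions, not prescribed by the patent:

```python
import numpy as np

REC709 = np.array([0.2126, 0.7152, 0.0722])

def luminance_eval(rgb, pct=95):
    """S3/S4: percentile-robust average luminance of an RGB view in [0, 1]."""
    Y = rgb @ REC709
    return float(Y[Y <= np.percentile(Y, pct)].mean())

def process_view(view_rgb, L_prev, c_max=2.0):
    """S3-S7 for one frame; returns the processed view and its evaluation,
    which serves as L'_{t-1} for the next iteration (step S4)."""
    L = luminance_eval(view_rgb)                       # S3
    dL = L - L_prev                                    # S5
    c = 1.0 if dL <= 0 else 1.0 + dL * (c_max - 1.0)   # S6, first variant
    Y = view_rgb @ REC709
    Yp = np.power(Y, c)                                # S7: Y' = Y**c
    ratio = (Yp / np.maximum(Y, 1e-6))[..., None]      # keep chromaticity
    out = np.clip(view_rgb * ratio, 0.0, 1.0)
    return out, luminance_eval(out)
```

Each call returns the luminance evaluation of the processed view, so that step S5 of the next iteration compares the current view against the processed, rather than the original, previous view.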

According to the above method, along the viewing path of the viewer in a VR scene, the luminance of each current view is tone mapped depending on the luminance of the previous view(s) and optionally on the movement of the viewer's head within the VR scene. Each view displayed to the viewer is then tone mapped at a level which is a non-decreasing function of a luminance difference between this current view and a previous one. Thanks to this processing of consecutive views, blinding and eye strain of the viewer's eyes are avoided without losing the "wow" effect that can be obtained with an HDR HMD.

Although some embodiments of the present invention have been illustrated in the accompanying Drawings and described in the foregoing Detailed Description, it should be understood that the present invention is not limited to the disclosed embodiments, but is capable of numerous rearrangements, modifications and substitutions without departing from the invention as set forth and defined by the following claims.