

Title:
ENERGY-SAVING MULTI-ZONE DISPLAY
Document Type and Number:
WIPO Patent Application WO/2022/182887
Kind Code:
A1
Abstract:
A display configured to display video content to a plurality of viewing zones in proximity to the display includes at least one hardware processor configured to determine, using information received from at least one sensor, user presence in the viewing zones. Luminance produced for presentation to a first set of the viewing zones that are determined to have no user presence is reduced, and luminance corresponding to the displayed video content for presentation to a second set of the viewing zones that are determined to have user presence is maintained.

Inventors:
REDMANN WILLIAM (US)
STEIN ALAN (US)
REINHARD ERIK (FR)
Application Number:
PCT/US2022/017722
Publication Date:
September 01, 2022
Filing Date:
February 24, 2022
Assignee:
INTERDIGITAL PATENT HOLDINGS INC (US)
International Classes:
H04N13/302; G02B30/27; G02B30/33; H04N13/305; H04N13/32; H04N13/366
Domestic Patent References:
WO 2011/001372 A1 (2011-01-06)
Foreign References:
US 2011/0310233 A1 (2011-12-22)
EP 2 797 328 A1 (2014-10-29)
US 2014/0375791 A1 (2014-12-25)
US 10,854,171 B2 (2020-12-01)
Other References:
AKIRA KUBOTA ET AL.: "Multiview Imaging and 3DTV", IEEE SIGNAL PROCESSING MAGAZINE, 2007
Attorney, Agent or Firm:
SPICER, Andrew, W. (US)
Claims:
CLAIMS

1. A display configured to display video content to a plurality of viewing zones in proximity to the display, the display including at least one hardware processor configured to: determine, using information received from at least one sensor, user presence in the viewing zones; and reduce luminance produced for presentation to a first set of the viewing zones that are determined to have no user presence and maintain luminance corresponding to the displayed video content for presentation to a second set of the viewing zones that are determined to have user presence.

2. The display of claim 1, further comprising a plurality of sets of pixels, each set of pixels configured to display corresponding image information of the video content and project the corresponding image information into a corresponding viewing zone.

3. The display of claim 2, wherein the corresponding image information is different between at least two of the sets of pixels.

4. The display of claim 2, wherein the corresponding image information for at least two of the sets of pixels is not the same.

5. The display of claim 1, wherein user presence in a viewing zone is determined by at least one of: a user being physically present in the viewing zone; a user’s head being present in the viewing zone; at least one eye of a user being present in the viewing zone; and at least one eye of a user watching the screen.

6. The display of claim 1, wherein the luminance is reduced by dimming.

7. The display of claim 1, wherein the luminance is reduced immediately.

8. The display of claim 1, wherein the viewing zones are arranged vertically.

9. The display of claim 1, further comprising a lenticular array configured such that each of the viewing zones is served by a subset of pixels.

10. The display of claim 1, wherein the video content is displayed using OLED display technology.

11. The display of claim 1, wherein the video content is displayed using LED display technology, and wherein the luminance is reduced by dimming or extinguishing strips of backlight that correspond to the first set of viewing zones.

12. The display of claim 1, the at least one processor being further configured to: detect a change in user presence; determine that, as a result of the change in user presence, a subset of the second set of the viewing zones has no user presence; and delay the luminance reduction corresponding to the change in user presence for the subset of the viewing zones.

13. A method of controlling a display, the method comprising: determining, using information received from at least one sensor, user presence in a plurality of viewing zones in proximity to the display, the display configured to display video content to the viewing zones; and reducing luminance produced for presentation to a first set of the viewing zones that are determined to have no user presence and maintaining luminance corresponding to the displayed video content for presentation to a second set of the viewing zones that are determined to have user presence.

14. The method of claim 13, wherein the display comprises a plurality of sets of pixels, each set of pixels configured to display corresponding image information of the video content and project the corresponding image information into a corresponding viewing zone.

15. The method of claim 14, wherein the corresponding image information for each set of pixels is the same.

16. The method of claim 14, wherein the corresponding image information is different between at least two of the sets of pixels.

17. The method of claim 13, wherein user presence in a viewing zone is determined by at least one of: a user being physically present in the viewing zone; a user’s head being present in the viewing zone; at least one eye of a user being present in the viewing zone; and at least one eye of a user watching the screen.

18. The method of claim 13, wherein the luminance is reduced by dimming.

19. The method of claim 13, wherein the luminance is reduced immediately.

20. The method of claim 13, wherein the display uses LED display technology, and wherein the luminance is reduced by dimming or extinguishing strips of backlight that correspond to the first set of viewing zones.

21. The method of claim 13, further comprising: detecting a change in user presence; determining that, as a result of the change in user presence, a subset of the second set of the viewing zones has no user presence; and delaying the luminance reduction corresponding to the change in user presence for the subset of the viewing zones.

22. A non-transitory computer-readable storage medium storing instructions that, when executed, cause at least one hardware processor to perform the method of claim 13.

Description:
ENERGY-SAVING MULTI-ZONE DISPLAY

TECHNICAL FIELD

The present disclosure relates generally to displays and in particular to energy saving mechanisms for displays.

BACKGROUND

This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

There is a general tendency to ecological awareness that among other things leads to an on-going trend towards energy-saving in electronic devices such as displays.

Many displays provide an energy-saving feature that reduces the power consumption of the display. For example, conventional televisions typically include a feature that switches off the television after a given amount of time without user interaction. The switched off state (also called for example hibernation mode or sleep mode) can be the same as if the user had switched off the television using the remote control. An example of such a television is described in US Patent Application Publication No. 2014/0375791.

A problem with such a solution is that the television is switched off almost entirely, including sound, which means that a user who, for example, uses the television as a kind of radio while doing something - baking, painting or cooking - in another room, may be deprived of the sound.

It will thus be appreciated that there is a desire for a solution that addresses at least some of the shortcomings of power-saving displays. The present principles provide such a solution.

SUMMARY OF DISCLOSURE

In a first aspect, the present principles are directed to a display configured to display video content to a plurality of viewing zones in proximity to the display. The display includes at least one hardware processor configured to determine, using information received from at least one sensor, user presence in the viewing zones. Luminance produced for presentation to a first set of the viewing zones that are determined to have no user presence is reduced and luminance corresponding to the displayed video content for presentation to a second set of the viewing zones that are determined to have user presence is maintained.

In a second aspect, the present principles are directed to a method of controlling a display. The method includes determining, using information received from at least one sensor, user presence in a plurality of viewing zones in proximity to the display. The display is configured to display video content to the viewing zones. Luminance produced for presentation to a first set of the viewing zones that are determined to have no user presence is reduced and luminance corresponding to the displayed video content for presentation to a second set of the viewing zones that are determined to have user presence is maintained.

In a third aspect, the present principles are directed to a computer program product which is stored on a non-transitory computer readable medium that includes program code instructions executable by a processor for implementing the steps of a method according to any embodiment of the second aspect.

BRIEF DESCRIPTION OF DRAWINGS

Features of the present principles will now be described, by way of non-limiting example, with reference to the accompanying drawings, in which: Fig. 1 illustrates a display according to an example embodiment of the present principles;

Fig. 2 illustrates an example of a multi-zone display with viewing zones; and Fig. 3 illustrates a method according to an embodiment of the present principles.

DESCRIPTION OF EMBODIMENTS

Fig. 1 illustrates a display 100 according to an example embodiment of the present principles. The display 100 includes typical display components such as at least one hardware processor 110, memory 120, a communications interface 130, a user interface 140, a screen 150 and at least one sensor 160.

The at least one hardware processor 110 is configured to control the display 100, which includes executing program code instructions to perform a method according to the present principles.

The memory 120, which can be at least partly non-transitory, is configured to store the program code instructions to be executed by the at least one processor 110, parameters, image data, intermediate results and so on.

The communication interface 130 is configured for communication with external devices, for example to receive content and other information for display on the screen 150. The communication interface can implement any suitable technology, wired or wireless or a combination of the two.

The user interface 140 is configured to receive signals from a user interface unit, such as a conventional remote control or a touch interface implemented in the screen 150.

The screen 150 is configured to display images provided by the hardware processor 110. The display 100 is a “multi-zone display”, which means that the display in general, and the screen 150 in particular, implements a directional display technology that presents an image only to a corresponding viewing zone of a plurality of viewing zones addressable by the screen 150, where the viewing zones are located in front of the screen (see Fig. 2). An image displayed to a viewing zone is typically only directly viewable by a user in that viewing zone and not by users in other viewing zones.

A multi-zone display screen 150 can be of static configuration, where an image is selectably directed to a viewing zone on the basis of how it is delivered to the screen 150 from the hardware processor 110. For example, a hardware processor might deliver an image such that it is distributed over a number of columns of pixels of the screen 150, where each of those columns aligns with an associated lenticle so as to present those pixels, and thereby the image, to a particular viewing zone and not to other viewing zones addressable by the screen 150. Thus, when used with a plurality of images being directed to corresponding viewing zones simultaneously, this would be a static configuration that is spatially multiplexed.
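
The static, spatially multiplexed case can be pictured as column interleaving. Below is a minimal illustrative sketch, not part of the described display, assuming a hypothetical layout in which viewing zone k is served by every n-th pixel column at offset k; the NumPy-based helper is illustrative only.

```python
import numpy as np

def interleave_zone_images(zone_images):
    """Interleave per-zone images column-wise for a static lenticular screen.

    Assumes (hypothetically) that zone k is served by every n-th pixel column
    with offset k, where n is the number of viewing zones. zone_images is a
    list of n arrays of identical shape (height, width, channels).
    """
    n = len(zone_images)
    h, w, c = zone_images[0].shape
    panel = np.zeros((h, w * n, c), dtype=zone_images[0].dtype)
    for k, img in enumerate(zone_images):
        # Columns k, k+n, k+2n, ... sit under the same lenticle offset
        # and are therefore presented toward viewing zone k.
        panel[:, k::n, :] = img
    return panel
```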

Alternatively, a multi-zone display screen 150 can be of dynamic configuration, where an image is selectably directed to a zone on the basis of a directional signal (not shown) provided to the screen 150. Thus, when used with a plurality of images being directed to different viewing zones, a dynamic configuration would be spatially or temporally multiplexed, i.e., different sets of screen pixels might each correspond to only one image (spatially multiplexed), or screen pixels could be rapidly switched in synchronism with the directional signal to correspond to different images (temporally multiplexed).

Different content items can be displayed in different directions and typically cannot be viewed at the same time by a single eye; the multi-zone display is thus different from, for example, Picture-in-Picture technology, whose aim is to enable a user to view more than one simultaneously displayed content item.

It will be appreciated that multi-zone displays can be based on conventional multi-view (i.e., multi-user) displays, which are typically constructed by placing a lenticular array in front of a high-resolution screen, so that certain pixels project in certain directions so as to be visible only from the corresponding zone, in a static configuration. One typical use of conventional multi-view displays is to provide a 3D experience to users (i.e., viewers, content consumers) by projecting one version of the content to one eye and a complementary version of the content to the other eye; see e.g., Akira Kubota et al., Multiview Imaging and 3DTV, IEEE Signal Processing Magazine, 2007. Another use of conventional multi-view displays, described in US Patent No. 10,854,171, is to display different content to different users in front of the display. As the technology of multi-view displays is well known in the art, it will not be described in detail.

The at least one sensor 160, e.g., a camera, is configured to detect user presence in front of the screen 150, as will be further described. In an embodiment, user presence is conditioned by presence of a user’s head, face or eyes. In an embodiment, user presence is further conditioned by detection of a user activity, for example if the user is watching the screen 150 (user presence), which can be determined by gaze detection, or if the user is asleep, looking away (e.g., for a given period of time), or occupied with tasks such as reading or surfing on a tablet (user absence). The at least one sensor 160 can be included in or separate from the display 100.

Well-known computer vision techniques can be used by the at least one sensor 160 and/or the at least one hardware processor 110 to detect, recognise and identify users. It is noted that in order to reduce the energy requirements, the rate at which the user detection is performed may be much lower than the frame rate of the screen 150.
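
As one way to picture running user detection at a lower rate than the screen's frame rate, here is a small illustrative sketch; the detector interface and the period of 30 frames are assumptions for illustration, not details from the disclosure.

```python
class PresenceSampler:
    """Run (potentially expensive) user detection less often than the video frame rate."""

    def __init__(self, detector, detection_period_frames=30):
        self.detector = detector            # hypothetical callable returning occupied zones
        self.period = detection_period_frames
        self.frame_count = 0
        self.occupied_zones = set()

    def on_frame(self, sensor_frame):
        # Only re-run detection every `period` frames; reuse the last
        # result in between to keep the energy cost of sensing low.
        if self.frame_count % self.period == 0:
            self.occupied_zones = set(self.detector(sensor_frame))
        self.frame_count += 1
        return self.occupied_zones
```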

A non-transitory storage medium 170 stores program code instructions to be executed by for example the at least one processor 110 to perform at least one embodiment of a method of the present principles.

The at least one hardware processor 110 is configured to reduce the energy consumption of the display 100 by selectively reducing the luminance of certain pixels of the screen 150 corresponding to a specific zone or zones. In other words, the at least one hardware processor 110 can cause a reduction in luminance by providing instructions to this end to relevant components, which can depend on, for example, the display technology. The luminance of other pixels can be maintained, i.e., not reduced. The reduction can include at least one of dimming and switching off altogether. The reduction can be different for different pixels. Broadly speaking, the luminance of pixels that are not seen and/or cannot be seen by a user can be reduced. The reduction can be gradual or at least essentially immediate.
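
A minimal sketch of how a per-zone luminance gain might be computed, covering both immediate and gradual reduction; the function and its parameters (dim level, step size) are hypothetical and not taken from the disclosure.

```python
def zone_gains(num_zones, occupied, dim_level=0.0, gradual_step=None, previous=None):
    """Return a luminance gain in [0, 1] for each viewing zone.

    Occupied zones keep full luminance (gain 1.0); unoccupied zones are driven
    toward `dim_level` (0.0 = switched off), either immediately or in steps of
    `gradual_step` per call when a previous gain state is given.
    """
    gains = []
    for z in range(num_zones):
        target = 1.0 if z in occupied else dim_level
        if gradual_step is not None and previous is not None:
            prev = previous[z]
            # Move toward the target by at most `gradual_step` per update.
            delta = max(-gradual_step, min(gradual_step, target - prev))
            gains.append(prev + delta)
        else:
            gains.append(target)
    return gains
```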

To this end, the viewing area in front of the display 100 can be partitioned into viewing zones that radiate from the display 100. Fig. 2 illustrates an example of a multi-zone display, in this case a television 210, with six viewing zones, Zone 1 - Zone 6. In the example of Fig. 2, a user 220 is located in Zone 3 and is thus able to view content projected into this zone (Zone 3), but not, at least not directly, content projected into the other zones. It should be understood that there may be any number of zones or partitions with respect to the display 210 and that the zones need not be evenly sized or distributed. The user 220 being located in Zone 3 is by way of example only and should not be considered limiting.

In this example embodiment, the lenticular array is configured such that each viewing zone is served by a subset of the pixels of the screen 150. When a user is present in a particular viewing zone, the corresponding pixels are used to display an image, whereas the luminance is reduced for the pixels corresponding to viewing zones that, for example, are empty or in which a would-be viewer is looking away from the screen 150. In the example illustrated in Fig. 2, the pixels serving Zones 1, 2 and 4-6 could all be dimmed or switched off (i.e., extinguished), the latter yielding an average energy reduction of the display panel of more than 80% (with five of the six zones extinguished, roughly 5/6, or about 83%, of the panel's emission is eliminated).

It is noted that a single viewer may be present in more than one viewing zone, multiple users may be present within a single zone, and a plurality of users (not shown in Fig. 2) may be spread out over a plurality of viewing zones. In this case, the pixels of a plurality of viewing zones (consecutive or not) can be activated, while the viewing zones without user presence can be dimmed or switched off.

In an embodiment, the screen 150 has lenticles that run vertically, leading to horizontally arrayed viewing zones varying in azimuth relative to the screen 150. The subset of pixels addressing a particular viewing zone then consists of vertical runs of pixels at a particular horizontal offset relative to the corresponding lenticle centerline, so that the left and right eyes of an upright viewer can occupy different zones and thereby perceive a stereoscopic image. Such an arrangement of viewing zones is suitable for the present principles; it is noted that the viewing zones can be larger than would otherwise be useful for viewing 3D.

In an embodiment, not suited for viewing 3D, a multi-zone display is configured with lenticles running horizontally, such that the viewing zones are arrayed vertically, varying in angular elevation relative to the display screen. This allows selection of the viewing zones that direct light at the height of a viewer’s eyes at the corresponding viewing distance, while pixels in viewing zones not visible by the viewer can be dimmed or switched off.

The latter embodiment can be particularly efficient for a group of viewers, as the variation in eye height among a group of seated viewers is expected to span fewer vertically arrayed zones than the number of horizontally arrayed zones required for the same group, although this depends on the width and arrangement of the viewing zones. As an example, for individuals whose seated eye height falls within the same viewing zone, the number of vertical viewing zones that need to be active does not increase with additional viewers while, on the other hand, there may be a need to activate at least one additional horizontal viewing zone for each added user.
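
The counting argument above can be illustrated with a small, assumed example: viewers whose seated eye heights fall into the same vertical zone share one active zone, while the same viewers spread in azimuth may each require their own horizontal zone. The zone widths and viewer positions below are invented for illustration only.

```python
def active_zone_count(viewer_positions, zone_of):
    """Count how many viewing zones must stay lit for a set of viewers.

    `zone_of` maps a viewer position to a zone index; viewers that fall in
    the same zone share it, so the count grows only with distinct zones.
    """
    return len({zone_of(p) for p in viewer_positions})

# Vertically arrayed zones: zone assumed to depend on eye height (metres).
seated_eye_heights = [1.10, 1.14, 1.12, 1.16]            # assumed values
vertical = active_zone_count(seated_eye_heights, lambda h: int(h // 0.2))

# Horizontally arrayed zones: zone assumed to depend on azimuth (degrees).
azimuths = [-30.0, -10.0, 10.0, 30.0]                    # assumed values
horizontal = active_zone_count(azimuths, lambda a: int((a + 45.0) // 15.0))

print(vertical, horizontal)   # -> 1 4 for these assumed positions
```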

In an embodiment, rather than a lenticular array, the screen 150 includes an array of fly-eye lenses where the viewing zones are arrayed in both azimuth and elevation. How such viewing zones are arranged, e.g., rectangularly or hexagonally, is determined by the array pattern of the fly-eye lenses. Each viewing zone is still served by a subset of the pixels, where the pixels addressing a particular viewing zone through the lenses of the fly-eye array are determined by a predetermined radius and angle (or the equivalent horizontal and vertical offsets) relative to each lens’ optical centerline, which may vary across the display.

In an embodiment, the screen 150 comprises a plurality of projectors (not shown) aimed at the screen such that their projected image is either refracted by transparent lenticles or reflected by mirrored lenticles. In this embodiment, the reduction in luminance is accomplished by a reduction in the luminance of the corresponding projector.

The screen 150 can use Organic Light-Emitting Diode, OLED, display technology. In this case, each pixel that is dimmed or switched off reduces the energy consumption, which is directly correlated with the luminance of each pixel.

The screen 150 can be a Light-Emitting Diode, LED, display with a spatially varying (local dimming) backlight, which may be an edge light. The backlight may be configured in individually controllable strips such that each strip has an alignment with at least one lenticular lens and corresponds to a directional viewing zone in front of the television. In this case, a reduction of energy consumption may be achieved by dimming or extinguishing the strips of backlight that correspond to viewing zones without user presence, i.e., where no viewers are present and/or where present viewers are not facing the display.

When detecting user presence, the display 100 can determine the viewing zones occupied by one or more users. In an embodiment, a person detected by the at least one sensor 160 is determined to occupy one or more viewing zones by estimating the azimuth and range of the person relative to the center of the screen 150, wherein the determined azimuth and range correspond to at least one viewing zone. The determination can further consider for example the apparent position of the user’s eyes or the user’s apparent head width. In this way, the viewing zone or zones potentially directly viewable by each user can be determined. As already mentioned, the user activity can also be considered to determine user presence.

Put another way, user presence in a viewing zone can, for example, be determined based on any one or more of the following criteria (a minimal sketch combining these checks follows the list):

• a user is physically present in the viewing zone;

• a user’s head is present in the viewing zone;

• at least one of the user’s eyes is present in the viewing zone;

• the user’s eyes are watching the screen 150 (where user absence can be determined after a given time of the user looking away).
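
A minimal sketch combining the azimuth-based zone mapping with the presence criteria above; the detection fields (azimuth, eye visibility, time since the user last looked at the screen), the field of view and the gaze timeout are assumptions for illustration.

```python
def zone_from_azimuth(azimuth_deg, num_zones, field_of_view_deg=90.0):
    """Map an estimated azimuth (relative to the screen centre) to a zone index."""
    half = field_of_view_deg / 2.0
    clamped = max(-half, min(half, azimuth_deg))
    index = int((clamped + half) / field_of_view_deg * num_zones)
    return min(index, num_zones - 1)

def occupied_zones(detections, num_zones, gaze_timeout_s=10.0):
    """Return the set of zones with user presence.

    `detections` is an assumed list of dicts with keys such as 'azimuth_deg',
    'eyes_visible' and 'seconds_since_gaze_on_screen'; a zone counts as
    occupied if a user (or at least one eye) is in it and the user has not
    been looking away for longer than the timeout.
    """
    zones = set()
    for d in detections:
        looking = d.get("seconds_since_gaze_on_screen", 0.0) <= gaze_timeout_s
        present = d.get("eyes_visible", True)
        if present and looking:
            zones.add(zone_from_azimuth(d["azimuth_deg"], num_zones))
    return zones
```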

Having determined the viewing zones with user presence (i.e., active viewing zones), dimming or switching off of the viewing zones without user presence (i.e., empty or inactive viewing zones) can be performed using different mechanisms that can depend on the screen technology.

For an OLED display, energy is saved for each display pixel (or subpixel) that is dimmed or switched off. In an embodiment, each selected viewing zone corresponds to a subset of the display pixels and the pixels to be activated are the union of the subsets corresponding to the active viewing zones. In an embodiment, an image pixel is replicated or otherwise spread (e.g., with 1D blurring) along the screen axis perpendicular to the lenticles so as to correspond to multiple display pixels (or subpixels), where each of the multiple display pixels contributes to one viewing zone and zero or more adjacent viewing zones. This can ensure that each image pixel is able to contribute to each viewing zone. Of the corresponding multiple display pixels, only those contributing to active viewing zones are provided at normal display brightness while those corresponding to inactive viewing zones are dimmed or switched off.
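
One way to picture the OLED case is as the union of per-zone pixel-column subsets. The sketch below assumes the same hypothetical column-interleaved layout as earlier and is not a description of actual addressing electronics.

```python
def columns_for_zone(zone, width_per_zone, num_zones):
    """Columns serving `zone` under the assumed static lenticular layout:
    every num_zones-th column starting at offset `zone`."""
    return set(range(zone, width_per_zone * num_zones, num_zones))

def active_columns(active_zones, width_per_zone, num_zones):
    """Union of the column subsets of all active zones; the remaining
    columns can be dimmed or switched off on an OLED panel."""
    cols = set()
    for z in active_zones:
        cols |= columns_for_zone(z, width_per_zone, num_zones)
    return cols
```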

An LED display could rely on a similar determination of viewing zones, but rather than particular display pixels being selected for activation, corresponding backlight segments would be selected instead, where a backlight segment would primarily illuminate display pixel cells of a viewing zone, plus zero or more contiguous, adjacent viewing zones, to collectively represent a proper subset of all display pixels. Thus, lighting a particular backlight segment illuminates primarily the display pixels of a corresponding viewing zone. In an embodiment, such backlight segments illuminate specific columns of pixel cells, where those specific columns are presented by the lenticle or lenticles to a corresponding viewing zone.
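
A short sketch of the corresponding backlight-segment selection, assuming a hypothetical one-to-one mapping from backlight strips to viewing zones.

```python
def backlight_levels(num_strips, strip_to_zone, active_zones, dim_level=0.0):
    """Per-strip backlight level: full for strips whose zone is active,
    `dim_level` otherwise. `strip_to_zone` is an assumed mapping."""
    return [1.0 if strip_to_zone[s] in active_zones else dim_level
            for s in range(num_strips)]
```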

There is the possibility of a crossover between these two 'OLED' and 'LED' embodiments. The 'OLED' display-pixel-wise determination could be passed to an LED display, which then manages the backlight based on the display pixels to be displayed: where display pixels are dark, the backlight is algorithmically permitted to dim. Similarly, as in the 'LED' embodiment where particular columns of LED pixel cells are illuminated when a viewing zone is active, particular columns of OLED pixels might be enabled or not as a function of the OLED addressing electronics, rather than viewing zone selection being the basis for manipulating the display pixels individually.

The determination of user presence in a viewing zone can be filtered temporally. The detection of a user appearing in a viewing zone can be quick so that the user is minimally affected, if at all, by the previously dimmed or dark pixels of that viewing zone. However, when user absence is detected - for example when the user has left a viewing zone, is looking away or is sleeping - leaving the viewing zone without any present users, the dimming or extinguishing of the pixels in that viewing zone may be delayed, for example by a given, possibly pre-set, time, in case the absent user were to return immediately to the viewing zone (e.g., having just bent down to pick up a dropped object or having quickly consulted a TV Guide). Not only can this help minimize the potential impact on the viewer experience should there be latency in the sensor system, but it can also reduce sudden changes in the overall luminance of the room when the display contributes a substantial portion of the illumination in the room. A gradual reduction can also help minimize the impact on viewer experience. In an embodiment, the reduction further takes into account a general light level around or in front of the display.
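
The temporal filtering described above (fast re-light, delayed dim) might be sketched as follows; the hold time and the class interface are assumptions for illustration.

```python
import time

class ZoneDimmer:
    """Asymmetric temporal filter: a zone is re-lit as soon as presence is
    detected, but dimming is delayed by `hold_s` after presence is lost."""

    def __init__(self, num_zones, hold_s=5.0):
        self.hold_s = hold_s
        self.last_seen = {z: None for z in range(num_zones)}

    def lit_zones(self, occupied, now=None):
        now = time.monotonic() if now is None else now
        lit = set()
        for z in self.last_seen:
            if z in occupied:
                self.last_seen[z] = now
                lit.add(z)
            elif self.last_seen[z] is not None and now - self.last_seen[z] < self.hold_s:
                lit.add(z)          # keep lit during the hold period
        return lit
```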

While the screen of a multi-zone display, which as mentioned can be used for 3D content, has been described, the present principles mainly apply to normal 2D video or images to be displayed. As a consequence, it can be assumed that the pixel resolution associated with a single zone is equal to the resolution of the received video. The horizontal pixel resolution of the display will be n times larger, with n being the number of zones. The vertical pixel resolution of the received video may be assumed to be equal to the vertical pixel resolution of the display. In this case, the additional processing that would take place is to replicate pixels of the incoming video to serve the viewing zones with user presence, while pixel replication is not needed for unoccupied regions. For other resolutions of the received video, up- or down-sampling may be performed on the incoming video such that the resolution becomes that of the pixel resolution associated with a single zone. Then, pixels can be replicated as described.
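
A minimal sketch of replicating a 2D frame (already resampled to single-zone resolution) into only the occupied zones, again assuming the hypothetical column-interleaved layout; unoccupied zones remain black.

```python
import numpy as np

def render_2d_to_panel(video_frame, occupied, num_zones):
    """Replicate a 2D frame into the columns of each occupied zone;
    columns of the other zones stay black (zero luminance)."""
    h, w, c = video_frame.shape
    panel = np.zeros((h, w * num_zones, c), dtype=video_frame.dtype)
    for z in occupied:
        panel[:, z::num_zones, :] = video_frame
    return panel
```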

In embodiments in which zones are arrayed vertically, the vertical resolution of the received video can be assumed to be n times lower than the vertical display resolution to correspond with the resolution associated with a single zone. This configuration can also be achieved through resampling, similar to the case already described above. Here, pixel replication can be performed to match the eye-height of the users, while the pixels in the remaining viewing zones will remain black.

In displays with both horizontal and vertical lenticular arrays, pixel replication can be achieved analogously.

In embodiments in which multiview video is received by the display, pixel replication is not required. In this case, at least some additional processing associated with unoccupied zones can be avoided. This pertains, for example, to the decoding of the video. In older codecs such regions of video are slices, in Versatile Video Coding (VVC) they are subpicture regions, and in scalable video they are spatial or temporal enhancement layers. Future codecs could be configured to facilitate selective decoding of pixel regions.

Fig. 3 illustrates a method in a display according to an embodiment of the present principles.

In step S30, the display (e.g., display 100 in Fig. 1) renders content.

In step S32, the display determines user presence in viewing zones associated with the display.

In step S34, the display lowers the luminance (i.e., dims or switches off) of pixels corresponding to viewing zones without determined user presence.

The method can then return to step S30.
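
Steps S30-S34 can be pictured as a simple control loop; all callables in the sketch below (render, sense, detect_zones, set_zone_gains) are assumed interfaces for illustration, not elements of the described display.

```python
def run_display_loop(render, sense, detect_zones, set_zone_gains, num_zones):
    """Loop corresponding to steps S30-S34: render content, determine user
    presence, lower luminance for zones without user presence, repeat."""
    while True:
        render()                                        # S30: render content
        occupied = detect_zones(sense(), num_zones)     # S32: determine user presence
        gains = [1.0 if z in occupied else 0.0 for z in range(num_zones)]
        set_zone_gains(gains)                           # S34: dim/switch off empty zones
```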

It will thus be appreciated that the present principles can be used to provide a display that can save energy by reducing the luminance in viewing zones that are not seen by any user.

It should be understood that the elements shown in the figures may be implemented in various forms of hardware, software or combinations thereof. Preferably, these elements are implemented in a combination of hardware and software on one or more appropriately programmed general-purpose devices, which may include a processor, memory and input/output interfaces.

The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its scope.

All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions.

Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure. Similarly, it will be appreciated that any flow charts, flow diagrams, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.

The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, read only memory (ROM) for storing software, random access memory (RAM), and non-volatile storage.

Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.

In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.