

Title:
HEAD-TRACKING MULTIVIEW DISPLAY AND METHOD
Document Type and Number:
WIPO Patent Application WO/2023/219638
Kind Code:
A1
Abstract:
A head-tracking multiview display includes a multibeam backlight that provides a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image. A processor receives a plurality of provisional views of the multiview image. The provisional views correspond to the different view directions. The processor receives information about a tracked position of a user. The processor shifts the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions. The shifted views are shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image. A light valve array modulates the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image.

Inventors:
FATTAL DAVID A (US)
SAMRA SUKENDEEP S (US)
Application Number:
PCT/US2022/043171
Publication Date:
November 16, 2023
Filing Date:
September 11, 2022
Assignee:
LEIA INC (US)
International Classes:
G02B27/00; G02B30/20; H04N13/302; H04N13/366
Domestic Patent References:
WO2021150461A12021-07-29
Foreign References:
US20210311244A12021-10-07
US20160295200A12016-10-06
US20150077526A12015-03-19
US10855965B12020-12-01
Attorney, Agent or Firm:
JOHNSON, J. Michael (US)
Claims:

CLAIMS

What is claimed is:

1. A head-tracking multiview display, comprising: a multibeam backlight configured to provide a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image; a processor configured to: receive a plurality of provisional views of the multiview image, the plurality of provisional views corresponding to the different view directions; receive information about a tracked position of a user; and shift the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions, the shifted views being shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image; and a light valve array configured to modulate light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image.

2. The head-tracking multiview display of claim 1, wherein the processor is further configured to shift the provisional views with respect to the view directions to form the shifted views by combining two adjacent provisional views.

3. The head-tracking multiview display of claim 2, wherein the processor is further configured to form the shifted views by non-linearly combining the two adjacent provisional views.

4. The head-tracking multiview display of claim 3, wherein the processor is further configured to form the shifted views by: converting pixel values of the two adjacent provisional views from gamma space to linear space to form linear space pixel values; summing the linear space pixel values linearly to form a linear sum value; and converting the linear sum value from linear space to gamma space.
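The non-linear combination recited in claim 4 can be sketched in code. This is an illustrative sketch only: the normalized [0, 1] pixel values, the gamma exponent of 2.2, and the blend weight are assumptions not specified by the claim.

```python
def blend_views_gamma(pixel_a, pixel_b, weight, gamma=2.2):
    """Non-linearly combine pixels of two adjacent provisional views.

    Assumes pixel values normalized to [0, 1]; gamma=2.2 is an
    illustrative choice, not a value specified by the claims.
    """
    # Convert pixel values from gamma space to linear space
    linear_a = pixel_a ** gamma
    linear_b = pixel_b ** gamma
    # Sum the linear space pixel values linearly (weighted blend)
    linear_sum = (1.0 - weight) * linear_a + weight * linear_b
    # Convert the linear sum value from linear space back to gamma space
    return linear_sum ** (1.0 / gamma)
```

Summing in linear space avoids the darkening that results from averaging gamma-encoded values directly: blending equal parts black and white this way yields roughly 0.73 in gamma space rather than 0.5.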

5. The head-tracking multiview display of claim 1, wherein the shift value corresponds to a non-integer number of views of the multiview image.

6. The head-tracking multiview display of claim 1, wherein the shift value is selected such that movement of the user allows the user to view views in a progression that matches a progression of the provisional views.

7. The head-tracking multiview display of claim 6, wherein the shift value is selected such that movement of the user provides the user a sensation of parallax as the views progress.

8. The head-tracking multiview display of claim 1, wherein the shift value is selected such that the shifted views follow movement of the user and allow the user to view only a single view or a single combination of views as the user moves.

9. The head-tracking multiview display of claim 8, wherein the shift value is selected to provide to the user a first three-dimensional view of the multiview image to a left eye of the user and a second three-dimensional view of the multiview image to a right eye of the user, the first and second three-dimensional views being invariant as the user moves.

10. The head-tracking multiview display of claim 1, further comprising a head tracker configured to provide the tracked position of the user, the head tracker comprising a camera configured to capture an image of the user, the head tracker further comprising an image processor configured to determine a position of the user within the captured image to provide the tracked position.

11. A method of operation for a head-tracking multiview display, the method comprising: providing, with a multibeam backlight, a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image;

receiving, with a processor, a plurality of provisional views of the multiview image, the plurality of provisional views corresponding to the different view directions; receiving, with the processor, information about a tracked position of a user; shifting, with the processor, the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions, the shifted views being shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image; and modulating, with a light valve array, light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image.

12. The method of claim 11, further comprising: combining, with the processor, two adjacent provisional views to form the shifted views.

13. The method of claim 11, further comprising: combining, with the processor, two adjacent provisional views non-linearly to form the shifted views.

14. The method of claim 13, wherein the processor forms the shifted views by: converting pixel values of the two adjacent provisional views from gamma space to linear space to form linear space pixel values; summing the linear space pixel values linearly to form a linear sum value; and converting the linear sum value from linear space to gamma space.

15. The method of claim 11, wherein the shift value corresponds to a non-integer number of views of the multiview image.

16. The method of claim 11, wherein the processor selects the shift value such that movement of the user allows the user to view views in a progression that matches a progression of the provisional views.

17. The method of claim 16, wherein the processor selects the shift value such that movement of the user provides the user a sensation of parallax as the views progress.


18. The method of claim 11, wherein the processor selects the shift value such that the shifted views follow movement of the user and allow the user to view only a single view or a single combination of views as the user moves.

19. The method of claim 18, wherein the processor selects the shift value to provide to the user a first three-dimensional view of the multiview image to a left eye of the user and a second three-dimensional view of the multiview image to a right eye of the user, the first and second three-dimensional views being invariant as the user moves.

20. A head-tracking multiview display, comprising: a head tracker configured to provide a tracked position of a user with respect to the head-tracking multiview display, the head tracker comprising a camera configured to capture an image of the user, the head tracker further comprising an image processor configured to determine a position of the user within the captured image to provide the tracked position; a multibeam backlight configured to provide a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image; a processor configured to: receive a plurality of provisional views of the multiview image, the plurality of provisional views corresponding to the different view directions; and shift the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions, the shifted views being shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image; and a light valve array configured to modulate light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image.

Description:
HEAD-TRACKING MULTIVIEW DISPLAY AND METHOD

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims priority to U.S. Provisional Patent Application Serial No. 63/340,361, filed May 10, 2022, the entirety of which is incorporated by reference herein.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] N/A

BACKGROUND

[0003] Electronic displays are a nearly ubiquitous medium for communicating information to users of a wide variety of devices and products. Most commonly employed electronic displays include the cathode ray tube (CRT), plasma display panels (PDP), liquid crystal displays (LCD), electroluminescent displays (EL), organic light emitting diode (OLED) and active-matrix OLED (AMOLED) displays, electrophoretic displays (EP) and various displays that employ electromechanical or electrofluidic light modulation (e.g., digital micromirror devices, electrowetting displays, etc.). Generally, electronic displays may be categorized as either active displays (i.e., displays that emit light) or passive displays (i.e., displays that modulate light provided by another source). Examples of active displays include CRTs, PDPs and OLEDs/AMOLEDs. Displays that are typically classified as passive when considering emitted light are LCDs and EP displays. Passive displays, while often exhibiting attractive performance characteristics including, but not limited to, inherently low power consumption, may find somewhat limited use in many practical applications given the lack of an ability to emit light.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Various features of examples and embodiments in accordance with the principles described herein may be more readily understood with reference to the following detailed description taken in conjunction with the accompanying drawings, where like reference numerals designate like structural elements, and in which:

[0005] Figure 1 illustrates a perspective view of a multiview display in an example, according to an embodiment consistent with the principles described herein.

[0006] Figure 2 illustrates a graphical representation of the angular components of a light beam having a particular principal angular direction corresponding to a view direction of a multiview display in an example, according to an embodiment consistent with the principles described herein.

[0007] Figure 3 illustrates a cross sectional view of a head-tracking multiview display in an example, according to an embodiment consistent with the principles described herein.

[0008] Figure 4 illustrates a plan view of an example of a shift-invariant head-tracking multiview display in which the shift value does not vary over a field of view of the multiview image.

[0009] Figure 5A illustrates a plan view of a head-tracking multiview display used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a first position.

[0010] Figure 5B illustrates a plan view of the head-tracking multiview display used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a second position.

[0011] Figure 5C illustrates a plan view of the head-tracking multiview display used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a third position.

[0012] Figure 6A illustrates a plan view of a head-tracking multiview display used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a first position.

[0013] Figure 6B illustrates a plan view of the head-tracking multiview display used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a second position.


[0014] Figure 6C illustrates a plan view of the head-tracking multiview display used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user is located at a third position.

[0015] Figure 7 illustrates a block diagram of a head-tracking multiview display system in an example, according to an embodiment consistent with the principles described herein.

[0016] Figure 8 illustrates a flow chart of a method of operation for a head-tracking multiview display in an example, according to an embodiment consistent with the principles described herein.

[0017] Certain examples and embodiments have other features that are in addition to or in lieu of the features illustrated in the above-referenced figures. These and other features are detailed below with reference to the above-referenced figures.

DETAILED DESCRIPTION

[0018] Examples and embodiments in accordance with the principles described herein provide a multiview or three-dimensional (3D) image display that employs information about a user location over time, sometimes referred to as ‘head-tracking’ or alternatively as ‘head-tracked’. Embodiments consistent with the principles described herein may employ a multiview display to provide different sets of views of a scene represented by a multiview image depending on a location of a user. A head-tracking multiview display may receive provisional views of the multiview image corresponding to different view directions. The head-tracking (or head-tracked) multiview display may receive information about a tracked position of a user, such as from a head-tracking sensor. The head-tracking (or head-tracked) multiview display may shift the provisional views with respect to the view directions to form shifted views that correspond to the different view directions. The shifted views may be shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image.
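The view shifting described above can be sketched as follows: a fractional shift value selects, for each view direction, a pair of adjacent provisional views and a blend fraction. The function name, signature, and wrap-around indexing are illustrative assumptions, not the implementation described herein.

```python
import math

def shifted_view_sources(view_index, shift_value, num_views):
    """Return the indices of the two adjacent provisional views (and the
    blend fraction between them) that form the shifted view assigned to
    view direction `view_index`.

    The modular wrap-around of view indices is an assumption made for
    illustration; a real display might instead clamp at the edge views.
    """
    shifted = view_index + shift_value
    lower = math.floor(shifted) % num_views
    upper = (lower + 1) % num_views
    fraction = shifted - math.floor(shifted)
    return lower, upper, fraction
```

With a shift value of 0.5 in an eight-view display, for example, the view direction numbered 2 would present a half-and-half combination of provisional views 2 and 3.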

[0019] Using a shift value that varies over a field of view of the multiview image may be beneficial. For example, in a display that uses a shift value that is invariant over a field of view of the multiview image, a user may perceive different views of the multiview image at different locations on the display, which may be disconcerting or jarring to the user. As a specific example, a user may concurrently perceive one portion of the display as showing a first view, and another portion of the display as showing a second view. In contrast, in a display in which the shift value varies over a field of view of the multiview image as detailed below, the user may perceive a same view over a full field of view of the multiview image. Presenting the multiview image such that the user perceives the same view over a field of view of the multiview image (e.g., avoiding a situation in which a user perceives one portion of the display showing a first view and another portion of the display showing a second view) may provide a more realistic viewing experience for the user. In a specific example, one eye of the user may perceive a first view over a full field of view of the multiview image, and the other eye of the user may perceive a second view over a full field of view of the multiview image, such that the user may experience a stereoscopic effect.
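A shift value that varies over the field of view can be sketched as a function of display position: each location on the screen computes its shift from the angle subtended between that location and the tracked user. This is a minimal geometric sketch; the pinhole geometry, coordinate convention, and linear angle-to-views scaling are all assumptions, not the method detailed below.

```python
import math

def per_pixel_shift(pixel_x, user_x, user_z, views_per_radian):
    """Shift value at horizontal display position `pixel_x` for a user
    tracked at lateral offset `user_x` and distance `user_z` (same units).

    The shift is proportional to the angle from the display location to
    the user, so it varies across the field of view; `views_per_radian`
    is an assumed scaling constant.
    """
    angle = math.atan2(user_x - pixel_x, user_z)
    return angle * views_per_radian
```

Because the angle to the user differs between the left and right edges of the screen, the two edges receive different shift values, which is what allows every part of the screen to direct the same view toward the user's eye.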

[0020] In various embodiments, a head tracking sensor or algorithm may provide information about a position or location of the user relative to the multiview display. That is, the user location may be determined or inferred by tracking a location of a user’s head or other anatomical feature. To facilitate discussion herein and without limitation, embodiments or components described herein may be referred to as including or using ‘head-tracking’ such as in multiview displays, systems, and methods that employ head tracking, for example.

[0021] Figure 1 illustrates a perspective view of a multiview display 10 in an example, according to an embodiment consistent with the principles described herein. As illustrated in Figure 1, the multiview display 10 comprises a screen 12 to display a multiview image to be viewed. The multiview display 10 provides different views 14 of the multiview image in different view directions 16 relative to the screen 12. The view directions 16 are illustrated as arrows extending from the screen 12 in various different principal angular directions. The different views 14 are illustrated as shaded polygonal boxes at the termination of the arrows (i.e., depicting the view directions 16). Only four views 14 and four view directions 16 are illustrated, all by way of example and not limitation. Note that while the different views 14 are illustrated in Figure 1 as being above the screen, the views 14 actually appear on or in a vicinity of the screen 12 when the multiview image is displayed on the multiview display 10. Depicting the views 14 above the screen 12 is only for simplicity of illustration and is meant to represent viewing the multiview display 10 from a respective one of the view directions 16 corresponding to a particular view 14.

[0022] A view direction or equivalently a light beam having a direction corresponding to a view direction of a multiview display generally has a principal angular direction given by angular components {θ, φ}, by definition herein. The angular component θ is referred to herein as the ‘elevation component’ or ‘elevation angle’ of the light beam. The angular component φ is referred to as the ‘azimuth component’ or ‘azimuth angle’ of the light beam. By definition, the elevation angle θ is an angle in a vertical plane (e.g., perpendicular to a plane of the multiview display screen) while the azimuth angle φ is an angle in a horizontal plane (e.g., parallel to the multiview display screen plane). Figure 2 illustrates a graphical representation of the angular components {θ, φ} of a light beam 20 having a particular principal angular direction corresponding to a view direction (e.g., view direction 16 in Figure 1) of a multiview display in an example, according to an embodiment consistent with the principles described herein. In addition, the light beam 20 is emitted or emanates from a particular point, by definition herein. That is, by definition, the light beam 20 has a central ray associated with a particular point of origin within the multiview display. Figure 2 also illustrates the light beam (or view direction) point of origin O.
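Under one common axis convention, the principal angular direction {θ, φ} maps to a unit vector as sketched below. The convention (z normal to the screen, y vertical) is an assumption for illustration; the definitions above fix only the elevation and azimuth planes.

```python
import math

def principal_direction(theta, phi):
    """Unit direction vector for a light beam with elevation angle
    `theta` and azimuth angle `phi`, both in radians.

    Assumes z is normal to the display screen and y is vertical; this
    axis convention is illustrative, not mandated by the definitions.
    """
    x = math.cos(theta) * math.sin(phi)  # horizontal, in the screen plane
    y = math.sin(theta)                  # vertical (elevation)
    z = math.cos(theta) * math.cos(phi)  # out of the screen
    return (x, y, z)
```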

[0023] Further herein, the term ‘multiview’ as used in the terms ‘multiview image’ and ‘multiview display’ is defined as a plurality of views representing different perspectives or including angular disparity between views of the view plurality. In addition, herein the term ‘multiview’ explicitly includes more than two different views (i.e., a minimum of three views and generally more than three views), by definition herein. As such, ‘multiview display’ as employed herein is explicitly distinguished from a stereoscopic display that includes only two different views to represent a scene or an image. Note however, while multiview images and multiview displays include more than two views, by definition herein, multiview images may be viewed (e.g., on a multiview display) as a stereoscopic pair of images by selecting only two of the multiview views to view at a time (e.g., one view per eye).

[0024] A ‘multiview pixel’ is defined herein as a set of sub-pixels representing ‘view’ pixels in each of a similar plurality of different views of a multiview display. In particular, a multiview pixel may have an individual sub-pixel corresponding to or representing a view pixel in each of the different views of the multiview image. Moreover, the sub-pixels of the multiview pixel are so-called ‘directional pixels’ in that each of the sub-pixels is associated with a predetermined view direction of a corresponding one of the different views, by definition herein. Further, according to various examples and embodiments, the different view pixels represented by the sub-pixels of a multiview pixel may have equivalent or at least substantially similar locations or coordinates in each of the different views. For example, a first multiview pixel may have individual sub-pixels corresponding to view pixels located at {x1, y1} in each of the different views of a multiview image, while a second multiview pixel may have individual sub-pixels corresponding to view pixels located at {x2, y2} in each of the different views, and so on.
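The mapping from a multiview pixel coordinate and a view number to a physical sub-pixel can be sketched under an assumed layout. The horizontal interleaving of views within each multiview pixel is an assumption for illustration; actual sub-pixel arrangements vary by display design.

```python
def subpixel_index(mv_x, mv_y, view, num_views, mv_width):
    """Flat index of the sub-pixel serving view number `view` of the
    multiview pixel at (`mv_x`, `mv_y`), for a display `mv_width`
    multiview pixels wide.

    Assumes the sub-pixels of each multiview pixel are interleaved
    horizontally in view order; this layout is illustrative only.
    """
    row_stride = mv_width * num_views
    return mv_y * row_stride + mv_x * num_views + view
```

Under this layout, adjacent sub-pixels on a row belong to successive views of the same multiview pixel, matching the definition of directional sub-pixels above.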

[0025] Herein, a ‘light guide’ is defined as a structure that guides light within the structure using total internal reflection. In particular, the light guide may include a core that is substantially transparent at an operational wavelength of the light guide. In various examples, the term ‘light guide’ generally refers to a dielectric optical waveguide that employs total internal reflection to guide light at an interface between a dielectric material of the light guide and a material or medium that surrounds the light guide. By definition, a condition for total internal reflection is that a refractive index of the light guide is greater than a refractive index of a surrounding medium adjacent to a surface of the light guide material. In some embodiments, the light guide may include a coating in addition to or instead of the aforementioned refractive index difference to further facilitate the total internal reflection. The coating may be a reflective coating, for example. The light guide may be any of several light guides including, but not limited to, one or both of a plate or slab guide and a strip guide.

[0026] Further herein, the term ‘plate’ when applied to a light guide as in a ‘plate light guide’ is defined as a piecewise or differentially planar layer or sheet, which is sometimes referred to as a ‘slab’ guide. In particular, a plate light guide is defined as a light guide configured to guide light in two substantially orthogonal directions bounded by a top surface and a bottom surface (i.e., opposite surfaces) of the light guide. Further, by definition herein, the top and bottom surfaces are both separated from one another and may be substantially parallel to one another in at least a differential sense. That is, within any differentially small section of the plate light guide, the top and bottom surfaces are substantially parallel or co-planar.

[0027] In some embodiments, the plate light guide may be substantially flat (i.e., confined to a plane) and therefore, the plate light guide is a planar light guide. In other embodiments, the plate light guide may be curved in one or two orthogonal dimensions. For example, the plate light guide may be curved in a single dimension to form a cylindrical shaped plate light guide. However, any curvature has a radius of curvature sufficiently large to ensure that total internal reflection is maintained within the plate light guide to guide light.

[0028] Herein, a ‘light source’ is defined as a source of light (e.g., an optical emitter configured to produce and emit light). For example, the light source may comprise an optical emitter such as a light emitting diode (LED) that emits light when activated or turned on. In particular, herein the light source may be substantially any source of light or comprise substantially any optical emitter including, but not limited to, one or more of a light emitting diode (LED), a laser, an organic light emitting diode (OLED), a polymer light emitting diode, a plasma-based optical emitter, a fluorescent lamp, an incandescent lamp, and virtually any other source of light. The light produced by the light source may have a color (i.e., may include a particular wavelength of light), or may be a range of wavelengths (e.g., white light). In some embodiments, the light source may comprise a plurality of optical emitters. For example, the light source may include a set or group of optical emitters in which at least one of the optical emitters produces light having a color, or equivalently a wavelength, that differs from a color or wavelength of light produced by at least one other optical emitter of the set or group. The different colors may include primary colors (e.g., red, green, blue) for example.

[0029] Herein, a ‘multiview image’ is defined as a plurality of images (i.e., three or more images) wherein each image of the plurality represents a different view corresponding to a different view direction of the multiview image. As such, the multiview image is a collection of images (e.g., two-dimensional images) which, when displayed on a multiview display, may facilitate a perception of depth and thus appear to be an image of a 3D scene to a user, for example. A multiview image that provides pairs of views that represent different but related perspectives of a 3D scene consistent with viewing by a user is defined as a 3D image.

[0030] Embodiments consistent with the principles described herein may be implemented using a variety of devices and circuits including, but not limited to, one or more of integrated circuits (ICs), very large scale integrated (VLSI) circuits, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), digital signal processors (DSPs), graphics processing units (GPUs), and the like, firmware, software (such as a program module or a set of instructions), and a combination of two or more of the above. For example, an embodiment or elements thereof may be implemented as circuit elements within an ASIC or a VLSI circuit. Implementations that employ an ASIC or a VLSI circuit are examples of hardware-based circuit implementations.

[0031] In another example, an embodiment may be implemented as software using a computer programming language (e.g., C/C++) that is executed in an operating environment or a software-based modeling environment (e.g., MATLAB®, MathWorks, Inc., Natick, MA) that is further executed by a computer (e.g., stored in memory and executed by a processor or a graphics processor of a general-purpose computer). Note that one or more computer programs or software may constitute a computer-program mechanism, and the programming language may be compiled or interpreted, e.g., configurable or configured (which may be used interchangeably in this discussion), to be executed by a processor or a graphics processor of a computer.

[0032] In yet another example, a block, a module or an element of an apparatus, device or system (e.g., image processor, camera, etc.) described herein may be implemented using actual or physical circuitry (e.g., as an IC or an ASIC), while another block, module or element may be implemented in software or firmware. In particular, according to the definitions herein, some embodiments may be implemented using a substantially hardware-based circuit approach or device (e.g., ICs, VLSI, ASIC, FPGA, DSP, firmware, etc.), while other embodiments may also be implemented as software or firmware using a computer processor or a graphics processor to execute the software, or as a combination of software or firmware and hardware-based circuitry, for example.

[0033] Further, as used herein, the article ‘a’ is intended to have its ordinary meaning in the patent arts, namely ‘one or more’. For example, ‘a lens’ means one or more lenses and as such, ‘the lens’ means ‘the lens or lenses’ herein. Also, any reference herein to ‘top’, ‘bottom’, ‘upper’, ‘lower’, ‘up’, ‘down’, ‘front’, ‘back’, ‘first’, ‘second’, ‘left’ or ‘right’ is not intended to be a limitation herein. Herein, the term ‘about’ when applied to a value generally means within the tolerance range of the equipment used to produce the value, or may mean plus or minus 10%, or plus or minus 5%, or plus or minus 1%, unless otherwise expressly specified. Further, the term ‘substantially’ as used herein means a majority, or almost all, or all, or an amount within a range of about 51% to about 100%. Moreover, examples herein are intended to be illustrative only and are presented for discussion purposes and not by way of limitation.

[0034] According to some embodiments of the principles described herein, a head-tracking multiview display 100 is provided. The head-tracking multiview display 100 may be referred to as a head-tracked multiview display, in some embodiments. Figure 3 illustrates a cross sectional view of a head-tracking multiview display 100 in an example, according to an embodiment consistent with the principles described herein. The head-tracking multiview display 100 is configured to provide a plurality of views of a scene as a multiview image, i.e., a displayed multiview image. In particular, the plurality of views is provided in a corresponding plurality of view directions by the head-tracking multiview display 100. In Figure 3, the view directions or equivalently the views of the view plurality are depicted as arrows 102 pointing in different angular directions that extend from the head-tracking multiview display 100. For convenience, the view directions or equivalently the views of the view plurality are labeled with view-identifying numerals 104. In the specific example of Figure 3, the head-tracking multiview display 100 provides eight views; other numbers of views may also be used. The views in Figure 3 are identified with view-identifying numerals 104 as being 1 through 8; other identifiers may also be used, such as letters A through H. The view-identifying numerals 104 of Figure 3 are provided merely as a convenience, and do not imply that the views are sequential or ordered in any way.


[0035] As illustrated in Figure 3, the head-tracking multiview display 100 comprises a multibeam backlight 106. The multibeam backlight 106 is configured to provide a plurality of light beams having different principal angular directions corresponding to different view directions of the multiview image. In the configuration of Figure 3, the multibeam backlight 106 may include a light source 108, which is controlled by a controller 110. The light source 108 may direct light into an edge of a light guide 112. The light guide 112 may propagate the light within the light guide 112 as guided light. A plurality of multibeam elements 114 may direct respective portions of the guided light out of the light guide 112 to form the plurality of light beams. The configuration of the multibeam backlight 106 shown in Figure 3 is but one example of a multibeam backlight 106; other suitable configurations may also be used.

[0036] As illustrated in Figure 3, the head-tracking multiview display 100 further comprises a processor 116. In the example of Figure 3, the processor 116 is included with the controller 110. In other configurations, the processor 116 may be separate from the controller 110. The processor 116 is configured to receive a plurality of provisional views of the multiview image. The plurality of provisional views may correspond to the different view directions. The processor 116 is further configured to receive information about a tracked position of a user 118, such as including information about a direction or distance between the user 118 and a light-emitting portion of the display. In an example, the tracked position of the user 118 may include information about a view direction of the user 118. The processor 116 is further configured to shift the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions. In various embodiments, the shifted views are shifted by or according to a shift value. The shift value may vary as a function of the tracked position of the user 118. In various embodiments, the shift value may also vary over a field of view of the multiview image. Details are provided below regarding calculation of the shift value.

[0037] As illustrated in Figure 3, the head-tracking multiview display 100 further comprises a light valve array 120. The light valve array 120 is configured to modulate light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image. The light valve array 120 may be controlled by the controller 110. In various embodiments, any of a variety of different types of light valves may be employed as light valves of the light valve array 120 including, but not limited to, one or more of liquid crystal light valves, electrophoretic light valves, and light valves based on electrowetting.

[0038] As illustrated in Figure 3, the head-tracking multiview display 100 further comprises a head tracker 122. The head tracker 122 is configured to provide the tracked position of the user 118 (e.g., of a head of the user 118, or of one or both eyes of the user 118, or of another anatomical feature of the user 118). The head tracker 122 may include a camera configured to capture an image of the user 118. The head tracker 122 may further include an image processor (or general-purpose computer programmed as an image processor) configured to determine a position of the user 118 within the captured image to provide the tracked position. In some examples, the processor 116 may include the image processor of the head tracker 122, such as by performing operations with the same processing circuitry. In other examples, the processor 116 may be separate from the image processor of the head tracker 122. Other suitable head trackers may also be used, including head trackers based on lidar (e.g., using time-of-flight of reflected light over a field of view to determine distances to one or more objects in the scene, such as a user’s head or a user’s eyes) or other technologies. An output of the head tracker 122 may be used to modify an operation (e.g., modulation of light beams by the light valve array 120) of the head-tracking multiview display 100. For example, the determined position of the user 118 may be provided to one or both of a processor (such as processor 116) and a light valve driver (e.g., driver circuit, or controller 110) of the head-tracking multiview display 100 to adjust the emission pattern from the head-tracking multiview display 100 to correspond to the user’s position. Other implementations of the head tracker 122 may include any of a variety of two-dimensional (2D) and three-dimensional (3D) object tracking systems such as, but not limited to, the Kinect® object tracking system.
Kinect® is a registered trademark of Microsoft Corporation, Redmond, WA.

[0039] As mentioned above, and explained in detail below, the head-tracking multiview display 100 of Figure 3 determines a shift value that varies over a field of view of the multiview image. To demonstrate the effect of varying the shift value over a field of view of the multiview image, Figure 4 illustrates a plan view of an example of a shift-invariant head-tracking multiview display 400 in which the shift value does not vary over a field of view of the multiview image. Instead, the shift value is constant or invariant over a field of view of the multiview image, such as over a field of view of an active surface area of the shift-invariant head-tracking multiview display 400.

[0040] In the example of Figure 4, for some locations of the user 402, different regions of the multiview image may be visible with different views of the multiview image. For example, a top-left corner of the multiview image may be visible with view number 1, a central portion of the multiview image may be visible with view number 2, and a bottom-right corner of the multiview image may be visible with view number 3. Because the image perceived by the user 402 at the user location relative to the shift-invariant head-tracking multiview display 400 may include regions that are from different views of the multiview image, the perceived image may be distorted or may contain artifacts from misalignment of the views or at boundaries between the different view portions.

[0041] Varying the shift value over a field of view of the multiview image may help overcome a drawback of the shift-invariant head-tracking multiview display 400. Specifically, varying the shift value over a full field of view of the multiview image may help ensure that all or substantially all of the multiview image shows a single view (or a single combination of views) to the user. The following paragraphs describe an example of determination of the shift value in detail.

[0042] The processor 116 may determine the shift value to provide two general functions for the head-tracking multiview display 100. A first general function of the shift value is to direct a specified view toward an eye of the user 118, as the user 118 changes location with respect to the display. A second general function of the shift value is to ensure that over the entire multiview image (such as extending over a specified active surface area of the head-tracking multiview display), only a single specified view (or a specified combination of views) is visible, such that adjacent views or spurious views are not visible. In other words, calculating the shift value as detailed below may ensure that a user 118 does not see one portion of the multiview image in a first view and another portion of the multiview image in a second view, thereby avoiding the example of the shift-invariant head-tracking multiview display 400 of Figure 4.

[0043] The processor 116 may include a mathematical model that predicts which view is visible to the user 118 as a function of the user 118 location and as a function of the user 118 view direction at or over a surface area of the head-tracking multiview display. The view may be expressed as a real number. For example, for a multiview image that includes eight views, the views may be represented by an integer N, where N may have an integer value from one to eight, inclusive. The user 118 may have a center head position, expressed in spatial coordinates as (x, y, z). The head-tracking multiview display may have a pixel position, expressed in spatial coordinates as (x₀, y₀), within an active surface area of the head-tracking multiview display. The mathematical model may determine the view number N(x, y, z, x₀, y₀), which is a function of user location and pixel position. In other words, for each combination of user location (x, y, z) and pixel position (x₀, y₀), the mathematical model may determine which view N is visible.
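The document does not specify a particular mathematical model. As a minimal sketch only, the following toy model assumes views are spread uniformly in horizontal angle from each pixel toward the user; the function name `view_number` and the angular-pitch parameter are illustrative assumptions, not from the document:

```python
import math

def view_number(x, y, z, x0, y0, n_views=8, view_pitch_rad=0.02):
    """Toy model N(x, y, z, x0, y0): which view (a real number in [1, n_views])
    is visible at pixel (x0, y0) from a head position (x, y, z).
    Views are assumed spread uniformly in horizontal angle, spaced view_pitch_rad."""
    angle = math.atan2(x - x0, z)                    # horizontal angle, pixel -> head
    n = (n_views + 1) / 2.0 + angle / view_pitch_rad # center view at angle 0
    return min(max(n, 1.0), float(n_views))          # clamp to the valid view range
```

A real display would replace this with a calibrated model of its backlight and lens geometry; the sketch only provides a concrete N(x, y, z, x₀, y₀) for illustrating the shift-value equations below.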

[0044] The shift value, represented as quantity δN, is a dimensionless quantity that represents a correction or addition to the view number N. For example, a shift value δN of +1 may involve incrementing a view number (e.g., view number 3) by a value of +1 (e.g., to view number 4). The shift value δN may vary from pixel position to pixel position over an active surface area of the head-tracking multiview display (e.g., over a field of view of the multiview image). Allowing the shift value δN to vary over a field of view of the multiview image may ensure that a user 118 sees only one specified view (or one specified combination of views) over the entire multiview image.

[0045] The processor may calculate the shift value to achieve a particular effect in the perceived views. Two such effects are referred to below as a “peeling mode” and a “sliding mode,” both of which are explained in greater detail below.

[0046] In the peeling mode, when a user 118 changes location, the views may progress from view to view, so that the user 118 may view the multiview image from many different views, depending on the location of the user 118. In some embodiments, in the peeling mode, the shift value may be selected such that movement of the user 118 allows the user 118 to view views in a progression that matches a progression of the provisional views. In some embodiments, in the peeling mode, the shift value may be selected such that movement of the user 118 provides the user a sensation of parallax as the views progress.

[0047] The peeling mode may extend a view zone of the head-tracking multiview display without altering the parallax effect. In the peeling mode, the shift value δN may be calculated as a function of a static view number N₀ that is invariant with respect to user location and pixel position, a central view number N(x, y, z, 0, 0) that represents which view is visible at a center of the display from the user location, and a pixel-dependent view number N(x, y, z, x₀, y₀) that represents which view is visible at a specified pixel position from the user location, as follows:

δN = N₀ + N(x, y, z, 0, 0) − N(x, y, z, x₀, y₀) (1)
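Equation (1) can be sketched in code. The following is a minimal illustration, not from the document: `view_number` stands for any model N(x, y, z, x₀, y₀) predicting the visible view, and the function names are assumptions:

```python
def peeling_shift(N0, view_number, x, y, z, x0, y0):
    """Equation (1): dN = N0 + N(x, y, z, 0, 0) - N(x, y, z, x0, y0)."""
    return N0 + view_number(x, y, z, 0.0, 0.0) - view_number(x, y, z, x0, y0)

def shifted_view(N0, view_number, x, y, z, x0, y0):
    """The view actually shown at pixel (x0, y0) is the model view plus the shift.
    By construction this equals N0 + N(x, y, z, 0, 0) at every pixel, so the whole
    image shows a single view, and that view still changes (peels) as the user moves."""
    return view_number(x, y, z, x0, y0) + peeling_shift(N0, view_number, x, y, z, x0, y0)
```

With any model, the pixel-dependent terms cancel, which is what makes the entire image uniform while preserving parallax across user positions.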

[0048] Figure 5A illustrates a plan view of a head-tracking multiview display 500 used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user 502 is located at a first position. At the first position, the user 502 may view a first view (denoted by numeral 1) of the multiview image over the entire multiview image.

[0049] Figure 5B illustrates a plan view of the head-tracking multiview display 500 used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user 502 is located at a second position. At the second position, the user 502 may view a second view (denoted by numeral 2) of the multiview image over the entire multiview image.

[0050] Figure 5C illustrates a plan view of the head-tracking multiview display 500 used in peeling mode in an example, according to an embodiment consistent with the principles described herein, in which a user 502 is located at a third position. At the third position, the user 502 may view a third view (denoted by numeral 3) of the multiview image over the entire multiview image.

[0051] In the sliding mode, when the user 118 changes location, a single view (or a single combination of views) follows the user 118. For example, the sliding mode may act effectively as an image stabilizer, which may ensure that the user 118 sees only a specified view (or a specified combination of views) and prevents other views from being visible as the user 118 changes location. In some embodiments, in the sliding mode, the shift value may be selected such that the shifted views follow movement of the user 118 and allow the user 118 to view only a single view or a single combination of views as the user 118 moves. The sliding mode may ensure that the user 118 experiences a uniform, static view, denoted by the quantity N₀, with no parallax. In the sliding mode, the head-tracking may effectively counteract any motion of the user 118 and may provide the user 118 with a 3D experience similar to using 3D glasses in a movie theater.

[0052] Using the same quantities as defined above, the shift value δN may be calculated as follows:

δN = N₀ − N(x, y, z, x₀, y₀) (2)
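Equation (2) admits an equally short sketch; as before, `view_number` stands for any model N(x, y, z, x₀, y₀) and the names are illustrative assumptions:

```python
def sliding_shift(N0, view_number, x, y, z, x0, y0):
    """Equation (2): dN = N0 - N(x, y, z, x0, y0).
    The shifted view N + dN then equals N0 at every pixel and for every user
    position, so a single static view follows the user with no parallax."""
    return N0 - view_number(x, y, z, x0, y0)
```

The difference from the peeling mode is the absent central-view term N(x, y, z, 0, 0): without it, user motion is fully counteracted rather than mapped onto a progression of views.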

[0053] Figure 6A illustrates a plan view of a head-tracking multiview display 600 used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user 602 is located at a first position. At the first position, the user 602 may view a first view (denoted by numeral 1) of the multiview image over the entire multiview image.

[0054] Figure 6B illustrates a plan view of the head-tracking multiview display 600 used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user 602 is located at a second position. At the second position, the user 602 may view the first view (denoted by numeral 1) of the multiview image over the entire multiview image.

[0055] Figure 6C illustrates a plan view of the head-tracking multiview display 600 used in sliding mode in an example, according to an embodiment consistent with the principles described herein, in which a user 602 is located at a third position. At the third position, the user 602 may view the first view (denoted by numeral 1) of the multiview image over the entire multiview image.

[0056] In some embodiments, the shift value may be selected to provide a first three-dimensional view of the multiview image to a left eye of the user 118 and a second three-dimensional view of the multiview image to a right eye of the user 118, the first and second three-dimensional views being invariant as the user 118 moves.

[0057] In general, the shift value δN may not be an integer, but may be a sum of an integer k and a decimal q that is between zero and one, exclusive. For example, in some embodiments, the shift value may correspond to a non-integer number of views of the multiview image. As another example, in some embodiments, the shift value may correspond to a non-integer number of light valves on the light valve array. Where a purely integer shift value δN may involve shifting one view to an angular location of another view, a fractional (e.g., non-integer) shift value δN may involve combining adjacent views to form a combined view or a fractional view. There are many possible ways to combine adjacent views to form the combined view. In some embodiments, the processor may be configured to form the shifted views by combining two adjacent provisional views. In some embodiments, the processor may be configured to form the shifted views by non-linearly combining the two adjacent provisional views, i.e., combining the two adjacent provisional views in a non-linear manner.
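The decomposition of a non-integer shift value into an integer part k and a fraction q can be sketched as follows (the function name is an illustrative assumption):

```python
import math

def split_shift(dN):
    """Split a shift value dN into an integer part k and a fraction q in [0, 1),
    so that dN = k + q. k selects the base view; q weights the adjacent view."""
    k = math.floor(dN)
    q = dN - k
    return k, q
```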

[0058] In some embodiments, the processor is configured to form the shifted views by converting pixel values of the two adjacent provisional views from gamma space to linear space to form linear space pixel values, summing the linear space pixel values linearly to form a linear sum value, and converting the linear sum value from linear space to gamma space. For example, a non-linear mixing function f(q) may be defined to have a value of 0 for q = 0, a value of 0.5 for q = 0.5, and a value of 1 for q = 1, a dimensionless exponent γ may convert red-green-blue (RGB) pixel values from gamma space to linear space, a pixel value of a view k may be denoted by quantity Vk, a pixel value of an adjacent view k+1 may be denoted by quantity Vk+1, and the pixel value of a fractional view k+q may be denoted by Vk+q, and calculated as follows:

Vk+q = [(1 − f(q))·(Vk)^γ + f(q)·(Vk+1)^γ]^(1/γ) (3)

In some examples in which different color sub-pixels correspond to different views, the shift value δN may be calculated separately for each color.
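The gamma-aware blend described above can be sketched as follows. This is a minimal illustration under stated assumptions: pixel values are normalized to [0, 1], f(q) = q is one simple mixing function satisfying the stated constraints, and the γ value and function name are illustrative:

```python
def blend_views(v_k, v_k1, q, gamma=2.2, f=lambda q: q):
    """Pixel value of fractional view k+q from adjacent views k and k+1:
    convert gamma -> linear, mix linearly weighted by f(q), convert back."""
    linear_k = v_k ** gamma                               # gamma space -> linear space
    linear_k1 = v_k1 ** gamma
    mixed = (1.0 - f(q)) * linear_k + f(q) * linear_k1    # linear sum
    return mixed ** (1.0 / gamma)                         # linear space -> gamma space
```

Note that blending in linear space differs from a naive gamma-space average: mixing black (0.0) and white (1.0) at q = 0.5 yields a gamma-space value of about 0.73 rather than 0.5, which better matches the perceived brightness of the physically superposed light beams.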

[0059] In accordance with some embodiments of the principles described herein, a head-tracking multiview display system is provided. The head-tracking multiview display system is configured to provide or to ‘display’ a 3D or multiview image representing a scene. In particular, the multiview image is provided as a plurality of different ‘views’ associated with the multiview image. The different views may provide a ‘glasses free’ (e.g., autostereoscopic) representation of information in the multiview image being displayed, for example. Moreover, different sets of views may be provided for different locations or positions (e.g., head locations) of a user of the head-tracking multiview display system, according to various embodiments.

[0060] Figure 7 illustrates a block diagram of a head-tracking multiview display system 200 in an example, according to an embodiment consistent with the principles described herein. The head-tracking multiview display system 200 is configured to display a multiview image according to different views in different view directions. In particular, light beams emitted by the head-tracking multiview display system 200 are used to display a multiview image and may correspond to pixels of the different views (i.e., view pixels). The different views or equivalently different view directions are illustrated as arrows 202 emanating from the head-tracking multiview display system 200 in Figure 7. As provided below, the arrows 202 also represent the light beams emitted by the head-tracking multiview display system 200.

[0061] The head-tracking multiview display system 200 includes a head-tracking multiview display 210. The head-tracking multiview display 210 includes a multibeam backlight configured to provide a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image. The head-tracking multiview display 210 includes a processor configured to receive a plurality of provisional views of the multiview image, the plurality of provisional views corresponding to the different view directions, receive information about a tracked position of a user 230, and shift the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions, the shifted views being shifted by a shift value that varies as a function of the tracked position of the user 230 and varies over a field of view of the multiview image. The head-tracking multiview display 210 includes a light valve array configured to modulate light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image.

[0062] The head-tracking multiview display system 200 includes a head tracker 220 configured to provide the tracked position of the user 230 to the processor of the head-tracking multiview display 210. The head tracker 220 may include a camera having a field of view 222 that includes the user 230. The camera is configured to capture an image of the user 230. The head tracker 220 may further include an image processor configured to determine a position of the user 230 within the captured image to provide the tracked position.

[0063] In accordance with other embodiments of the principles described herein, a method of operation for a head-tracking multiview display is provided. Figure 8 illustrates a flow chart of a method 300 of operation for a head-tracking multiview display in an example, according to an embodiment consistent with the principles described herein.

[0064] As illustrated in Figure 8, the method 300 of operation for a head-tracking multiview display comprises providing 310, with a multibeam backlight, a plurality of light beams having different principal angular directions corresponding to different view directions of a multiview image. The multibeam backlight may be similar in structure and function to the multibeam backlight 106.

[0065] As illustrated in Figure 8, the method 300 of operation for a head-tracking multiview display further comprises receiving 320, with a processor, a plurality of provisional views of the multiview image. The plurality of provisional views corresponds to the different view directions. The provisional views may represent original, unmodified, or unshifted portions of a 3D image to be displayed to a user using the multiview display. The processor may be similar in structure and function to the processor 116.

[0066] As illustrated in Figure 8, the method 300 of operation for a head-tracking multiview display further comprises shifting 330, with the processor, the provisional views with respect to the view directions to form a plurality of shifted views that correspond to the different view directions. The shifted views are shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image.

[0067] As illustrated in Figure 8, the method 300 of operation for a head-tracking multiview display further comprises modulating 340, with a light valve array, light beams of the plurality of light beams to provide the plurality of shifted views of the multiview image to the view directions as the multiview image. The light valve array may be similar in structure and function to the light valve array 120.

[0068] In some embodiments, the method 300 of operation for a head-tracking multiview display further comprises combining, with the processor, two adjacent provisional views to form the shifted views. In some embodiments, the method 300 of operation for a head-tracking multiview display further comprises combining, with the processor, two adjacent provisional views non-linearly to form the shifted views. In some embodiments, the processor forms the shifted views by converting pixel values of the two adjacent provisional views from gamma space to linear space to form linear space pixel values, summing the linear space pixel values linearly to form a linear sum value, and converting the linear sum value from linear space to gamma space. In some embodiments, the shift value corresponds to a non-integer number of views of the multiview image. In some embodiments, such as in the peeling mode described above, the processor selects the shift value such that movement of the user allows the user to view views in a progression that matches a progression of the provisional views. In some embodiments, such as in the peeling mode described above, the processor selects the shift value such that movement of the user provides the user a sensation of parallax as the views progress.

[0069] In some embodiments, such as in the sliding mode described above, the processor selects the shift value such that the shifted views follow movement of the user and allow the user to view only a single view or a single combination of views as the user moves. In some embodiments, such as in the sliding mode described above, the processor selects the shift value to provide to the user a first three-dimensional view of the multiview image to a left eye of the user and a second three-dimensional view of the multiview image to a right eye of the user. The first and second three-dimensional views may be invariant as the user moves.

[0070] Thus, there have been described examples and embodiments of a head-tracking multiview display, a head-tracking multiview display system, and a method of operation for a head-tracking multiview display in which provisional views are shifted with respect to the view directions to form a plurality of shifted views that correspond to the different view directions, the shifted views being shifted by a shift value that varies as a function of the tracked position of the user and varies over a field of view of the multiview image. The above-described examples are merely illustrative of some of the many specific examples that represent the principles described herein. Clearly, those skilled in the art may readily devise numerous other arrangements without departing from the scope as defined by the following claims.