

Title:
DISPLAY SYSTEM, MOBILE DEVICE AND METHOD FOR PROVIDING THREE-DIMENSIONAL VIEWS
Document Type and Number:
WIPO Patent Application WO/2019/086635
Kind Code:
A1
Abstract:
Disclosed is a display system (10) for providing three-dimensional views. The display system comprises a screen (100), at least one control unit (300) and at least one eye-tracking system (200) for detecting, in three dimensions, at least one eye position of a viewer of the screen. The control unit (300) is configured to operate the plurality of pixels (120) based on a detected three-dimensional eye position, so as to autostereoscopically provide a respective content as a three-dimensional view. Further disclosed are a mobile device (1) comprising a display system (10) and a method for autostereoscopically providing three-dimensional views on a screen (100).

Inventors:
HARDER HANNES (DE)
Application Number:
PCT/EP2018/080061
Publication Date:
May 09, 2019
Filing Date:
November 02, 2018
Assignee:
UNITED SCREENS GMBH (DE)
International Classes:
H04N13/305; H04N13/356
Domestic Patent References:
WO2017114755A12017-07-06
Foreign References:
US20170155893A12017-06-01
US20150054739A12015-02-26
EP2421275A22012-02-22
Other References:
None
Attorney, Agent or Firm:
MARSCHALL, Stefan (DE)
Claims:

1. Display system (10) comprising

- an autostereoscopic screen (100),

- at least one control unit (300) for controlling a plurality of pixels (120) of the screen, and

- at least one eye-tracking system (200) for detecting, in three dimensions, a respective eye position of a viewer of the screen,

wherein the control unit (300) is configured to operate the plurality of pixels (120) based on at least one detected three-dimensional eye position, so as to autostereoscopically provide a respective content as a three-dimensional view.

2. Display system according to claim 1, wherein the eye-tracking system (200) is further adapted to detect an eyes convergence point of the viewer's eyes, and wherein, to autostereoscopically provide the three-dimensional view, the control unit is configured to operate the plurality of pixels also based on the detected eyes convergence point.

3. Display system according to one of claims 1 or 2, wherein the screen (100) comprises at least one lenticular lens system (110), and wherein operating the plurality of pixels based on the detected eye position and/or eyes convergence point comprises adjusting the operation of the plurality of pixels (120) to an effect the at least one lenticular lens system (110) has at the detected three-dimensional eye position(s) and/or with the detected eyes convergence point.

4. Display system according to one of the preceding claims, wherein the control unit (300) is configured to update operation of the plurality of pixels (120) responsive to a change of a detected three-dimensional eye position and/or of a detected eyes convergence point, while proceeding with autostereoscopically providing the respective content as the three-dimensional view.

5. Display system according to one of the preceding claims, wherein the display system (10) is adapted to selectively provide two-dimensional views and/or three-dimensional views.

6. Display system according to claim 5, wherein providing a respective one of a two-dimensional view and a three-dimensional view is effectable automatically.

7. Display system according to one of the preceding claims, further comprising at least one infrared light source (202) illuminating at least a portion of the viewer's face.

8. Display system according to one of the preceding claims, further comprising a stereo camera module (500) for recording image data, wherein the control unit (300) is configured to control the plurality of pixels so as to autostereoscopically provide a three-dimensional live view of the respective recording.

9. Display system according to one of the preceding claims, further comprising connection means (701) facilitating dual data input of external content to be provided, three-dimensionally or two-dimensionally, on the screen.

10. Mobile device (1) including a display system (10) according to one of the preceding claims.

11. Method comprising

- detecting, by means of an eye-tracking system (200) and in three dimensions, at least one eye position of a viewer of an autostereoscopic screen, and

- operating a plurality of pixels (120) of the autostereoscopic screen based on a detected three-dimensional eye position so as to autostereoscopically provide a content as the three-dimensional view.

12. Method according to claim 11, further comprising detecting an eyes convergence point of the viewer's eyes, wherein the operating of the plurality of pixels is further based on the detected eyes convergence point.

13. Method according to one of claims 11 or 12, wherein operating the plurality of pixels (120) of the screen (100) comprises adjusting the operation of the plurality of pixels to an effect at least one lenticular lens system (110) comprised by the screen has at the detected three-dimensional eye position and/or with the detected eyes convergence point.

14. Method according to one of claims 11 to 13, comprising detecting a changed three-dimensional eye position and/or eyes convergence point, and updating operation of the plurality of pixels (120) while proceeding with autostereoscopically providing the content as the three-dimensional view.

15. Method according to one of claims 11 to 14, further comprising switching between providing two-dimensional views and providing three-dimensional views.

16. Method according to one of claims 11 to 15, wherein the eye-tracking system comprises at least one camera (211) and/or at least one infrared light source (212).

17. Method according to one of claims 11 to 16, further comprising recording image data by means of a stereo camera module (500) and autostereoscopically providing a three-dimensional live view of the respective recording.

18. Method according to one of claims 11 to 17, further comprising receiving dual data input of external content from at least one external data source (701), and wherein the content autostereoscopically provided as a three-dimensional view comprises the external content.

Description:
Display system, mobile device and method for providing three-dimensional views


The present invention relates to a display system adapted to provide three-dimensional views, to a mobile device comprising such a display system and to a method for providing views on a screen.

To facilitate depth perception in displayed images, various techniques have been developed which are usually based on the principles of binocular vision. Different types of stereoscopic images are known, for which respective viewing devices such as active shutter 3D systems or passive polarisation or interference filter systems may be required to experience the 3D effect.

Additionally, autostereoscopic 3D displays have been devised which do without particular glasses or other viewing devices. Such displays may rely on parallax barriers, microlenses, lenticular lenses or the like, which may be integrated in a respective device, and they may involve eye-tracking for appropriately aligning the three-dimensional effect with a respective viewing angle. For example, such autostereoscopic 3D displays are included in certain game consoles and some digital cameras.

It is an object of the present invention to provide an improved technique for presenting visual information, in particular three-dimensional views.

The problem is solved by a display system according to claim 1, a mobile device according to claim 10 and a method according to claim 11. Preferred embodiments are disclosed in the dependent claims, the description and the figures.

A display system according to the present invention comprises an autostereoscopic screen, at least one control unit for controlling a plurality of pixels of the autostereoscopic screen and an eye-tracking system for detecting, in three dimensions, a (current) eye position of a viewer (preferably of each of the viewer's eyes). The control unit is configured to operate the plurality of pixels based on a detected three-dimensional eye position (preferably of each of the eyes of a user). By thus operating the plurality of pixels, a respective content is autostereoscopically displayed on the autostereoscopic screen as a three-dimensional view.

As is to be understood, in this document, the term "detecting a position of a point in three dimensions" refers to the act of identifying the 3-D location of the respective position relative to the autostereoscopic screen.

In the following, to improve intelligibility, the adjective "autostereoscopic" referring to the screen is sometimes omitted.

The respective content may be or have been at least partially recorded or generated by the display device, and/or it may be at least partially stored in or supplied to the display device.

A method according to the present invention comprises detecting, by means of an eye-tracking system and in three dimensions, an eye position (of one or of both eyes, respectively) of a viewer of an autostereoscopic screen. The method further comprises operating a plurality of pixels of the screen based on a detected three-dimensional eye position (preferably of each of the eyes of a user), so as to autostereoscopically provide a content as the three-dimensional view (which thus can be perceived, by the viewer, without particular headgear/glasses).

By operating the plurality of pixels based on a detected 3-D position of the viewer's eye(s), the present invention facilitates optimally adapting the autostereoscopic provision of the content to the viewer and to the spatial conditions (in particular the position, the viewing angle and the distance) under which he is viewing the screen. In particular, a sweet spot (i.e., a viewer's position devised for optimal perception of the three-dimensional view) is made to be not system-inherent, but adaptable to the respective spatial conditions under which the viewer is watching the screen. Indeed, the autostereoscopic provision of the three-dimensional view is usually based on subviews (channels) generated by means of a pattern with regions (such as stripes) each containing information designated for a respective one of the viewer's eyes, whereas the respective other eye is prevented from receiving the information. The zones of the screen actually reached by a respective eye, however, depend on the eye's lateral position relative to the screen and on its distance therefrom, thus on the three-dimensional position of the eye. According to an aspect of the present invention, said three-dimensional position is taken into account for the control of the pixels. In particular, the control unit is preferably configured to determine which pixels are seen by each eye in the respectively detected three-dimensional eye position(s), and the pixels may be controlled such that said pattern is arranged so as to provide for the respective eye reaching precisely the region containing the information intended for this eye, thus preventing crosstalk. That is, the provided view can be optimised with regard to subview separation (left and right view) and the respectively featured three-dimensional effect, and indistinctness and distortion of the provided view can be prevented.

In particular, by "tracking" the z-coordinate (along the axis orthogonal to the screen plane) of each eye, it is possible to accurately control the plurality of pixels so as to precisely adjust the viewing area of each eye. A change in the z-position of an eye leads to an x, y or xy offset of some of the visible pixels for that eye (more precisely: of all pixels that are seen at an angle greater than 0° from the screen normal). This change of visible pixels caused by an eye's change of its z-position is preferably taken into account during the sub-pixel adjustment process (controlling the plurality of pixels) for each eye's pixels.
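Purely as an illustration of this geometry (and not part of the application), the following Python sketch computes, under a simplified straight-ray model with an assumed optical gap between lens surface and pixel plane, which panel point an eye sees through a given point on the lens surface. It shows that off-axis points shift laterally when only the z-coordinate of the eye changes, while a point viewed head-on does not; all names and values are assumptions made for this sketch.

```python
def visible_panel_point(eye_xyz, lens_point_xy, optical_gap_mm):
    """Return the (x, y) point on the pixel panel that the eye sees through a
    given point on the lens surface.

    eye_xyz        : 3-D eye position in mm, z measured from the lens plane.
    lens_point_xy  : point on the lens surface the line of sight passes through.
    optical_gap_mm : optical distance between the lens surface and the pixel plane.

    A straight-ray (no refraction) model is used purely for illustration.
    """
    ex, ey, ez = eye_xyz
    lx, ly = lens_point_xy
    scale = optical_gap_mm / ez        # similar triangles: continue the ray behind the lens
    return (lx + (lx - ex) * scale, ly + (ly - ey) * scale)

# The same off-axis lens point viewed from 400 mm and from 300 mm distance:
x_far, _ = visible_panel_point((60.0, 0.0, 400.0), (20.0, 0.0), 1.5)
x_near, _ = visible_panel_point((60.0, 0.0, 300.0), (20.0, 0.0), 1.5)
print("x-offset caused by the change in z:", round(x_near - x_far, 4), "mm")
# A point viewed exactly head-on (0° from the screen normal) shows no such offset:
print(visible_panel_point((20.0, 0.0, 300.0), (20.0, 0.0), 1.5))
```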

The provision of the three-dimensional view is preferably based on the simultaneous presentation of exactly two images (i.e., as a so-called "single-view display", as opposed to a multiview technique, where more than two, usually at least five and/or up to nine, subimages are presented simultaneously): dispensing with multiview technique on the one hand and the three-dimensional eye-tracking on the other hand facilitate a broad compatibility with existing content (which thus can be rendered by the display device), an improved performance in particular of interactive content (such as video games) and reduced file sizes required for the respective content to be rendered, while nevertheless relieving the viewer from having to take a fixed viewing position.

According to preferred embodiments of an inventive display system, the eye-tracking system is further adapted to detect a (current) convergence point of the eyes of the viewer (further referred to as "eyes convergence point"). The control unit may then be configured to operate the plurality of pixels based also on the eyes convergence point.

Analogously, the method according to the present invention may preferably comprise detecting a (current) eyes convergence point. The operating of the plurality of pixels may then also be based on the detected eyes convergence point (further to the at least one eye position).

By thus operating the plurality of pixels based on a detected convergence point of the viewer's eyes, the provided view can be adapted to the point the viewer is currently focussing on and, therewith, to his current gaze. In particular, the eyes convergence point may be interpreted as relating to a depth (of a respectively represented three-dimensional object) being focussed by the viewer, and the control unit may be configured (by means of respective software) to operate the plurality of pixels so as to provide the view with an image focus coinciding with the determined depth. Thereby, the display's representation plane is made to conform to that depth, such that a discrepancy between convergence and accommodation of the viewer's eyes can be obviated and easy, intuitive and relaxed viewing minimising neurological effort is facilitated.
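The application does not specify how the eyes convergence point is computed from the tracked data; one common approach, sketched below with invented names and a simple gaze-ray model, is to take the point of closest approach of the two eyes' gaze rays.

```python
import numpy as np

def convergence_point(p_left, d_left, p_right, d_right):
    """Estimate the eyes convergence point as the midpoint of the shortest
    segment between the two gaze rays (origin p, direction d, per eye)."""
    d_l = d_left / np.linalg.norm(d_left)
    d_r = d_right / np.linalg.norm(d_right)
    w0 = p_left - p_right
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b                 # ~0 if the gaze rays are parallel
    if abs(denom) < 1e-9:
        return None                       # eyes effectively looking at infinity
    s = (b * e - c * d) / denom           # parameter along the left gaze ray
    t = (a * e - b * d) / denom           # parameter along the right gaze ray
    return (p_left + s * d_l + p_right + t * d_r) / 2.0

# Eyes ~60 mm apart, both gazing at a point roughly on the screen plane (z = 0):
print(convergence_point(np.array([-30.0, 0.0, 400.0]), np.array([30.0, 0.0, -400.0]),
                        np.array([ 30.0, 0.0, 400.0]), np.array([-30.0, 0.0, -400.0])))
```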

In advantageous embodiments of a display system according to the present invention, the control unit is configured to update operation of the plurality of pixels in accordance with at least one changed (current) eye position and/or eyes convergence point. That is, in such a case, the control unit can modify the operation of the plurality of pixels (which may include modifying the underlying pattern as mentioned above) responsive to a change of a detected eye position and/or of a detected eyes convergence point, while proceeding with autostereoscopically providing the respective content as a three-dimensional view. In particular, the eye-tracking camera is preferably configured to continuously detect the respective eye position(s) and/or eyes convergence point, and the control unit is preferably configured to dynamically update the operation of the plurality of pixels accordingly.

Analogously, in advantageous embodiments, a method according to the present invention comprises detecting a changed eye position and/or eyes convergence point, and updating the operation of the plurality of pixels (in accordance with the changed eye position and/or eyes convergence point) while proceeding with autostereoscopically providing the content as the three-dimensional view. In particular, the method may comprise continuously detecting the eye position(s) and/or eyes convergence point and dynamically updating the operation of the plurality of pixels.

Thus, the autostereoscopic provision of the three-dimensional view can be dynamically (preferably in real time) adapted to the varying eye position(s) and/or eyes convergence point, such that improper variations in the viewer's experience of the three-dimensional view resulting from movements of the viewer can be obviated even without employing multiview technique.
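As a rough, hypothetical sketch of such a continuous update cycle (the tracker and renderer interfaces below are placeholders, not defined in the application), a control loop might poll the eye-tracking system and re-interleave the sub-pixel pattern only when the detected eye state has changed noticeably:

```python
import time
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EyeState:
    left: Tuple[float, float, float]               # (x, y, z) in mm, relative to the screen
    right: Tuple[float, float, float]
    convergence: Optional[Tuple[float, float, float]] = None

def _moved(a: EyeState, b: EyeState, threshold_mm: float) -> bool:
    """True if either eye has moved by more than threshold_mm."""
    def dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q)) ** 0.5
    return dist(a.left, b.left) > threshold_mm or dist(a.right, b.right) > threshold_mm

def run_update_loop(tracker, renderer, threshold_mm=1.0, period_s=1 / 60):
    """Continuously adapt the pixel pattern to the tracked eye state.

    tracker.read()         -> EyeState  (hypothetical interface)
    renderer.update(state) re-interleaves the sub-pixel pattern for that state
    while the content itself keeps being displayed.
    """
    last = None
    while True:
        state = tracker.read()
        if last is None or _moved(last, state, threshold_mm):
            renderer.update(state)     # adapt operation of the plurality of pixels
            last = state
        time.sleep(period_s)           # pace the loop roughly to the display frame rate
```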

Various display techniques may be employed by the autostereoscopic screen. As particular examples, the screen may comprise a liquid crystal display (LCD), a light emitting diode (LED) display, a (possibly transparent) organic light emitting diode (OLED) display and/or a plasma display. The pixels of the plurality of pixels may comprise various subpixels, and if so, the screen may employ subpixel rendering. In particular, operating the plurality of pixels may comprise individually controlling at least a portion of the respective subpixels. For providing three-dimensional views, the screen may implement various principles. For instance, it may employ techniques such as holographic optical elements (HOE), parallax barriers, compressive light fields and/or lens arrays. According to advantageous embodiments of the present invention, the screen comprises at least one lenticular lens system, whose effect (at the different points on the screen) on a viewer's eye(s) depends on the respective eye position and/or eyes convergence point. Said operating of the plurality of pixels based on the detected eye position and/or eyes convergence point may then comprise adjusting the operation of the plurality of pixels to an effect the at least one lenticular lens system has at the detected eye position(s) and/or with the detected eyes convergence point. Thereby, the effect can be specifically optimised for the respective current eye position and/or eyes convergence point, which facilitates providing a particularly favourable view. The lenticular lens system may preferably be attached to a panel comprising the plurality of pixels.
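What adjusting the operation of the plurality of pixels to the lens effect could look like in code is illustrated, in a strongly simplified one-dimensional form, by the sketch below: each sub-pixel column is assigned to the left or right channel depending on which detected eye the ray through the centre of its covering lenslet passes closest to. The lens pitch, optical gap and pinhole-lenslet model are assumptions made only for this illustration, not values from the application.

```python
def assign_columns(eye_left, eye_right, n_columns, column_pitch_mm,
                   lens_pitch_mm, gap_mm):
    """Assign each sub-pixel column to the 'L' or 'R' channel.

    Each lenslet is modelled as a pinhole at height gap_mm above the pixel
    panel; a column is routed to whichever detected eye the ray leaving its
    lenslet passes closest to. Eyes are given as (x, z) in mm.
    """
    labels = []
    for col in range(n_columns):
        x_col = col * column_pitch_mm
        # centre of the lenslet covering this column
        x_lens = round(x_col / lens_pitch_mm) * lens_pitch_mm
        dx = x_lens - x_col                      # lateral component of the exit ray

        def arrival_x(eye_z):
            # lateral position the ray reaches at the eye's distance from the lens plane
            return x_lens + dx * (eye_z / gap_mm)

        d_left = abs(arrival_x(eye_left[1]) - eye_left[0])
        d_right = abs(arrival_x(eye_right[1]) - eye_right[0])
        labels.append('L' if d_left <= d_right else 'R')
    return labels

# Toy example: eyes ~62 mm apart at 400 mm viewing distance, 24 sub-pixel columns.
print("".join(assign_columns((-31.0, 400.0), (31.0, 400.0),
                             n_columns=24, column_pitch_mm=0.05,
                             lens_pitch_mm=0.15, gap_mm=1.5)))
```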

According to preferred embodiments of the present invention, the display system is adapted to selectively provide each of two-dimensional views and, autostereoscopically, three-dimensional views, successively or simultaneously (in different portions of the screen). Indeed, in certain applications such as text-based presentations, a two-dimensional view may be preferable to a 3-D view, since omitting autostereoscopy allows a higher resolution and thus improved sharpness.

For instance, in embodiments where the screen comprises a lenticular lens system, said lenticular lens system may comprise at least one portion that is selectively activatable (such that a 3-D view can be provided) and deactivatable (so that only 2-D views can be provided); in the following, such a lenticular lens system is called an "at least partially de-/activatable lenticular lens system". In embodiments with several such portions, two or more of them can preferably be (de-)activated independently of each other. In embodiments comprising a lenticular lens system, this system may comprise an optoelectronic lens array, wherein respective lens properties may be adjustable by applying a respective voltage. In particular, by modifying a respectively applied voltage, a refractive property of at least one lens in a respective portion of the lens array may be selectively switched on (thus activating at least a portion of the lenticular lens system, so as to effect a 3-D view), diminished or increased (so as to provide a moderate 3-D effect of the view), or switched off (thus deactivating at least a portion of the lenticular lens system so as to effect a 2-D view).

Specifically, the optoelectronic lenses of such an array may be based on liquid crystals whose orientation determines the refractive index of the respective lens and depends on a respectively applied voltage: a variation of the voltage may then cause a change of the crystal orientation and, therewith, of the refractive properties of the respective lens.
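Purely for illustration, the voltage-controlled switching described above could be wrapped in a small driver abstraction such as the following; the set_voltage interface and the drive-voltage value are invented for this sketch, and the voltage convention follows Figures 1a/1b (no voltage: refracting 3-D mode; voltage applied: unrefracted 2-D mode).

```python
class SwitchableLensArray:
    """Toy driver for an optoelectronic (liquid-crystal) lenticular lens array.

    Convention assumed here, following Figures 1a/1b: with no voltage applied
    the lenslets refract (3-D mode); applying the drive voltage re-orients the
    liquid crystals so that light passes unrefracted (2-D mode).
    """

    def __init__(self, voltage_source, drive_voltage=5.0):
        self._source = voltage_source      # hypothetical hardware interface
        self._drive_voltage = drive_voltage
        self.mode = "3D"

    def enable_3d(self, portion="all"):
        # Remove the drive voltage so the addressed lens portion becomes refractive.
        self._source.set_voltage(portion, 0.0)
        self.mode = "3D"

    def enable_2d(self, portion="all"):
        # Apply the drive voltage so light passes the addressed lenses unrefracted.
        self._source.set_voltage(portion, self._drive_voltage)
        self.mode = "2D"

    def set_3d_strength(self, fraction, portion="all"):
        # Intermediate voltages give a moderated 3-D effect, as described above.
        fraction = min(max(fraction, 0.0), 1.0)
        self._source.set_voltage(portion, (1.0 - fraction) * self._drive_voltage)
        self.mode = "3D" if fraction > 0.0 else "2D"
```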

As a consequence of the switchability, the display device according to such embodiments has a wide scope of application. For example, it may be selectively used either to provide a three-dimensional view, e.g. when running a computer game or playing a 3-D video, or to present a text editor or an electronic book in a two-dimensional representation.

Analogously, an inventive method may further comprise switching between providing a two-dimensional view and autostereoscopically providing the three-dimensional view (from 2-D to 3-D and/or vice versa).

In particular, the method according to the present invention may comprise activating and/or deactivating a de-/activatable lenticular lens system which may be comprised by the screen.

The activating may be performed before said operating the plurality of pixels so as to autostereoscopically provide a content as the three-dimensional view, and/or said deactivating may be performed after said operating the plurality of pixels.

A particular embodiment of a method according to the present invention further comprises providing a two-dimensional view on the screen (e.g., in a situation in which the lenticular lens system is deactivated) before and/or after autostereoscopically providing the three-dimensional view, and/or simultaneously with the autostereoscopic provision of the 3-D view in another area of the screen.

According to preferred embodiments of the present invention, the providing of a respective one of a two-dimensional view and a three-dimensional view is effectable/effected automatically. For instance, the display system (in particular the control unit thereof) may be configured to automatically select one of providing the two-dimensional or the three-dimensional view depending on a respective content to be provided and/or on a respective application program (such as a text editor or a media player) invoked to provide the content. Additionally or alternatively, the display system (e.g., the control unit thereof) may be adapted to select the respective view based on a respective user input.
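One conceivable way to express such an automatic selection is a simple policy function mapping the invoked application and/or the content type to a view mode, with a user override taking precedence; the categories and names in this sketch are illustrative assumptions only.

```python
def select_view_mode(app_name, content_type, user_override=None):
    """Pick '2D' or '3D' for a given application and content type.

    user_override, if given, always wins (e.g. set via the touch screen).
    The mapping itself is an illustrative guess, not prescribed by the text.
    """
    if user_override in ("2D", "3D"):
        return user_override

    prefers_3d_apps = {"media_player", "game", "cad", "stereo_camera_live_view"}
    prefers_2d_content = {"text", "ebook", "spreadsheet"}

    if content_type in prefers_2d_content:
        return "2D"            # higher resolution / sharpness for text-based views
    if app_name in prefers_3d_apps or content_type in {"stereo_video", "3d_model"}:
        return "3D"
    return "2D"                # conservative default

print(select_view_mode("text_editor", "text"))           # -> 2D
print(select_view_mode("media_player", "stereo_video"))  # -> 3D
```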

The display system may comprise input means, such as a mouse and/or a keypad. According to preferred embodiments, the display system comprises a touch screen, which may be included in the screen. The display system may be adapted to receive user input by means of the input means, and to adjust the respective view in accordance with the user input. Analogously, a method according to an embodiment of the invention comprises receiving user input, and adjusting the respectively provided view based on the user input.

In particular, by means of the user input, the respective content may be modified. Consequently, the user can interact with the display system, such as by playing a video game or performing computer-aided design, wherein the result of the interaction may be autostereoscopically provided as an adjusted three-dimensional view.

The eye-tracking system may preferably comprise one or more (preferably two or more) cameras and an associated eye-tracking software module configured to evaluate image data recorded by the camera(s), so as to extract therefrom the three-dimensional eye position(s) and possibly (i.e., in respective embodiments) the eyes convergence point. According to advantageous embodiments, the eye-tracking system comprises at least one infrared light source (in particular an infrared LED) adapted to illuminate at least a portion of the viewer's face with infrared light, which is thus invisible to the user. In these embodiments, at least one camera of the eye-tracking system is preferably sensitive to infrared light. Such infrared illumination may improve the detection of the eye position(s) and/or - in respective embodiments - of the eyes convergence point, in particular under poor lighting conditions.
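The application leaves open how the eye-tracking software module obtains the depth coordinate; a textbook possibility with a calibrated, rectified stereo camera pair is disparity-based triangulation, sketched below with placeholder calibration values.

```python
def triangulate_eye(px_left, px_right, focal_px, baseline_mm,
                    principal_point=(640.0, 360.0)):
    """Recover a 3-D eye position from its pixel coordinates in a rectified
    stereo image pair (pinhole model, identical intrinsics for both cameras).

    px_left / px_right : (u, v) pixel coordinates of the same eye in each image
    focal_px           : focal length in pixels
    baseline_mm        : distance between the two camera centres in mm
    """
    (ul, vl), (ur, _) = px_left, px_right
    cx, cy = principal_point
    disparity = ul - ur                      # horizontal shift between the two images
    if disparity <= 0:
        raise ValueError("non-positive disparity: cannot triangulate")
    z = focal_px * baseline_mm / disparity   # depth along the camera axis
    x = (ul - cx) * z / focal_px
    y = (vl - cy) * z / focal_px
    return (x, y, z)

# Example with made-up calibration values: 1000 px focal length, 60 mm baseline.
print(triangulate_eye((700.0, 380.0), (550.0, 380.0), focal_px=1000.0, baseline_mm=60.0))
```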

Analogously, in a method according to embodiments of the present invention, the step of detecting the eye position(s) and/or (in respective embodiments) the eyes convergence point may comprise illuminating at least a portion of the viewer's face using at least one infrared light source.

The display system according to the present invention may be integrated in a computer system and/or an entertainment arrangement, in particular in an in-flight entertainment system.

According to particularly preferred embodiments, the display system is included in a mobile device such as a notebook, a tablet computer or a smartphone. Thereby, the display system can be easily taken to respective environments and has a wide range of applicability.

Analogously, the method according to the present invention may preferably be executed by a mobile device, in particular by a mobile device according to an embodiment of the present invention.

The display system according to the present invention may comprise (further to the at least one eye-tracking camera) a stereo camera module adapted to record image data such as a picture or a video scene. In this case, the three-dimensional view (which the plurality of pixels can be operated to autostereoscopically provide) may be a three-dimensional live view of the respective recording.

Analogously, a method according to embodiments of the present invention may comprise recording image data (e.g., a picture or a video scene) by means of a stereo camera module and providing a three-dimensional live view of the respective recording.

Thereby, in particular the generation of content can be improved, as the respective result can be checked immediately.

According to an exemplary advantageous embodiment, such a stereo camera module may be combined or combinable with a microscope arrangement. Such embodiments facilitate the provision of a three-dimensional view of a respective microscopic sample on the screen of the display system.

The display system according to the present invention may further comprise connection means (such as at least one USB port) facilitating dual input of external content (image data) to be provided on the screen, autostereoscopically as a three-dimensional view or as a two-dimensional view.

Analogously, a method according to the present invention may further comprise receiving, from an external data source, dual data input of external content. The content autostereoscopically provided as a three-dimensional view may then comprise the external content.

In the following, details of the present invention are explained with reference to the accompanying drawings. As is to be understood, the various elements and components are depicted as examples only, may be facultative and/or may be combined in a manner different from that depicted. Reference signs for related elements are used comprehensively and are not defined again for each figure.

The drawings schematically show in

Fig. 1a: a portion of a screen comprising an activated lenticular lens system;

Fig. 1b: the portion of the screen of Fig. 1a, wherein the lenticular lens system is deactivated; and

Fig. 2: the architecture of a mobile device according to an exemplary embodiment of the present invention.

In Figures 1a and 1b, a portion of an exemplary screen 100 which might be included in a display system according to the present invention is schematically shown in a cross section.

The screen 100 comprises a plurality of pixels 120 which each comprise respective subpixels. Attached to the panel including the plurality of pixels 120 is a lenticular lens system 110 comprising an optoelectronic lens array with a plurality of lenses 111 arranged side by side. The lenses are filled with liquid crystals 112 each having an orientation depending on a respectively applied voltage:

As schematically indicated in Figure 1a by an open switch 130, in the situation depicted, no voltage is applied to the optoelectronic lenses 111. As a consequence, the lenticular lens system 110 is activated, such that light rays L emerging from the plurality of pixels 120 are refracted by the lenses 111. The screen 100 is thus in a first operation mode adapted to autostereoscopically provide a three-dimensional view of a content.

By contrast, in the situation shown in Figure 1b, the switch 130 is closed, such that a voltage is applied to the lenses 111. Thereby, the lenticular lens system 110 is deactivated, with the result that light rays L emerging from the plurality of pixels pass the lenses 111 without being refracted. The screen 100 is thus in a second operation mode appropriate for providing two-dimensional views.

The switching between the operation modes of the screen 100 and, therewith, of the display system including the screen, may be effected automatically based on a respective content to be provided or on a respective application program invoked to provide the content. Additionally or alternatively, the switching may be effectable by a user input.

In Figure 2, an exemplary mobile device 1 according to the present invention is shown; in the present case, the mobile device 1 is a tablet computer.

The mobile device 1 includes a display system 10 according to an exemplary embodiment of the present invention, the display system comprising a screen 100 which, in the explanatory Figure 2, is depicted both in a plan view as embedded in the mobile device 1 and - to illustrate its integration in the architecture of the display system 10 - as a functional entity comprising a plurality of pixels 120, a lenticular lens system 110 and a switch 130 for switching between a 2-D and a 3-D functionality.

Further comprised by the display system 10 is an eye-tracking system 200 with a first stereo camera module 210 including two cameras 211, each directed towards the viewer of the screen 100. The eye-tracking system 200 is configured to detect, in three dimensions, the current eye position(s) of one or both eyes of a viewer and, preferably, a current eyes convergence point. Therein, the respective detection is realised by employing an eye-tracking software module 220 associated with the first stereo camera module 210 and further comprised by the eye-tracking system 200.

In the exemplary embodiment depicted in Figure 2, the first stereo camera module 210 further comprises an infrared light source 201 (such as an LED) for illuminating at least a portion of the viewer's face. Like the cameras 211 (which in this case are preferably sensitive to infrared light), the infrared light source is also associated with the eye-tracking software module 220. As mentioned above, in particular in situations of poor light conditions, such an infrared light source 201 may improve the image data recorded by the cameras 211 and, therewith, the detection of the three-dimensional eye position and/or (in respective embodiments) of the eyes convergence point.

Moreover, the display system comprises a control unit 300 which is configured to operate the plurality of pixels 120 of the screen 100 based on at least one detected eye position and/or eyes convergence point, so as to autostereoscopically provide a respective content as a three-dimensional view. The control unit 300 is further configured to control the switch 130 for selecting whether a two-dimensional or a three-dimensional view of a respective content is to be provided.
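To summarise how the components of Figure 2 might cooperate in software, the following sketch wires hypothetical stand-ins for the eye-tracking system 200, the switch 130 and the pixel panel 120 into a minimal control-unit class; every interface name here is invented for illustration and none is prescribed by the application.

```python
class ControlUnit:
    """Illustrative stand-in for control unit 300 of Figure 2."""

    def __init__(self, eye_tracker, lens_switch, panel, mode_policy):
        self.eye_tracker = eye_tracker      # eye-tracking system 200 (hypothetical API)
        self.lens_switch = lens_switch      # switch 130 / lenticular lens system 110
        self.panel = panel                  # plurality of pixels 120
        self.mode_policy = mode_policy      # e.g. a policy like select_view_mode() above

    def present(self, content, app_name, content_type, user_override=None):
        mode = self.mode_policy(app_name, content_type, user_override)
        if mode == "3D":
            self.lens_switch.enable_3d()
            state = self.eye_tracker.read()                 # detected 3-D eye position(s)
            self.panel.render_autostereo(content, state)    # interleave per eye state
        else:
            self.lens_switch.enable_2d()
            self.panel.render_2d(content)
        return mode
```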

The control unit 300 may interact with an application software 400 comprising internal data. Thereby, at least a portion of the content (e.g., content of one or more videos or video games) may be generated or accessed.

Additionally or alternatively, the control unit 300 may interact with an application software 600 associated with a second stereo camera module 500 comprising cameras 501 (preferably directed away from the respective viewer). The second camera module 500 may be configured and/or deployed in accordance with one or more of various possible applications. For instance, the cameras may be configured to capture a scene in the environment and/or a sample of a microscope arrangement.

The software module 600 may be configured to process image data recorded by the second camera module 500, e.g. by means of an interactive application software included in the software module 600. The processing may comprise adding virtual features and/or augmenting the recording (i.e., the recorded image data) with information data.

By the processing, at least a portion of the content to be autostereoscopically provided on the screen 100 may be generated. According to a preferred embodiment, the three-dimensional view of the content may be autostereoscopically provided on the screen 100 simultaneously, or at least almost simultaneously, with the recording of the image data.

In the exemplary embodiment depicted in Figure 2, the software module 600 is further associated with a connection means 701 facilitating dual input of external (stereo) content (image data) to be represented. In particular, the connection means may be realised as USB ports. The software module 600 may be configured to process the external content, e.g. by means of an interactive application software, so as to generate content to be autostereoscopically provided on the screen 100 as a three-dimensional view. Again, the processing may comprise adding virtual features and/or augmenting the recorded data with information data.

To facilitate interaction of a viewer/user with one or more of the modules and/or units, the display system 10 preferably comprises input means (not shown) such as a touch-sensitive surface of the screen, a keypad and/or a computer mouse.

Disclosed is a display system 10 for providing three-dimensional views. The display system comprises a screen 100, at least one control unit 300 and at least one eye-tracking system 200 for detecting, in three dimensions, at least one eye position of a viewer of the screen. The control unit 300 is configured to operate the plurality of pixels 120 based on a detected three-dimensional eye position, so as to autostereoscopically provide a respective content as a three-dimensional view.

Further disclosed are a mobile device 1 comprising a display system 10 and a method for autostereoscopically providing three-dimensional views on a screen 100.

Reference Signs

1 mobile device
10 display system
100 screen
110 lenticular lens system
111 optoelectronic lens
112 liquid crystal
120 pixel
130 switch
200 eye-tracking system
210 first stereo camera module
211 camera
201 infrared light source
220 eye-tracking software module
300 control unit
400 application software
500 second stereo camera module
501 camera
600 application software
701 connection means facilitating dual input
L light rays