

Title:
MULTI-ANGLE LIGHT-FIELD DISPLAY SYSTEM
Document Type and Number:
WIPO Patent Application WO/2019/135087
Kind Code:
A1
Abstract:
An image generation system (100) comprising a free-form mirror (140), a first field lens (130) and a light field display (120), wherein the light field display is configured to project a first 3D image (190) into an intermediate imaging volume between the light field display and the field lens, such that a second 3D image (180) is reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, the second image being a real image corresponding to the first image.

Inventors:
YONTEM ALI (GB)
LI KUN (GB)
Application Number:
PCT/GB2019/050025
Publication Date:
July 11, 2019
Filing Date:
January 04, 2019
Assignee:
CAMBRIDGE ENTPR LTD (GB)
International Classes:
G02B27/22; H04N13/307; G02B17/08
Domestic Patent References:
WO2015184409A1, 2015-12-03
WO2018165123A1, 2018-09-13
WO2018100002A1, 2018-06-07
Foreign References:
US20170010473A1, 2017-01-12
Other References:
MUNKH-UCHRAL ERDENEBAT, GANBAT BAASANTSEREN, JAE-HYEUNG PARK, NAM KIM, KI-CHUL KWON, YOUNG-HEE JANG AND KWAN-HEE YOO: "Full-parallax 360 degrees horizontal viewing integral imaging using anamorphic optics", SPIE, PO BOX 10 BELLINGHAM WA 98227-0010, USA, 15 February 2011 (2011-02-15), XP040553220, DOI: 10.1117/12.872673
ALI ÖZGÜR YÖNTEM ET AL: "Design for 360-degree 3D Light-field Camera and Display", IMAGING AND APPLIED OPTICS 2018, 25 June 2018 (2018-06-25), XP055562196, ISBN: 978-1-943580-44-6, Retrieved from the Internet [retrieved on 20190226], DOI: 10.1364/3D.2018.3Tu5G.6
CHEN GUOWEN ET AL: "360 Degree Crosstalk-Free Viewable 3D Display Based on Multiplexed Light Field: Theory and Experiments", JOURNAL OF DISPLAY TECHNOLOGY, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 12, no. 11, 1 November 2016 (2016-11-01), pages 1309 - 1318, XP011625003, ISSN: 1551-319X, [retrieved on 20161007], DOI: 10.1109/JDT.2016.2598552
HIROSHI TODOROKI ET AL: "Light field rendering with omni-directional camera", PROCEEDINGS OF SPIE, vol. 5150, 23 June 2003 (2003-06-23), 1000 20th St. Bellingham WA 98225-6705 USA, pages 1159 - 1168, XP055562286, ISSN: 0277-786X, ISBN: 978-1-5106-2099-5, DOI: 10.1117/12.503259
HSU CHE-HAO ET AL: "HoloTube: a low-cost portable 360-degree interactive autostereoscopic display", MULTIMEDIA TOOLS AND APPLICATIONS, KLUWER ACADEMIC PUBLISHERS, BOSTON, US, vol. 76, no. 7, 21 April 2016 (2016-04-21), pages 9099 - 9132, XP036204135, ISSN: 1380-7501, [retrieved on 20160421], DOI: 10.1007/S11042-016-3502-3
Attorney, Agent or Firm:
WITHERS & ROGERS LLP (GB)
Claims:
CLAIMS

1. An image generation system for generating a three-dimensional image, the image generation system comprising:

a light field display for projecting a first 3D image,

a free-form mirror,

and a field lens for rendering a second image, said second image reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror,

wherein the second image is a real image corresponding to the first image, and

wherein the light field display projects the first 3D image in an intermediate imaging volume between the light field display and the field lens, such that the second image appears three-dimensional.

2. The system of claim 1 wherein the field lens is one of a hemispherical, spherical or ball lens or any combination of reciprocal/reversible optics.

3. The system of claim 1 wherein the field lens is provided by at least one Fresnel lens.

4. The system of claim 3 wherein the field lens is provided by a plurality of Fresnel lenses in a stacked configuration.

5. The system of any preceding claim wherein the light field display comprises a picture generation unit and a lens array.

6. The system of claim 5 wherein the picture generation unit comprises one of a laser scanner, a hologram generator, a pixelated display or a projector, wherein the projector comprises a light source and a spatial light modulator.

7. The system of claim 6 wherein the spatial light modulator is a digital micromirror device.

8. The system of any of claims 3-7 wherein the lens array comprises diffractive optical elements.

9. The system of any preceding claim further comprising an image processor in communication with the light-field display, wherein the image processor is configured to account for distortions caused by the optical set-up such that the second image appears undistorted.

10. The system of any preceding claim further comprising monitoring means for tracking the hands, fingers, heads and eyes of users.

11. The system of any preceding claim wherein the free-form mirror is one of a convex mirror and a conical mirror.

12. The system of any preceding claim wherein the free-form mirror comprises a truncated hemispherical portion and a separate second convex portion.

Description:
MULTI-ANGLE LIGHT-FIELD DISPLAY SYSTEM

TECHNICAL FIELD

The present disclosure relates to a light field display system. Particularly, but not exclusively, the disclosure relates to apparatus for displaying multi-depth images.

BACKGROUND

3D displays have been studied extensively for a long time. Several key methods have been developed, such as holography, integral imaging/light field imaging and stereoscopy. However, many existing display systems are either bulky (e.g. 3D cinema) and require special eye-wear, or unsuitable for out-of-lab applications (e.g. holography).

Furthermore, to have a natural feeling of 3D perception, the systems should be glasses free, multi-view, and have a large viewing angle. Ideally, an observer should be able to view a 3D object from all angles, 360° all around. Most of the volumetric 3D displays, which can provide glasses-free images, are based on projection in a diffusive medium.

The limitations of conventional 3D display methods such as holography can be overcome by techniques such as integral imaging. Integral imaging is an alternative to holography and can provide 3D images under incoherent illumination leading to out-of-lab implementations. However, such a system still produces a coarse representation of the original 3D scene.

A light-field system can provide high resolution images, but current systems mostly feature the capture process. In addition, experimental light field 3D display systems only provide a limited viewing angle from a fixed point of view. Captured light field images using commercially available cameras can only be observed on a 2D display with computational refocusing. Experimental 3D light-field displays are extremely bulky setups which require multiple projection sources.

Similarly, in integral imaging/light field systems, the imaging planes are limited to a planar configuration. As such, objects are imaged from one plane to a second, parallel plane. Therefore, these systems can only provide a planar field of view with a fixed viewing angle. Furthermore, the 3D reconstruction will be within the range of the fixed focused plane defined in a depth of field of a rectangular volume. It is an aim of the present invention to overcome at least some of these disadvantages.

SUMMARY OF THE INVENTION

Aspects and embodiments of the invention provide apparatus as claimed in the appended claims.

According to an aspect of the invention there is provided an image generation system for generating a three-dimensional image, the image generation system comprising a light field display for projecting a first 3D image, a free-form mirror, and a field lens for rendering a second image, said second image reflected by the free-form mirror and formed at a distance from the surface of the free-form mirror, wherein the second image is a real image corresponding to the first image, and wherein the light field display projects the first 3D image in an intermediate imaging volume between the light field display and the field lens, such that the second image appears three-dimensional.

This provides for a compact set up capable of generating a three-dimensional image for simultaneous viewing from multiple angles.

Optionally, the image generation system can operate on its own to display 3D images by being fed digitally (numerically) generated light field or holographic images.

Optionally, the free-form mirror is a convex mirror.

Optionally, the free-form mirror is a conical mirror.

Optionally, the free-form mirror is a concave mirror.

Optionally, the field lens is one of a hemispherical, spherical or ball lens.

Optionally, the field lens comprises at least one Fresnel lens. In an embodiment, a plurality of Fresnel lenses are provided in a stacked configuration. This results in a lower total focal length, providing a higher numerical aperture and thus a larger imaging angle. This is especially useful when it is desirable that the light rays reach the edges of the free-form mirror.
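As a worked illustration of the stacked configuration: under the standard thin-lens approximation, lenses in contact combine reciprocally (1/f_total = Σ 1/f_i), so stacking shortens the total focal length. The focal-length values below are illustrative assumptions, not taken from the application.

```python
# Combined focal length of thin lenses stacked in contact (thin-lens
# approximation): 1/f_total = sum(1/f_i). Stacking shortens the total
# focal length, widening the cone of rays the field lens can accept.
def stacked_focal_length(focal_lengths_mm):
    return 1.0 / sum(1.0 / f for f in focal_lengths_mm)

# Two hypothetical 200 mm Fresnel lenses stacked act as a 100 mm lens.
f_total = stacked_focal_length([200.0, 200.0])  # 100.0 mm
```

This is why a stack of inexpensive Fresnel lenses can substitute for a single fast lens when the rays must reach the mirror edges.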

Optionally, the light field display comprises a picture generation unit and a lens array. Optionally, the lens array is a liquid crystal lens array.

Optionally, the picture generation unit comprises one of a laser scanner, a hologram generator, a pixelated display or a projector, wherein the projector comprises a light source and a spatial light modulator.

Optionally, the pixelated display comprises one of an OLED, a QLED and a micro-LED display.

Optionally, the pixelated display comprises either a rectangular or a circular pixel configuration.

Optionally, the spatial light modulator is one of a digital micromirror device or a liquid crystal on silicon device.

Optionally, the image generation system further comprises an image processor in communication with the light-field display, wherein the image processor is configured to account for distortions caused by the optical set-up such that the second image appears undistorted. This obviates the need for any post-image generation corrections as well as bulky correction optics. Furthermore, it provides a higher degree of flexibility which can adapt to different display/mirror surfaces.
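A minimal sketch of such a pre-distortion step, assuming a simple hypothetical radial model r' = r(1 + kr²) in place of the actual mirror/lens characterisation (which the application does not specify):

```python
# Pre-distortion sketch (hypothetical radial model): if the optics map a
# display radius r to an observed radius r' = r * (1 + k * r^2), the image
# processor samples the source image at the forward-distorted position, so
# the optics' own distortion carries each sample back to its intended spot.
def predistort(image, k):
    """image: square 2D list of pixel values; coordinates are normalised
    to [-1, 1]. Returns the pre-distorted image to send to the display."""
    n = len(image)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            u = 2.0 * x / (n - 1) - 1.0   # normalised display coords
            v = 2.0 * y / (n - 1) - 1.0
            r2 = u * u + v * v
            su = u * (1.0 + k * r2)       # forward-distorted sample point
            sv = v * (1.0 + k * r2)
            sx = round((su + 1.0) * (n - 1) / 2.0)
            sy = round((sv + 1.0) * (n - 1) / 2.0)
            if 0 <= sx < n and 0 <= sy < n:
                out[y][x] = image[sy][sx]
    return out
```

With k = 0 (no distortion) the image passes through unchanged; in practice k, or a full lookup table, would come from calibrating the specific mirror and lens.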

Optionally, the image generation system further comprises a phased array of ultrasonic transducers configured to provide a tactile output corresponding to the dimensions of the second image. This provides haptic feedback to a user interacting with the displayed image, thereby improving the perceived reality of the displayed object. The transducer array is flexible and reconfigurable so as to conform to the exact size and shape of the displayed image.

Optionally, a tracking system is used to track hand gestures and head, eye, hand and finger positions in order to increase the accuracy of the tactile system.

Optionally, the phased array of ultrasonic transducers is located around the periphery of the free-form mirror.

Other aspects of the invention will be apparent from the appended claim set.

BRIEF DESCRIPTION OF THE DRAWINGS

One or more embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 2 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 3 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 4 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 5 is a schematic illustration of a portion of the image generation apparatus according to an aspect of the invention;

Figure 6 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 7 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 8 is a schematic illustration of a portion of the image generation apparatus according to an aspect of the invention;

Figure 9 is a schematic illustration of the free-form mirror according to an aspect of the invention;

Figure 10 is a schematic illustration of the free-form mirror according to an aspect of the invention;

Figure 11 is a schematic illustration of the image generation apparatus according to an aspect of the invention;

Figure 12 is a schematic illustration of the field lens according to an aspect of the invention;

Figure 13 is a schematic illustration of the free-form mirror according to an aspect of the invention;

Figure 14 is a schematic illustration of the free-form mirror according to an aspect of the invention;

Figure 15 is a schematic illustration of the image generation apparatus according to an aspect of the invention; and

Figure 16 is a flow chart according to an aspect of the invention.

DETAILED DESCRIPTION

Particularly, but not exclusively, the disclosure relates to an apparatus for projecting multi-dimensional true 3D images. The system can further be configured to provide 3D augmented reality images. Example applications include, but are not limited to, automotive, telepresence, entertainment (gaming, 3DTV, museums and marketing), and education.

Figure 1 shows an image generation system 100, made up of a light field display 120, a field lens 130, and a free-form mirror 140.

The light field display 120 is formed by a 2D display device 121 and a lens array 122. In an embodiment, the 2D display device 121 is an LCD screen (either a single device or multiple, tiled devices). In a further embodiment, the 2D display device 121 comprises a circular pixel configuration instead of the conventional rectangular configuration, wherein the relevant scaling and image generation is achieved by known image processing means and processes. In an alternative embodiment, the 2D display device 121 comprises a scanning mirror and a digital micromirror device (DMD), or a liquid crystal on silicon (LCoS) device, though the skilled person would appreciate that any suitable light source and imaging means (including 3D holographic display devices) may be used provided they were capable of operating in the manner described below. The lens array 122 comprises an array of diffractive optical elements (DOEs), such as photon sieves. In a further embodiment, the lens array 122 is provided by a liquid crystal lens array. In a further embodiment, the lens array 122 is provided by a reconfigurable DOE.

In an alternative embodiment, the lens array 122 comprises phase Fresnel lens patterns on phase-only LCoS. In an alternative embodiment, the lens array 122 comprises amplitude Fresnel lens patterns on a digital micromirror device (DMD) or amplitude only LCoS. In an alternative embodiment, the lens array 122 comprises conventional lenses. The skilled person would appreciate that any suitable image generation means, and lens array may be employed to provide the light field display 120.

The field lens 130 may be provided by any form of suitable lens, including, but not limited to, a hemispherical, spherical or ball lens.

In an embodiment, the field lens 130 comprises at least one Fresnel lens. In a particular embodiment, a plurality of Fresnel lenses are provided in a stacked configuration, as shown in Figure 12.

In the illustrated embodiment, the mirror 140 is a hemi-spherical, parabolic convex mirror, with a 360° field of view. The skilled person would appreciate that any curved or multi-angled reflective surface may be employed, and that the field of view of the mirror 140 need not be limited to 2π steradians. As a result, the volume around the mirror in which a resulting 3D image is displayed can be cylindrical, spherical, conical or any arbitrary shape.

In an embodiment, the free-form mirror 140 is formed by a Fresnel lens 142 on top of a flat surface mirror 141, as shown in Figure 13. This allows a parabolic mirror to be simulated by a setup which beneficially has a thinner form factor.

In a further embodiment, the free-form mirror 140 is formed by a holographic reflective plate 143 with an equivalent phase profile encoded, as shown in Figure 14.

The path of the light through the system is referred to as the optical path. The skilled person would understand that in further embodiments, any suitable number of intervening reflectors/lenses or other optical components are included so as to manipulate the optical path as necessary (for example, to minimize the overall size of the image generation system 100). In use, a series of 2D perspective images (elemental images) 192 are displayed on the 2D display 121 and each of the 2D perspective images is imaged through a corresponding lens of the lens array 122, such that an intermediate 3D image 190 is formed in an imaging volume between the light field display 120 and the field lens 130. This image is relayed through the field lens 130 before being reflected by the free-form mirror 140 towards one or more users 2, who in turn see a reconstructed real 3D image 180 projected at a distance from the free-form mirror surface.
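The elemental-image formation described above can be sketched in simplified form. The pinhole-lenslet model below, reduced to a 2D cross-section, and all geometry values in it are illustrative assumptions, not the application's actual optics:

```python
# Sketch of elemental-image generation for integral imaging (pinhole-lenslet
# model, 2D cross-section): each lenslet views the scene from a slightly
# shifted position, so each produces its own perspective "elemental image".
def elemental_images(points, lens_pitch, gap, n_lenses, res):
    """points: list of (x, z) scene points with z > 0 in front of the array;
    gap: display-to-lens-array distance; res: pixels per elemental image.
    Returns, per lenslet, the sorted list of lit pixel indices."""
    images = []
    for i in range(n_lenses):
        cx = (i - (n_lenses - 1) / 2.0) * lens_pitch  # lenslet centre
        lit = set()
        for (x, z) in points:
            # ray from the scene point through the lenslet centre hits the
            # display plane at a mirrored local offset behind that lenslet
            local = -(x - cx) * gap / z
            if abs(local) <= lens_pitch / 2.0:
                px = int((local / lens_pitch + 0.5) * res)
                lit.add(min(max(px, 0), res - 1))
        images.append(sorted(lit))
    return images

# One on-axis point: the centre lenslet lights its centre pixel, while the
# outer lenslets light slightly offset pixels (the parallax between views).
views = elemental_images([(0.0, 100.0)], 1.0, 2.0, 3, 10)
```

Displaying these elemental images behind the real lens array reverses the ray paths and reconstructs the point in the intermediate imaging volume.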

Figure 2 shows an embodiment of the image generation system 100 in which a spatial light modulator (SLM) 125 is used in place of the 2D display 121. The remaining components and their arrangement are otherwise identical to those described above in relation to Figure 1, the common reference numerals of Figures 1 and 2 referring to the same components. Accordingly, a series of holographic 3D elemental images 193 are generated for transmission through the lens array 122, in place of the 2D perspective images 192.

The resulting 3D image(s) can be displayed in different ways. Figure 3 shows the different ways a 3D image can be projected via the mirror 140 of the image generation system 100 of Figures 1 and 2. Whilst the field lens 130 and light field display 120 of the system 100 are not shown, they are arranged as described with reference to Figure 1. A large, single image can be reconstructed around the mirror 140 such that an observer can view different parts of the same 3D image. In an alternative embodiment, multiple different 3D images can be displayed around the mirror 140. Observers will view different objects at different locations around the mirror. In a further embodiment, different observers can view the same portion of a common object.

Figures 4 and 5 show a further embodiment of the image generation system 100 which includes a phased array of ultrasonic transducers 150 arranged around the periphery of the free-form mirror 140. Whilst Figure 4 does not depict the display 121, it is present and arranged as depicted in either of Figures 1 and 2, the phased ultrasonic array 150 being compatible with both the 2D display device and the holographic display device embodiments.

The phased array may be provided by any suitable means. In a preferred embodiment the phased array is an array of ultrasonic transducers 150. The phased array is configured to provide haptic feedback to the user in order to allow the user to interact with the displayed object. In Figure 5, whilst the field lens 130 and light field display 120 of the system 100 are not shown, they are arranged as described with reference to Figures 1 and 2. Accordingly, the embodiment of Figures 4 and 5 is identical to that of Figures 1 and 2 with the addition of the phased ultrasonic array 150, the common reference numerals of Figures 1, 2, 4 and 5 referring to the same components.

In use, the phased ultrasonic array 150 generates a pressure field configured to emulate the physical sensation of the 3D object being displayed thus providing haptic feedback to the user. Accordingly, the system 100 provides both visual and sensational cues of the reconstructed image. In a further embodiment, a synchronised sound system is used to generate a 3D surrounding and directional audible acoustic field. Such a system allows for the three components of the human sensory system (namely vision, somatosensation and audition) to be stimulated such that it is possible to completely simulate the sensation and interaction with physical objects.
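The pressure-field generation can be illustrated with a standard phased-array delay calculation: each transducer is delayed so all wavefronts arrive at a chosen focal point in phase, creating a localised pressure peak. The transducer layout and focal point below are hypothetical examples.

```python
import math

# Sketch of phased-array focusing (illustrative values): delay each
# transducer so that every wavefront arrives at the focal point at the
# same instant, producing a felt pressure peak at that point in mid-air.
def focus_delays(transducer_xy, focal_point, speed_of_sound=343.0):
    """transducer_xy: (x, y) positions in metres on the z = 0 plane;
    focal_point: (x, y, z) target in metres. Returns per-transducer time
    delays in seconds; the farthest transducer fires first (zero delay)."""
    fx, fy, fz = focal_point
    dists = [math.dist((x, y, 0.0), (fx, fy, fz)) for (x, y) in transducer_xy]
    d_max = max(dists)
    return [(d_max - d) / speed_of_sound for d in dists]

# A symmetric ring focusing on its axis needs no relative delays;
# a transducer directly under the focus is nearer and must wait.
ring = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]
delays = focus_delays(ring, (0.0, 0.0, 0.2))
```

Sweeping the focal point over the surface of the displayed 3D object, in sync with the rendered image, is what lets the array emulate the object's shape.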

Figures 6 and 7 show an embodiment of the invention in which the image generation system 100 is installed in a vehicle.

For simplicity, only the mirror 140 and the phased ultrasonic array 150 of the system are pictured. Whilst the field lens 130 and light field display 120 of the system 100 are not shown, they are arranged as described with reference to Figure 1.

In both Figures 6 and 7 the multi-direction projection capabilities of the image generation system 100 enable a first real 3D image 180a to be generated at a first region and displayed towards the driver 2a of the vehicle, whilst a second real image 180b is generated at a second region and displayed for the front passenger 2b. At the same time, virtual images 180c and 180d are displayed to the driver 2a and passenger 2b respectively by projecting images onto the windscreen 199 of the vehicle which are then reflected back towards the driver 2a and passenger 2b. Additional projection optics (a combination of mirrors and lenses) between the windscreen 199 and the mirror 140 can be used to scale the size of the virtual images. Accordingly, the image generation system operates as a head-up display (HUD). In an embodiment, the real images will be in focus at an apparent depth within reach of the observer, whereas the virtual images will be seen as augmented/mixed reality images overlaid on physical objects outside the vehicle. This allows the occupants in the front seats to observe HUD images and infotainment images and interact with them. The phased array, such as ultrasonic array 150, generates a pressure field configured to emulate the physical sensation of the 3D object being displayed to provide haptic feedback. Accordingly, the system 100 provides both visual and sensational cues of the reconstructed image. Though it is not shown in the figures, it is envisaged that the synchronised sound system described in relation to Figure 5 is optionally incorporated into the vehicle.

Figure 7 shows a further embodiment with multiple image generation systems 100 for projecting real images 180e and 180f to passengers 2c and 2d in the back seats of the vehicle. This embodiment functions in the manner described above.

Figures 9 and 10 depict an alternative configuration for the free-form mirror and field lens. The illustrated mirror is formed by two sub-mirrors 401 and 402. A first section 401 is convex, whilst a second section 402 is a truncated hemisphere. The mirror pair can, however, assume any matching surface shape to create the required geometry for the 3D image reconstruction.

In use, the intermediate 3D image 190 is relayed through the field lens 130 and is incident on the first mirror section 401 before being reflected towards the second section 402, which redirects the light so as to form a reconstructed real 3D image 180 projected at a distance from the free-form mirror surface.

This arrangement reduces the size of the system by allowing optical components to be at least partially accommodated in the volume defined by the curve of the first mirror section 401.

Figure 11 depicts a further embodiment of the image generation system 100 in which the lens array 122 is removed and replaced with a second lens array 123 which surrounds the free-form mirror 140.

The illustrated embodiment is compatible with both the 2D display device 121 of Figure 1 and the SLM 125 of Figure 2.

When the light field display 120 is configured to generate a series of 2D perspective images 192, a diffusive screen 160 is positioned around the periphery of the mirror 140. In use, the 2D perspective images are formed on the diffusive screen before being relayed through the second surrounding lens array 123 to generate the real 3D images 180. While the depicted diffusive screen 160 is cylindrical, any suitable shape may be used. When the light field display 120 utilises an SLM 125 to generate a series of 3D perspective images 193, no diffusive screen is employed.

Figures 15 and 16 depict a further embodiment of the image generation system 100 which includes an interactive hand tracking system 500. Whilst Figure 15 does not depict the display 121, 125, it is present and arranged as depicted in either of Figures 1 and 2, the hand tracking system 500 being compatible with both the 2D display device and the holographic display device embodiments.

The interactive hand tracking system 500 comprises a controller 510 in communication with the display device 121, 125 (or its equivalent) and the phased ultrasonic array 150. The controller 510 includes an image processing unit configured to recognise and track a user's hands in a known manner.

In use, the position and movement of one or more hands is captured by controller 510, whilst the display system 100 projects the relevant 3D object with which the user interacts. In an embodiment, the interactive hand tracking system 500 is used in conjunction with the phased ultrasonic array 150 in order to provide the sensation of tactile feedback corresponding to the displayed object and the detection of the user’s hands.

The process of recognising the user's hands, determining their position and prompting the appropriate response is carried out by the controller 510 in a known manner.

Figure 16 sets out an example of the operational steps of the interactive hand tracking system 500.

In step S501, the position of the user's hand is recorded by the controller 510 using any suitable known means. In an embodiment, the controller 510 includes a camera that feeds into the image processing unit.

At step S502, the background is removed and the general shape of the hand is determined by known background removal methods performed at the controller 510.

At step S503, the overall handshape is registered. Individual feature points of the hand image are identified and extracted for analysis. In an embodiment, this is achieved by comparing the extracted image of the hand to a database of known hand gestures accessible by the controller 510.

At step S504, the individual portions of the hand, including the fingers, are recognised via standard edge detection and shape recognition techniques that would be apparent to the skilled person. In an embodiment, recognisable gestures are stored as a series of feature points in the database. Detectable gestures include a button press, a swipe, a pick, a pull and a pinch zoom. The recorded feature points of the handshape are then compared to those in the database so as to identify the most likely gesture being performed.

At step S505, the controller is trained to better recognise points on hands and fingers, preferably using a real-time machine learning/deep learning algorithm. In an embodiment, the controller 510 comprises various models of hand and finger shapes which are cross-correlated with observations of the user's hand and a database of images of known hand and finger positions so as to enable identification and tracking of fingers and hands.

At step S506 the position of the hands within the 3D volume around the mirror is calculated. The determination of the hand's exact location is performed by the controller 510 in a known manner. In an embodiment, multiple perspective images of the hand are used to determine the 3D locations of the points on the hand uniquely. In an embodiment, a plurality of known scanning means are used to determine their respective distance from the hand, thereby providing its location in multiple dimensions. In another embodiment, the hand location is estimated from the observed size of the hand as compared to one or more images of a hand at a known distance stored in memory.
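The size-based estimate in the last embodiment can be sketched with a pinhole camera model. The hand width and focal length below are hypothetical calibration constants, not values from the application:

```python
# Sketch of size-based range estimation (pinhole camera model): the
# apparent pixel width of the hand shrinks in inverse proportion to its
# distance from the camera, so a calibrated reference size yields range.
def hand_distance(pixel_width, real_width_m=0.09, focal_length_px=800.0):
    """Estimate camera-to-hand distance in metres from the observed width.
    real_width_m and focal_length_px are hypothetical calibration values."""
    return focal_length_px * real_width_m / pixel_width

# A hand imaged 160 px wide under these assumed constants is ~0.45 m away.
d = hand_distance(160.0)
```

In practice the reference width would come from the stored images of a hand at a known distance mentioned above, rather than a fixed constant.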

At step S507, the position of the hands is correlated with the known virtual position of the one or more displayed 3D objects.

At step S508, the appropriate visual, haptic and audio feedback is presented to the user. The controller 510 being configured to adjust the shape, size and/or orientation of the displayed 3D objects and the output of the phased ultrasonic array to respond to the detected position and movements of the user's hand.

CROSSTALK

Figure 8a depicts a conventional display device 121 arranged relative to a lens array 122 as described in the image generation system 100. For simplicity, only a single lens of the lens array 122 is shown.

The image generation process relies on light from one portion of the display device 121 passing through a single corresponding lens of the lens array 122. If light from one pixel leaks into a neighbouring lens in the array (as shown in Figure 8a), this creates aliases around the generated 3D image.
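This leakage condition can be roughly sketched under an assumed simple geometric model (pixel emission cone of fixed half-angle, no baffle); the numeric values are illustrative only:

```python
import math

# Sketch of the crosstalk condition (illustrative geometry): a pixel at
# lateral offset x from its lenslet centre emits into a cone of half-angle
# theta. After crossing the display-to-lens gap, its light spans a footprint
# on the lens plane; if that footprint crosses the +/- pitch/2 boundary,
# light leaks into the neighbouring lens and produces aliased ghost images.
def leaks_into_neighbour(x, gap, half_angle_deg, pitch):
    spread = gap * math.tan(math.radians(half_angle_deg))
    return abs(x) + spread > pitch / 2.0

# A centred pixel with a narrow cone stays inside its own lenslet,
# but an edge pixel with the same cone spills into the neighbour.
centre_ok = leaks_into_neighbour(0.0, 3.0, 5.0, 2.0)
edge_bad = leaks_into_neighbour(0.9, 3.0, 5.0, 2.0)
```

The baffle array and the holographic plate described next are two ways of breaking this condition: one blocks the spilled light, the other redirects it.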

It is known to place a baffle array between the lens array 122 and the display device 121 to block the light from neighbouring regions.

In an alternative embodiment depicted in Figure 8b, a holographic plate 350 is used to redistribute the light coming from the backlight 300 and display device 121 such that every elemental image appears behind its corresponding lens in the array 122. This replaces the baffle (which is bulky and hard to manufacture) with a thin diffractive optical element plate.