

Title:
MULTI-PLANAR VOLUMETRIC REAL TIME THREE-DIMENSIONAL DISPLAY AND METHOD OF OPERATION
Document Type and Number:
WIPO Patent Application WO/2017/055894
Kind Code:
A1
Abstract:
A multi-planar volumetric display system and method of operation generate volumetric real-time three-dimensional images using a multi-surface optical device including a plurality of individual optical elements arranged in an array; an image projector for selectively projecting images on respective optical elements to generate a first volumetric three-dimensional image viewable in the multi-surface optical device; and a floating-image generator for projecting the first volumetric three-dimensional image to generate a second volumetric three-dimensional image viewable as floating in space at a location separate from the multi-surface optical device. The optical elements include liquid crystal elements having a controllable variable translucency. An optical element controller controls translucency of the liquid crystal elements, such that a single liquid crystal element is controlled to have an opaque light-scattering state to receive and display an image from the image projector, and the remaining liquid crystal elements are transparent to allow viewing of the displayed image.

Inventors:
OSMANIS ILMARS (LV)
OSMANIS KRIŠS (LV)
VALTERS GATIS (LV)
Application Number:
PCT/IB2015/057484
Publication Date:
April 06, 2017
Filing Date:
September 30, 2015
Assignee:
LIGHTSPACE TECH SIA (LV)
International Classes:
G02B27/22; H04N13/395
Foreign References:
US6100862A (2000-08-08)
US5552934A (1996-09-03)
US5572375A (1996-11-05)
USPP8244298P (1998-04-20)
US74348396A (1996-11-04)
US5090789A (1992-02-25)
US47229815A
Other References:
Martin Yellin, SPIE Conference Proceedings, vol. 75, 1976, pp. 97-102
Attorney, Agent or Firm:
ZARDS, Peteris (LV)
Claims:
1. A system for generating volumetric three-dimensional real-time images in space, the system comprising: a multi-surface optical device including a plurality of individual optical elements, in particular planar liquid crystal elements having a controllable variable translucency, arranged in an array;

an image projector for selectively projecting a set of images as two-dimensional slices of a three-dimensional real-time image onto respective liquid crystal elements to generate a first volumetric three-dimensional real-time image viewable in the multi-surface optical device; and a floating-image generator, a heads-up projector, or an augmented reality imaging device for projecting the first volumetric three-dimensional real-time image from the multi-surface optical device to generate a second volumetric real-time three-dimensional image viewable as floating in space at a location separate from the multi-surface optical device.

2. The system of claim 1 wherein each of the plurality of individual optical elements of the multi-surface optical device includes a liquid crystal element having a controllable variable translucency, the liquid crystal elements being stacked in a linear or non-linear array forming the multi-planar optical device.

3. The system of claim 1 wherein at least one of the plurality of planar liquid crystal elements has a curved surface for receiving and displaying a respective image.

4. The system of claim 2 further comprising: an optical element controller for controlling the translucency of the liquid crystal elements wherein: a single liquid crystal element is controlled to be synchronized with the output of a respective one of the set of images from the image projector for the single liquid crystal element to have an opaque light-scattering state to receive and display the respective one of the set of images from the image projector; and the remaining liquid crystal elements are controlled in order to be synchronized with the output of the respective one of the set of images in order to be substantially transparent to allow the viewing of the displayed image on the opaque liquid crystal element.

5. The system of claim 4 wherein the optical element controller rasters through the liquid crystal elements at a high rate during a plurality of imaging cycles to select one liquid crystal element therefrom to be in the opaque light-scattering state during a particular imaging cycle, whereby the optical element controller causes the opaque light-scattering state to move through the liquid crystal elements for successively receiving the set of images and for generating the volumetric three-dimensional real-time images with three-dimensional depth, wherein, by updating the three-dimensional projection data sets at a sufficient update rate of at least 25 volumetric frames per second (Hz), the image is perceived as a time-continuous moving three-dimensional volumetric image by the observer.

6. The system of claim 1 wherein the image projector projects the set of images into the multi-surface optical device to generate the entire first volumetric three-dimensional real-time image in the multi-surface optical device at a rate greater than 35 Hz to prevent human-perceivable image flicker.

7. The system of claim 6 wherein the multi-surface optical device includes N optical elements, N being selected to fulfill the image depth resolution required for a given application; wherein the image projector projects each of the set of images onto a respective optical element at a rate of at least N x 35 Hz to prevent human-perceivable image flicker, with each optical element having a transfer resolution of X by Y pixels selected to fulfill the required image transfer resolution, thereby forming a multi-planar optical device having N x X x Y physically addressable voxels.

8. The system of claim 1, wherein the image projector includes:

a projection lens for outputting the set of images; and

an adaptive optical focusing system for focusing each of the set of images on the respective optical elements to control the resolution and depth of the projection of the set of images from the projection lens, or a projection lens designed to provide sufficient focus resolution over the full depth of the multilayer screen; and

wherein a reduction of the display unit's physical volume is achieved by using a flat multi-mirror relay projection system or a curve-shaped mirror projection system that provides improved collimation of the projected image at a shorter projection throw distance.

9. The system of claim 1, wherein the image projector includes:

a plurality of laser light sources, high-power LED light sources, or a white light source with respective color filters or a filter wheel for projecting red, green, and blue light, respectively, to generate:

a combined image consisting of the required spectrum components, each modulated by a separate spatial light modulator providing at least a four-bit modulation depth per color; or a combined image consisting of the required spectrum components, all modulated sequentially by a single spatial light modulator providing at least a four-bit modulation depth per color at a three-times-greater modulation rate;

in order to project the set of images onto the plurality of optical elements in a plurality of colors.

10. A method for generating volumetric three-dimensional real-time images, the method comprising the steps of: providing image data corresponding to a set of two-dimensional slices of a three-dimensional image to an image projector;

selectively projecting each of the two-dimensional slices from the image projector onto a respective liquid crystal element selected from a plurality of liquid crystal elements forming a multi-surface optical device in order to generate a first volumetric three-dimensional real-time image viewable in the multi-surface optical device; and

projecting the first volumetric three-dimensional real-time image from the multi-surface optical device using a floating image generator, a heads-up projector, or an augmented reality imaging system in order to generate a second volumetric three-dimensional real-time image viewable as floating in space at a location separate from the multi-surface optical device; and controlling the translucency of each of the plurality of individual optical elements of the multi-surface optical device using an optical element controller;

wherein the step of controlling includes the steps of: causing a single liquid crystal element to have an opaque light-scattering state; and causing the remaining liquid crystal elements to have a translucency to allow the set of images to be respectively produced thereon;

wherein the step of controlling further includes the steps of:

rastering through the liquid crystal elements at a high rate during a plurality of imaging cycles;

selecting one liquid crystal element therefrom to be the single liquid crystal element in the opaque light-scattering state during a particular imaging cycle, causing the opaque light- scattering state to move through the liquid crystal elements;

synchronizing the projection of respective images in order to be on the corresponding single liquid crystal element in the opaque light-scattering state;

generating the volumetric three-dimensional real-time image to have three-dimensional depth using the synchronized projected images on respective liquid crystal elements in the opaque state; and

updating the three-dimensional projection data sets at a sufficient update rate of at least 35 volumetric frames per second (Hz), such that the image is perceived as a flicker-free, time-continuous, moving, three-dimensional, volumetric image by the observer.

Description:
MULTI-PLANAR VOLUMETRIC REAL TIME THREE-DIMENSIONAL DISPLAY

AND METHOD OF OPERATION

CROSS-REFERENCE TO RELATED

APPLICATIONS

This application is related to co-pending U.S. Provisional Patent Application Ser. No. 60/082,442, filed Apr. 20, 1998. This application is also related to co-pending U.S. Patent Application Ser. No. 08/743,483, filed Nov. 4, 1996, which is a continuation-in-part of U.S. Patent Application Ser. No. 08/152,861, filed Nov. 15, 1993 (now U.S. Pat. No. 5,572,375); which is a continuation-in-part of U.S. Patent Application Ser. No. 07/840,316, filed Feb. 24, 1992 (now U.S. Pat. No. 5,311,335, issued May 10, 1994); which is a division of U.S. Patent Application Ser. No. 07/562,271, filed Aug. 3, 1990 (now U.S. Pat. No. 5,090,789, issued Feb. 25, 1992). This application is also related to co-pending U.S. Application Ser. No. 09/004,722, filed Jan. 8, 1998.

BACKGROUND OF THE INVENTION

The present invention relates to three-dimensional imaging, and, more particularly, to a multi-planar display system for generating volumetric three-dimensional images in space.

It is known that three-dimensional (3D) images may be generated and viewed to appear in space. Typically, specialized eyewear such as goggles and/or helmets are used, but such eyewear can be encumbering. In addition, by its nature as an accessory to the eyes, such eyewear reduces the perception of viewing an actual 3D image. Also, the use of such eyewear may cause eye fatigue, which is remedied by limiting the time to view the image, and such eyewear is often bulky and uncomfortable to wear.

Thus, there is a need to generate volumetric 3D images and displays without the disadvantages of using such eyewear.

Other volumetric systems generate such volumetric 3D images using, for example, self-luminescent volume elements, that is, voxels. One example is the system of 3D Technology Laboratories of Mountain View, Calif., in which the intersection of infrared laser beams in a solid glass or plastic volume doped with rare earth impurity ions generates such voxel-based images. However, the non-linear effect that creates visible light from two invisible infrared laser beams has a very low efficiency of about 1%, which results in the need for powerful lasers to create a bright image in a large display. Such powerful lasers are a potential eye hazard requiring a significant protective enclosure around the display. Additionally, scanned lasers typically have poor resolution resulting in low voxel count, and the solid nature of the volumetric mechanism results in large, massive systems that are very heavy. Another volumetric display system from Actuality Systems, Inc. of Cambridge, Mass., uses a linear array of laser diodes that are reflected off of a rapidly spinning multifaceted mirror onto a rapidly spinning projection screen. However, such rapidly spinning components, which may be relatively large in size, must be carefully balanced to avoid vibration and possibly catastrophic failure. Additionally, the size, shape, and orientation of voxels within the display depends on their location, resulting in a position-dependent display resolution.

Another volumetric display system is provided by Neos Technologies, Inc., of Melbourne, Fla., which scans a laser beam acousto-optically onto a rapidly spinning helical projection screen. Such a large spinning component requires a carefully maintained balance independent of display motion. The laser scanner system has poor resolution and low speed, drastically limiting the number of voxels. Additionally, the size, shape, and orientation of voxels within the display depends on their location, resulting in a position-dependent display resolution. Finally, the dramatically non-rectilinear nature of the display greatly increases the processing requirements to calculate the different two-dimensional images.

Other types of 3D imaging systems are known, such as stereoscopic displays, which provide each eye with a slightly different perspective view of a scene. The brain then fuses the separate images into a single 3D image. Some systems provide only a single viewpoint and require special eyewear, or may perform head tracking to eliminate eyewear, but then the 3D image can be seen by a single viewer only.

Alternatively, the display may provide a multitude of viewing zones at different angles with the image in each zone appropriate to that point of view, such as multi-view autostereoscopic displays. The eyes of the user must be within separate but adjacent viewing zones to see a 3D image, and the viewing zone must be very narrow to prevent a disconcerting jumpiness as the viewer moves relative to the display. Some systems have only horizontal parallax/look-around. In addition, depth focusing-convergence disparity may rapidly lead to eyestrain that strongly limits viewing time. Additionally, stereoscopic displays have a limited field of view and cannot be used realistically with direct interaction technologies such as virtual reality and/or a force feedback interface.

Head-mounted displays (HMD) are typically employed in virtual reality applications, in which a pair of video displays present appropriate perspective views to each eye. A single HMD can only be used by one person at a time, and provides each eye with a limited field of view. Head tracking must be used to provide parallax.

Other display systems include holographic displays, in which the image is created through the interaction of coherent laser light with a pattern of very fine lines known as a holographic grating. The grating alters the direction and intensity of the incident light so that it appears to come from the location of the objects being displayed.

However, a typical optical hologram contains an enormous amount of information, so updating a holographic display at high rates is computationally intensive. For a holographic display having a relatively large size and sufficient field of view, the pixel count is generally greater than 250 million.

Accordingly, a need exists for high quality volumetric 3D imaging with computationally acceptable demands on processing systems and which has improved viewability and implementation.

SUMMARY OF THE INVENTION

A multi-planar volumetric display (MVD) system and method of operation are disclosed which generate volumetric three-dimensional images. The MVD system includes a multi-surface optical device including a plurality of individual optical elements arranged in an array; an image projector for selectively projecting a set of images on respective optical elements of the multi-surface optical device to generate a first volumetric three-dimensional image viewable in the multi-surface optical device; and a floating-image generator for projecting the first volumetric three-dimensional image from the multi-surface optical device to generate a second volumetric three-dimensional image viewable as floating in space at a location separate from the multi-surface optical device.

Each of the plurality of individual optical elements of the multi-surface optical device includes a liquid crystal element having a controllable variable translucency. An optical element controller is also provided for controlling the translucency of the liquid crystal elements, such that a single liquid crystal element is controlled to have an opaque light-scattering state to receive and display the respective one of the set of images from the image projector; and the remaining liquid crystal elements are controlled to be substantially transparent to allow the viewing of the displayed image on the opaque liquid crystal element.

The optical element controller rasters through the liquid crystal elements at a high rate during a plurality of imaging cycles to select one liquid crystal element therefrom to be in the opaque light- scattering state during a particular imaging cycle, and to cause the opaque light-scattering state to move through the liquid crystal elements for successively receiving the set of images and for generating the volumetric three-dimensional images with three-dimensional depth.

The image projector projects the set of images into the multi-surface optical device to generate the entire first volumetric three-dimensional image in the multi-surface optical device at a rate greater than 35 Hz to prevent human-perceivable image flicker. For example, the volume rate may be about 40 Hz. In one embodiment, for example, if about 50 optical elements are used with a volume rate of about 40 Hz, the image projector projects each of the set of images onto a respective optical element at a rate of 2 kHz.
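The per-element projection rate in this example follows directly from the volume rate and the element count; a minimal sketch of the arithmetic (using the example values above, with hypothetical variable names):

```python
# Sketch of the projection-rate arithmetic described above, using the
# example values from the text: 50 optical elements, 40 Hz volume rate.
num_elements = 50        # number of optical elements (planes) in the MOE device
volume_rate_hz = 40      # complete volumes per second (> 35 Hz avoids flicker)

# Each volume requires one 2D slice per optical element, so the projector
# must output individual slices at num_elements times the volume rate.
slice_rate_hz = num_elements * volume_rate_hz
print(slice_rate_hz)  # 2000 slices per second, i.e. 2 kHz
```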

The image projector includes a projection lens for outputting the set of images. The projector also includes an adaptive optical focusing system for focusing each of the set of images on the respective optical elements to control the resolution and depth of the projection of the set of images from the projection lens. Alternatively or in addition, the image projector includes a plurality of laser or power LED light sources for projecting red, green, and blue light, respectively, to generate and project the set of images in a plurality of colors.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 illustrates the disclosed multi-planar volumetric display system.

FIG. 2 illustrates an adaptive optics system used in an image projector.

FIG. 3 illustrates a flow chart of a method for inducing a multi-planar dataset.

FIG. 4 illustrates a voice coil driver based on an adaptive optics system.

FIG. 5 illustrates a multi-mirror relay and a flat mirror projection system.

FIG. 6 illustrates a multi-mirror relay and a curve-shaped mirror projection system.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

Referring now to FIG. 1, a multi-planar volumetric display system 10 is provided which generates three dimensional (3D) real-time images which are volumetric in nature, that is, the 3D images occupy a definite and limited volume of 3D space, and so exist at the location where the images appear. Thus, such 3D images are true 3D, as opposed to an image perceived to be 3D due to an optical illusion of vision such as by stereographic methods.

The 3D images generated by the system 10 can have a very high resolution and can be displayed in a large range of colors, and so can have the characteristics associated with viewing a real object. For example, such 3D images may have both horizontal and vertical motion parallax or look- around, allowing the viewer 12 to move yet still receive visual cues to maintain the 3D appearance of the 3D images.

In addition, a viewer 12 does not need to wear any special eyewear such as stereographic visors or glasses to view the 3D image, which is advantageous since such eyewear is encumbering, causes eye fatigue, etc. Furthermore, the 3D image has a continuous field of view both

horizontally and vertically, with the horizontal field of view equal to 360° in certain display configurations. Additionally, the viewer can be at any arbitrary viewing distance from the MVD system 10 without loss of 3D perception.

The multi-planar volumetric display system 10 includes an interface 14 for receiving 3D image data from the image data source 16B, such as a computer which may be incorporated into the system 10, or which may be operatively connected to the system 10 through communications channels from, for example, a remote location and connected over conventional telecommunications links or over any network such as the Internet. The interface 14 may be a PCI bus, a PCI Express bus, or the Thunderbolt interface from INTEL of Santa Clara, Calif. Other interfaces may be used, such as the High Definition Multimedia Interface (HDMI) of HDMI Licensing, LLC, DisplayPort of the Video Electronics Standards Association (VESA), and IEEE 802.3bj 100 Gb/s Backplane (Ethernet), as well as open or proprietary interfaces.

The interface 14 passes the 3D image data to a multi-planar volumetric display (MVD) controller 18, which includes image data and timing controller (IDT) 18A, multi-layer screen controller 18B, RGB light source control 18C, and spatial light modulator controller 18D.

The three-dimensional image to be viewed as a volumetric 3D image is converted by the IDT controller 18A into a series of two-dimensional image slices at varying depths through the 3D image. The frame data corresponding to the image slices may then be rapidly output by the SLM controller 18D to the image projector 20.
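The conversion performed by the IDT controller 18A amounts to indexing a voxel volume by depth; a minimal sketch, with hypothetical names (the patent does not specify a data layout):

```python
def volume_to_slices(volume):
    """Split a voxel volume indexed as volume[z][y][x] into per-depth 2D frames.

    Illustrative sketch of the slicing step described above: the number of
    slices equals the number of optical elements in the MOE device, one
    two-dimensional frame per depth plane. Names are hypothetical.
    """
    return [volume[z] for z in range(len(volume))]

# Example: a 50-plane volume at SVGA resolution (800 x 600), initially empty.
DEPTH, HEIGHT, WIDTH = 50, 600, 800
vol = [[[0] * WIDTH for _ in range(HEIGHT)] for _ in range(DEPTH)]
slices = volume_to_slices(vol)   # 50 frames, one per optical element
```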

The MVD controller 18 and the interface 14 may be implemented in a high-speed processing computer, such as multi-core NVIDIA graphics processing unit cards. Other implementations may be custom-designed hardware controllers based on a high-switching-speed (over 1 GHz) application-specific integrated circuit (ASIC) or programmable devices such as a field-programmable gate array (FPGA) with at least 200k logic gates and embedded high-speed serial transceivers. Accordingly, it is to be understood that the disclosed MVD system 10 and its components are not limited to a particular implementation or realization of hardware and/or software.

The image data source 16B may optionally be a 3D image application program of a computer which operates an application program interface (API) and image transformation engine (ITE) 16A for providing the 3D image data in an appropriate format to the MVD controller 18 through an input/output (I/O) device such as the interface 14. The MVD controller 18 may be hardware and/or software, for example, implemented in a personal computer and optionally using expansion cards for specialized data processing.

For example, an expansion card in the MVD controller 18 may include graphics hardware and/or software for converting the 3D dataset from the graphics data source 16B into the series of two-dimensional image slices forming a multi-planar dataset corresponding to the slices 24—30. Thus, the 3D image 34 is generated at real-time or near-real-time update rates for real-world applications such as real-time surgical imaging or surgical simulation, medical diagnostics, air traffic control, remotely operated vehicle (ROV) or unmanned aerial vehicle (UAV) operation, or military command and control. Such expansion cards may also include an image transformation engine (ITE) 16A for manipulating 3D datasets and texture memory for doing texture mapping of the 3D images.

Prior to transmission of the image data to the image projector 20, the MVD controller 18 or alternatively the graphics data source 16B may perform 3D anti-aliasing on the image data to smooth the features to be displayed in the 3D image 34, and so to avoid any jagged lines in depth, for example, between parallel planes along the z-direction, due to display pixelization caused by the inherently discrete voxel construction of the MOE device 32 with the optical elements 36—42 aligned in x-y planes normal to a z-axis. As the data corresponding to the image slices 24—30 is generated, an image element may appear near an edge of a plane transition, that is, between optical elements, for example, the optical elements 36—38. To avoid a jagged transition at the specific image element, both of slices 24, 26 may be generated such that each of the images 44—46 includes the specific image element, and so the image element is shared between both planes formed by the optical elements 36—38, which softens the transition and allows the 3D image 34 to appear more continuous. The brightness of the image elements on respective consecutive optical elements is varied in accordance with the location of the image element in the image data.
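One common way to vary brightness with depth position is linear weighting between the two nearest planes; the following is an illustrative sketch of such a scheme, not the patent's exact formula:

```python
def split_voxel_brightness(z, brightness=1.0):
    """Share one voxel's brightness between the two nearest depth planes.

    Illustrative sketch of the 3D anti-aliasing described above (the patent
    does not give a formula): a voxel at fractional depth z contributes to
    planes int(z) and int(z) + 1, weighted by its proximity to each, so the
    transition between consecutive optical elements appears continuous.
    """
    lower = int(z)       # index of the nearer optical element below z
    frac = z - lower     # fractional position between the two planes
    return lower, brightness * (1.0 - frac), lower + 1, brightness * frac

# A voxel 30% of the way between optical elements 3 and 4 lights element 3
# at roughly 70% brightness and element 4 at roughly 30%.
lo, b_lo, hi, b_hi = split_voxel_brightness(3.3)
```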

The graphics data source 16B and the MVD controller 18 may also perform zero-run encoding through the interface 14 in order to maximize the rate of transfer of image data to the MVD controller 18 for image generation. It is to be understood that other techniques for transferring the image data may be employed, such as the Motion Picture Experts Group (MPEG) data communication standards as well as delta (Δ) compression.

A 3D image may contain on the order of 50 SVGA-resolution images updated at a rate of 40 Hz, which results in a raw data rate of more than 2 GB/sec to be displayed. Such a raw data rate may be significantly reduced by not transmitting zeros. A volumetric 3D image is typically represented by a large number of zeros associated with the inside of objects and surrounding empty space. The graphics data source 16B may encode the image data such that a run of zeros is represented by a zero-run flag (ZRF) or zero-run code, followed by or associated with a run length. Thus, the count of the zeros may be sent for display without sending the zeros themselves. A 3D data image buffer in the MVD controller 18 may be initialized to store all zeros, and then, as the image data is stored in the image buffer, a detection of the ZRF flag causes the MVD controller 18 to jump ahead in the buffer by the number of data positions or pixels equal to the run length of zeros. The 3D data image buffer then contains the 3D data to be output to the image projector 20 by the SLM controller 18D to generate the two-dimensional images.
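The zero-run scheme above can be sketched as a simple encoder/decoder pair; this is a minimal illustration, and the sentinel value chosen for the ZRF is an assumption (the patent does not fix one):

```python
ZRF = -1  # zero-run flag sentinel (illustrative; the patent specifies no value)

def zero_run_encode(data):
    """Replace each run of zeros with a (ZRF, run_length) pair, as described above."""
    out, i = [], 0
    while i < len(data):
        if data[i] == 0:
            j = i
            while j < len(data) and data[j] == 0:
                j += 1
            out += [ZRF, j - i]   # flag followed by the count of zeros
            i = j
        else:
            out.append(data[i])
            i += 1
    return out

def zero_run_decode(encoded):
    """Expand each (ZRF, run_length) pair back into explicit zeros."""
    out, i = [], 0
    while i < len(encoded):
        if encoded[i] == ZRF:
            out += [0] * encoded[i + 1]  # jump ahead by the run length
            i += 2
        else:
            out.append(encoded[i])
            i += 1
    return out

# A short buffer with two zero runs shrinks from 9 values to 7:
frame = [5, 0, 0, 0, 0, 7, 0, 0, 9]
enc = zero_run_encode(frame)
```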

The image projector 20 has associated optics 22 for projecting the two-dimensional slices 24—30 of the 3D image at a high frame rate and in a time-sequential manner to a multiple optical element (MOE) device 32 for selective imaging to generate a first volumetric three-dimensional image 34 which appears to the viewer 12 to be present in the space of the MOE device 32. The MOE device 32 includes a plurality of optical elements 36—42 which, under the control of the multi-layer screen controller 18B, selectively receive each of the slices 24—30 as displayed two-dimensional images 44—50, with one optical element receiving and displaying a respective slice during each frame rate cycle. The number of depth slices generated by the MVD controller 18 is to be equal to the number of optical elements 36—42, that is, each optical element represents a unit of depth resolution of the volumetric 3D image to be generated and displayed. The optical elements 36—42 may be liquid crystal displays composed of, for example, nematic, ferroelectric, or cholesteric materials, or other polymer-free or polymer-stabilized materials, such as cholesteric textures using a modified Kent State formula known in the art for such compositions.

The overall display of each of the slices 24—30 by the optical elements 36—42 of the MOE device 32, as a set of displayed images, occurs at a sufficiently high frame rate as set forth below, such as rates greater than about 35 Hz, so that the human viewer 12 perceives a continuous volumetric 3D image 34, viewed directly and without a stereographic headset, instead of the individual two-dimensional images 44—50. Accordingly, in the illustration of FIG. 1, the images 44—50 may be cross-sections of a sphere, and so the 3D image 34 thus generated would appear as a sphere to the viewer 12 positioned in the midst of the optical elements 36—42 forming the MOE device 32.
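The per-cycle interplay of element opacity and slice projection can be sketched as a simple control loop; the callback names below are hypothetical stand-ins for the element controller and projector drivers, not an API from the patent:

```python
def run_display_cycle(elements, slices, set_opaque, set_transparent, project):
    """One volumetric imaging cycle, as described above (illustrative sketch;
    set_opaque / set_transparent / project are hypothetical driver callbacks).

    For each depth plane in turn: make exactly one liquid crystal element
    opaque (light-scattering), keep all others transparent, and project the
    corresponding 2D slice onto the opaque element, synchronized with its state.
    """
    for element, frame in zip(elements, slices):
        for other in elements:
            set_transparent(other)   # all elements pass light through...
        set_opaque(element)          # ...except the one receiving this slice
        project(frame, element)      # projection synchronized with opacity

# Simulated usage: record which element was opaque when each slice was projected.
state, log = {}, []
elems = ["e0", "e1", "e2"]
run_display_cycle(
    elems,
    ["s0", "s1", "s2"],
    set_opaque=lambda e: state.__setitem__(e, "opaque"),
    set_transparent=lambda e: state.__setitem__(e, "clear"),
    project=lambda f, e: log.append((f, e, state[e])),
)
```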

In alternative embodiments, the images 44—50 may be generated to display an overall image having a mixed 2D and 3D appearance, such as 2D text as a caption below the sphere, or 2D text on the sphere. One application may be a graphical user interface (GUI) control pad which has both 2D and 3D image characteristics to allow the viewer 12 to view a GUI, provided by standard operating systems such as Microsoft Windows, Linux, Android, or iOS, with 2D screen appearances as a virtual flat screen display, and with 3D images such as the sphere appearing on a virtual flat screen display.

The first volumetric 3D image 34 is viewable within a range of orientations. Furthermore, light 52 from the first volumetric 3D image 34 is further processed by a real image projector 54 to generate a second volumetric 3D image 56 which appears to the viewer 12 to be substantially the same image as the first volumetric 3D image 34 floating in space at a distance from the MOE device 32. The real image projector 54, or alternatively a floating image projector, may be a set of optics and/or mirrors for collecting light 52 emitted from the MOE device 32 and for re-imaging the 3D image 34 out into free space. The real image projector 54 may be a high definition volumetric display (HDVD) which includes a conventional spherical or parabolic mirror to produce a single viewing zone located on an optic axis of the MOE device 32. For example, the real image projection systems may be the apparatus described in U.S. Pat. Nos. 5,552,934 to Prince and 5,572,375 to Crabtree, IV, each of these patents being incorporated herein by reference. In alternative embodiments, holographic optics may be employed by the real image projector 54 with the same functions as conventional spherical or parabolic mirrors to generate a floating image 56 but with multiple viewing zones, such as one viewing zone in a center area aligned with the optic axis and viewing zones on either side of the optical axis, so multiple 3D floating images 56 may be viewed by multiple viewers.

In an alternative embodiment, a 3D heads-up display image may be created in front of the observer using a semi-transparent mirror system as the real image projector 54. In such an application, the projected volumetric 3D image appears to float in space in front of a see-through background image. In other words, the spatial 3D image may be induced as augmented reality (a heads-up display).

In other alternative embodiments, the real image projector 54 may include holographic optical elements (HOEs), that is, holograms in the conventional sense which do not show a recorded image of a pre-existing object. Instead, an HOE acts as a conventional optical element such as a lens and/or mirror to receive, reflect, and re-direct incident light. Compared to conventional optical elements such as glass or plastic, HOEs are very lightweight and inexpensive to reproduce, and may also possess unique optical characteristics not available in conventional optics. For example, an HOE may produce multiple images of the same object at different angles from a predetermined optical axis, and so the field of view of a display employing a relatively small HOE may be dramatically increased without increasing the optic size as required for conventional optics. Accordingly, using at least one HOE as the real image projector 54, the MVD system 10 may be fabricated to provide a relatively compact system with a 360° field of view. In addition, for an image projector 20 incorporating laser light sources, HOEs are especially compatible for high performance with such laser light sources due to the wavelength selectivity of the HOE.

Since either of the volumetric 3D images 34, 56 appears to the viewer 12 to have volume and depth, and optionally also color, the multi-planar volumetric display system 10 may be adapted for virtual reality and haptic/tactile applications, such as the example described below for tactile animation to teach surgery. The real image projector 54 allows the floating 3D image 56 to be directly accessible for virtual interaction. The MVD system 10 may include a user feedback device 58 for receiving hand movements from the viewer 12 corresponding to the viewer 12 attempting to manipulate either of the images 34, 56. Such hand movements may be translated by the user feedback device 58 into control signals which are conveyed to the image transformation engine (ITE) 16A to modify one or both of the images 34, 56 to appear to respond to the movements of the viewer 12. Alternatively, the user feedback device 58 may be operatively connected to the image data source 16B, which may include a 3D graphics processor, to modify one or both of the images 34, 56.

Another application of an MVD system 10 with a force feedback interface is a surgical simulator and trainer, in which the user may see and feel three-dimensional virtual anatomy, including animation such as a virtual heart beating and reacting to virtual prodding by a user, in order to obtain certification as a surgeon, to practice innovative new procedures, or even to perform remote surgery, for example, over the Internet using Internet communication protocols. Tactile effects may thus be combined with animation to provide real-time simulation and stimulation of users working with 3D images generated by the MVD system 10. For example, the viewer 12 may be a surgeon teaching medical students, in which the surgeon views and manipulates the first 3D image 34 in virtual reality, while the students observe the second 3D image 56 correspondingly manipulated and modified due to the real image projector 54 responding to changes in the first 3D image 34. The students then may take turns individually manipulating the image 34, such as the image of a heart, which may even be a beating heart by imaging animation as the 3D images 34, 56. The teaching surgeon may then observe and grade students in performing image manipulation as if such images were real, such as in a simulation of heart surgery.

THE MOE DEVICE

In an illustrated embodiment, the MOE device 32 is composed of a stack of single-pixel liquid crystal displays (LCDs), composed of glass, as the optical elements 36—42, which are separated by either glass, plastic, liquid, or air spacers. Alternatively, the optical elements 36—42 may be composed of plastic or other substances with various advantages, such as lightweight construction. The glass, plastic, and/or air spacers may be combined with the glass LCDs in an optically continuous configuration to eliminate reflections at internal interfaces. The surfaces of the LCDs and spacers may be optically combined by optical contact, index matching fluid, or optical cement, or coated with anti-reflective film layers. Alternatively, the spacers may be replaced by a liquid such as water, mineral oil, or index matching fluid, with such liquids able to be circulated through an external chilling device to cool the MOE device 32. Also, such liquid-spaced MOE devices 32 may be transported and installed empty to reduce the overall weight, and the spacing liquid may be added after installation.

In a preferred embodiment, the optical elements 36—42 are planar and rectangular, but alternatively they may be curved and/or of any shape, such as cylindrical. For example, cylindrical LCD displays may be fabricated by different techniques such as extrusion, and may be nested within each other. The spacing distance between the optical elements 36—42 may be constant, or in alternative embodiments may be variable, such that the depth of the MOE device 32 may be greatly increased without increasing the number of optical elements 36—42. For example, since the eyes of the viewer 12 lose depth sensitivity with increased viewing distance, the optical elements positioned further from the viewer 12 may be spaced further apart. Logarithmic spacing may be implemented, in which the spacing between the optical elements 36—42 increases linearly with the distance from the viewer 12.
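The variable-spacing arrangement above can be sketched in code. The following Python fragment is illustrative only; the function name and parameters are assumptions for this sketch, not part of the disclosure. It computes plane positions where the gap between successive optical elements grows linearly with distance from the viewer:

```python
def element_positions(n_elements, base_gap, growth):
    """Positions of optical elements along the depth axis, with the gap
    between successive elements growing by a fixed increment so that
    planes farther from the viewer are spaced farther apart."""
    positions = [0.0]
    gap = base_gap
    for _ in range(n_elements - 1):
        positions.append(positions[-1] + gap)
        gap += growth  # each successive gap is wider than the last
    return positions

# Five planes with a 1.0 unit first gap, widening by 0.5 per gap:
# gaps 1.0, 1.5, 2.0, 2.5 give positions [0.0, 1.0, 2.5, 4.5, 7.0]
```

With growth set to zero, the constant-spacing embodiment is recovered.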

THE HIGH FRAME RATE IMAGE PROJECTOR

The maximum resolution and color depth of the three-dimensional images 34, 56 generated by the MVD system 10 is directly determined by the resolution and color depth of the high frame rate image projector 20. The role of the MOE device 32 is primarily to convert the series of two-dimensional images from the image projector 20 into a 3D volume image. In one embodiment, the image projector 20 includes an arc lamp light source with a short arc. The light from the lamp is separated into red, green and blue components by color separation optics, and is used to illuminate three separate spatial light modulators (SLMs). After modulation by the SLMs, the three color channels are recombined into a single beam and projected from the optics 22, such as a focusing lens, into the MOE device 32, such that each respective two-dimensional image from the slices 24—30 is displayed on a respective one of the optical elements 36—42. In another embodiment, the image projector 20 includes high power solid state lasers instead of an arc lamp and color separation optics. Laser light sources have a number of advantages, including increased efficiency, a highly directional beam, and single wavelength operation. Additionally, laser light sources produce highly saturated, bright colors.

In another embodiment, the image projector 20 includes high power LED light sources for the three RGB color channels, in which the light is optically combined by a dichroic mirror system and illuminates a single SLM. The single SLM consequently modulates the individual RGB color planes, projecting them onto the selected target layer of the MOE device 32. This mode of operation provides a potentially simpler and lower cost design of the image projector, but requires three times greater switching speed of the SLM and corresponding switching of the three RGB power LEDs.

In a further embodiment, different technologies may be used to implement the SLM, provided that high speed operation is attained. For example, high speed liquid crystal devices, modulators based on micro-electromechanical (MEMS) devices, or other light modulating methods may be used to provide such high frame rate imaging. For example, the Digital Light Processing (DLP) technology of TEXAS INSTRUMENTS, located in Dallas, Tex.; the Grating Light Valve (GLV) technology of SILICON LIGHT MACHINES, located in Sunnyvale, Calif.; and the Analog Ferroelectric LCD devices of BOULDER NONLINEAR SYSTEMS, located in Boulder, Colo., may be used to modulate the images for output by the image projector 20. Also, the SLM may be a ferroelectric liquid crystal (FLC) device, and polarization biasing of the FLC SLM may be implemented.

Alternatively, for spatial modulation, liquid crystal on silicon SLM devices may be used. To obtain very high resolution images in the MVD system 10, the images 44—50 must be appropriately and rapidly re-focused onto each corresponding optical element of the MOE device 32, in order to display each corresponding image on the optical element at the appropriate depth. To meet such re-focusing requirements, adaptive optics systems are used, which may be devices known in the art, such as the fast focusing apparatus described in G. Vdovin, "Fast focusing of imaging optics using micro-machined adaptive mirrors", available on the Internet. As shown in FIG. 2, a membrane light modulator (MLM) 90 has a thin flexible membrane 92 which acts as a mirror with controllable reflective and focusing characteristics. The membrane 92 may be composed of plastic, nitrocellulose, "MYLAR", or a thin metal film under tension, coated with a conductive, reflective metal layer such as aluminum. An electrode and/or a piezoelectric actuator 94 is positioned substantially adjacent to the membrane 92. The electrode 94 may be flat or substantially planar to extend in two dimensions relative to the surface of the membrane 92. The membrane 92 is mounted substantially adjacent to the electrode 94 by a mounting structure 96, such as an elliptical or circular mounting ring.

The electrode 94 is capable of being placed at a high voltage, such as about 1,000 volts, from a voltage source 98. The voltage may be varied within a desired range to attract and/or repel the membrane 92. The membrane 92, which may be at ground potential by connection to ground 100, is thus caused by electrostatic attraction to deflect and deform into a curved shape, such as a parabolic shape. When so deformed, the membrane 92 acts as a focusing optic with a focal length, and thus a projection distance, which may be rapidly varied by varying the electrode voltage. For example, the curved surface of the membrane 92 may have a focal length equal to half of the radius of curvature of the curved membrane 92, with the radius of curvature being determined by the tension on the membrane 92, the mechanical properties of the material of the membrane 92, the separation of the membrane 92 and the electrode 94, and the voltage applied to the electrode 94. In one embodiment, the deflection of the membrane 92 is always toward the electrode 94.
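The focusing relation above can be expressed numerically. This sketch is a hypothetical helper, not part of the disclosure: it combines the stated focal length of half the radius of curvature with the standard thin-mirror equation to estimate where a projected image comes to focus.

```python
def membrane_projection_distance(radius_of_curvature, object_distance):
    """Image distance for the deformed membrane mirror: focal length is
    half the radius of curvature, and the thin-mirror equation
    1/f = 1/d_o + 1/d_i gives the focus distance d_i."""
    f = radius_of_curvature / 2.0
    return 1.0 / (1.0 / f - 1.0 / object_distance)

# Example: R = 0.2 m gives f = 0.1 m; an object plane at 0.15 m then
# focuses at 0.3 m, so varying R (via the electrode voltage) sweeps the focus.
```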

Alternatively, by placing a window with a transparent conducting layer on the opposite side of the membrane 92 from the electrode 94, and then applying a fixed voltage to the window, the membrane 92 may be caused to deflect in both directions; that is, either away from or toward the electrode 94, thus permitting a greater range of focusing images. Such controlled variation of such a membrane 92 in multiple directions is described, for example, in a paper by Martin Yellin in the SPIE CONFERENCE PROCEEDINGS, vol. 75, pp. 97-102 (1976).

In an alternative embodiment, the moving voice coil linear actuator (VCLA) principle may be used; see FIG. 4. The fixed part of the actuator consists of a magnetic flux return 152 with an attached permanent magnet 158. The sliding part consists of a coil holder 154 with an attached tubular driving coil 160. The reflecting mirror surface 154 is attached to the coil holder, and the linear movement is perpendicular to the mirror surface. The mirror surface deviation from its initial state is proportional to the electrical current in the driving coil. Like the MLM, the VCLA system is used for rapid refocusing of the image projector.

The optical effects of the deflections of the MLM 90 or VCLA 161 may be magnified by the projection optics 22, and cause the projected image from an object plane to be focused at varying distances from the image projector 20 at high re-focusing rates. Additionally, the MLM 90 may maintain a nearly constant image magnification over its full focusing range.

Referring to FIG. 2, the MLM 90 may be incorporated into an adaptive optics system 102, for example, to be adjacent to a quarter waveplate 104 and a beamsplitter 106 for focusing images to the projection optics 22. Images 110 from an object or object plane 112 pass through the polarizer 108 to be horizontally polarized by the beamsplitter 106, and thence pass through the quarter waveplate 104 to result in circularly polarized light incident on the membrane 92 for reflection and focusing. After reflection, such focused images 114 are passed back through the quarter waveplate 104, resulting in light 114 polarized at 90° to the direction of the incident light 110. The beamsplitter 106 then reflects the light 114 toward the projection optics 22 to form an image of the object. By using the quarter waveplate 104 and polarizer 108 with the MLM 90, the adaptive optics system may be folded into a relatively compact configuration, which avoids mounting the MLM 90 off-axis and/or at a distance from the projection lens 22.

The images may be focused at a normal distance FN, at a normal projection plane 116, from the projection optics 22, and may be refocused at a high rate between a minimum distance FMIN, at a minimum projection plane 118, and a maximum distance FMAX, at a maximum projection plane 120, from the projection optics 22, with high resolution of the image being maintained.

To obtain a more compact physical volume for the display, referring to FIG. 17, an image projector 178 with a long throw projection lens may be used with a multi-mirror relay system in which mirrors 172, 174, 176 project the volumetric image slices into the MOE device 170.

A further reduction in the physical volume of the volumetric display projection system may be achieved by using a projection system based on curved mirrors 182, 184, in which the high speed projector 186 projects the volumetric image slices into the MOE device 180.

In one alternative embodiment of the image projector 20, the adaptive optics may be used in a heads-up display to produce a 3D image that is not fixed in depth but instead may be moved toward or away from the viewer 12. Without using the MOE device 32, the 2D image slices 24—30 may be projected directly into the eye of the viewer 12 to appear at the correct depth. By rapidly displaying such slices 24—30 to the viewer 12, a 3D image is perceived by the viewer 12. In this embodiment of the MVD system 10, the adaptive optics of the image projector 20 and other components may be made very compact, to be incorporated into existing heads-up displays such as helmet-mounted displays or cockpit- or dashboard-mounted systems in vehicles.

In another embodiment, the slices 24—30 may be generated and projected such that some of the images 44—50 are respectively displayed on more than one of the optical elements 36—42, in order to oversample the depth by displaying the images over a range of depths in the MOE device 32 instead of at a single depth corresponding to a single optical element. For example, oversampling may be advantageous if the MOE device 32 has more planes of optical elements 36—42 than the number of image slices 24—30, so that the number of images 44—50 is greater than the number of image slices 24—30. For example, a slice 24 may be displayed on both of the optical elements 36—38 as images 44—46, respectively. Such oversampling generates the 3D image 34 with a more continuous appearance without increasing the number of optical elements 36—42 or the frame rate of the image projector 20. Such oversampling may be performed, for example, by switching multiple optical elements to an opaque state to receive a single projected slice during respective multiple projection cycles onto the respectively opaque optical elements.
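One way to realize the oversampling described above is a simple slice-to-plane schedule. The following sketch is illustrative only; the helper name and its rounding choice are assumptions, not taken from this disclosure. It repeats each slice over adjacent planes when the MOE device has more planes than slices:

```python
def oversample_schedule(n_slices, n_planes):
    """For each optical element (plane), select the image slice to project
    onto it; when n_planes > n_slices, each slice is repeated over a run
    of adjacent planes, oversampling the depth."""
    return [min(p * n_slices // n_planes, n_slices - 1) for p in range(n_planes)]

# Four slices spread over eight planes: each slice appears on two
# adjacent planes, i.e. oversample_schedule(4, 8) -> [0, 0, 1, 1, 2, 2, 3, 3]
```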

GENERATION OF THE 3D IMAGE FROM A MULTIPLANAR DATASET

To generate the set of 2D image slices 24—30 to be displayed as a set of 2D images 44—50 to form the 3D image 34, a multi-planar dataset is generated from the 3D image data received by the MVD controller 18 from the graphics data source 16. Each of the slices 24—30 is displayed at an appropriate depth within the MOE device 32; that is, the slices 24—30 are selectively projected onto a specific one of the optical elements 36—42. If the slices 24—30 of the 3D image 34 are made close enough, the image 34 appears to be a continuous 3D image. Optional multi-planar anti-aliasing described herein may also be employed to enhance the continuous appearance of the 3D image 34.

A method of computing a multi-planar dataset (MPD) is performed by the MVD system 10. In particular, the ITE 16A performs such a method to generate a multi-planar dataset suitable for outputting to the image projector 20. The method also includes fixed depth operation and anti-aliasing.

Referring to FIG. 3, the method responds in step 140 to interaction of the user 12 operating the MVD system 10, such as through a GUI or the optional user feedback device 58, to select and/or manipulate the images to be displayed. From such operation and/or interaction, the MVD system 10 performs image rendering in step 146 from image data stored in a frame buffer, which may be, for example, a memory of the ITE 16A. The frame buffer may include sub-buffers, such as the color buffer and the depth buffer. During a typical rendering process, the graphics computer of the ITE 16A computes the color and depth of each pixel and compares the depth with the value stored at the same (x,y) position in the depth buffer. If the depth of a new pixel is less than the depth of the previously computed pixel, then the new pixel is closer to the viewer, so the color and depth of the new pixel are substituted for the color and depth of the old pixel in the color and depth buffers, respectively. Once all objects in a scene are rendered as a dataset for imaging, the method continues in steps 142, 144, 148—150. Alternatively or in addition, the rendered images in the frame buffer may be displayed to the viewer 12 as a 3D image on a 2D computer screen as a prelude to generation of the 3D image as a volumetric 3D image 34, thus allowing the viewer 12 to select which images to generate as the 3D image 34.
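The depth-test update performed during rendering is the standard z-buffer rule, which can be sketched as follows (the buffer layout and function name are illustrative assumptions, not part of the disclosure):

```python
def zbuffer_write(color_buf, depth_buf, x, y, color, depth):
    """Keep the new pixel only if it is closer to the viewer than the
    pixel already stored at (x, y); otherwise leave both buffers alone."""
    if depth < depth_buf[y][x]:
        depth_buf[y][x] = depth
        color_buf[y][x] = color

# A farther fragment is rejected; a nearer fragment replaces both the
# stored color and the stored depth at that (x, y) position.
```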

In performing the method for MPD computation, the data from the color buffer is read in step 144, and the data from the depth buffer is read in step 146. The frame buffer may have, for example, the same number of pixels in the x-dimension and the y-dimension as the desired size of the image slices 24—30, which may be determined by the pixel dimensions of the optical elements 36—42. If the number of pixels per dimension is not identical between the frame buffer and the image slices 24—30, the data in the color and depth buffers are scaled in step 148 to have the same resolution as the MVD system 10 with the desired pixel dimensions of the image slices 24—30. The MVD controller 18 includes an output buffer in memory for storing a final MPD generated from the data of the color and depth buffers, which may be scaled data as indicated above.

Depending on the user interaction requested in step 142, the ITE 16A also performs linear or non-linear image transformations by processing the multi-planar datasets (MPDs). Transformations may include, but are not limited to, linear translation, rotation, zooming in/out, image panning, and color or intensity spatial gradient application, among others. Processing may be applied to both static and real-time streaming datasets.

The output buffer stores a set of data corresponding to the 2D images, with such 2D images having the same resolution and color depth as the images 44—50 to be projected from the slices 24—30. In a preferred embodiment, the number of images 44—50 equals the number of planes formed by the optical elements 36—42 of the MOE device 32. After the MPD calculations are completed and the pixels of the 2D images are sorted into the output buffer in step 150, the output buffer is transferred to an MVD image buffer, which may be maintained in a memory in the image projector 20, from which the 2D images are converted to the image slices 24—30 to form the 3D image 34 to be viewed by the viewer 12, as described above. The method then loops back to step 140, for example, concurrently with generation of the 3D image 34, to process new inputs and thence to update or change the 3D image 34 to generate, for example, animated 3D images.

The MVD system 10 may operate in two modes: variable depth mode and fixed depth mode. In variable depth mode, the depth buffer is tested prior to the MPD computations, including step 146, in order to determine a maximum depth value ZMAX and a minimum depth value ZMIN, which may correspond to the extreme depth values of the 3D image on a separate 2D screen prior to 3D volumetric imaging by the MVD system 10. In the fixed depth mode, ZMAX and ZMIN are assigned values by the viewer 12, either interactively or during application startup, to indicate the rear and front bounds, respectively, of the 3D image 34 generated by the MVD system 10. Variable depth mode allows all of the objects visible on the 2D screen to be displayed in the MOE device 32 regardless of the range of depths or of changes in image depth due to interactive manipulations of a scene having such objects.
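In variable depth mode, the pre-scan of the depth buffer amounts to finding its extreme values, as in this sketch (illustrative only; the function name is an assumption):

```python
def depth_bounds(depth_buf):
    """Scan the depth buffer before MPD computation and return
    (ZMIN, ZMAX), the nearest and farthest depths present in the scene."""
    values = [d for row in depth_buf for d in row]
    return min(values), max(values)

# depth_bounds([[0.2, 0.9], [0.5, 0.1]]) returns (0.1, 0.9)
```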

In the fixed depth mode, objects which may be visible on the 2D screen may not be visible in the MOE device 32, since such objects may lie outside the virtual depth range of the MOE device 32. In an alternative embodiment of the fixed depth mode, image pixels which are determined to lie beyond the "back" or rearmost optical element of the MOE device 32, relative to the viewer 12, may instead be displayed on the rearmost optical element. For example, from the perspective of the viewer 12 in FIG. 1, the optical element 36 is the rearmost optical element upon which distant images may be projected. In this manner, the entire scene of objects remains visible, but only objects with depths between ZMIN and ZMAX are visible in the volumetric 3D image generated by the MOE device 32.

In the MPD method described herein, using the values of ZMAX and ZMIN, the depth values within the depth buffer may be offset and scaled in step 148 so that a pixel with a depth of ZMIN has a scaled depth of 0, and a pixel with a depth of ZMAX has a scaled depth equal to the number of planes of optical elements 36—42 of the MOE device 32. In step 150, such pixels with scaled depths are then sorted and stored in the output buffer by testing the integer portion [d] of each scaled depth value d, and by assigning a color value from the color buffer to the appropriate MPD slice 24—30 at the same (x,y) coordinates.
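The offset-and-scale of step 148 followed by the integer-portion test of step 150 can be sketched as a single mapping from a raw depth value to a target slice index. The function name and the clamping of the ZMAX endpoint onto the last plane are assumptions made for this illustration:

```python
def depth_to_slice(depth, z_min, z_max, n_planes):
    """Map a depth in [z_min, z_max] onto a scaled depth in [0, n_planes],
    then take the integer portion as the MPD slice index; the z_max
    endpoint is clamped onto the last plane."""
    scaled = (depth - z_min) / (z_max - z_min) * n_planes
    return min(int(scaled), n_planes - 1)

# With z_min = 0.0, z_max = 1.0 and a 10-plane MOE device:
# depth 0.0 -> slice 0, depth 0.55 -> slice 5, depth 1.0 -> slice 9
```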

Using the disclosed MPD method, the volumetric 3D images 34 generated by the MVD system 10 may be incomplete; that is, objects or portions thereof are completely eliminated if such objects or portions are not visible from the point of view of a viewer viewing the corresponding 3D image on a 2D computer screen. In a volumetric display generated by the MVD system 10, image look-around is provided, allowing the viewer 12 in FIG. 1 to move to an angle of view such that previously hidden objects become visible, and so such MVD systems 10 are advantageous over existing 2D displays of 3D images.

In alternative embodiments, the MPD method may implement anti-aliasing, as described herein, by using the fractional portion of the scaled depth value; that is, d − [d], to assign such a fraction of the color value of the pixels to two adjacent MVD image slices in the set of slices 24—30. For example, if a scaled depth value is 5.5 and each slice corresponds to a discrete depth value, half of the brightness of the pixel is assigned to each of slice 5 and slice 6. Alternatively, if the scaled depth is 5.25, 75% of the color value is assigned to slice 5 because slice 5 is "closer" to the scaled depth, and 25% of the color value is assigned to slice 6.

Different degrees of anti-aliasing may be appropriate to different visualization tasks. The degree of anti-aliasing may be varied from one extreme, ignoring the fractional depth value when assigning the color value, to another extreme of using all of the fractional depth value, or the degree of anti-aliasing may be varied to any value between such extremes. Such variable anti-aliasing may be performed by multiplying the fractional portion of the scaled depth by an anti-aliasing parameter, and then negatively offsetting the resulting value by half of the anti-aliasing parameter. The final color value may be determined by fixing or clamping the negatively offset value to be within a predetermined range, such as between 0 and 1. An anti-aliasing parameter of 1 corresponds to full anti-aliasing, and an anti-aliasing parameter of infinity, ∞, corresponds to no anti-aliasing. Anti-aliasing parameters less than 1 may also be implemented.
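One possible reading of the variable anti-aliasing rule is sketched below. Note an assumption: the offset used here is (p − 1)/2 rather than a literal p/2, chosen so that a parameter of 1 reproduces the plain fractional split of the preceding examples while a very large parameter reduces to nearest-plane assignment; the exact offset convention is not fixed by the text above.

```python
def antialias_weight(scaled_depth, aa_param):
    """Fraction of the pixel's color assigned to the farther of the two
    adjacent slices: multiply the fractional depth by the anti-aliasing
    parameter aa_param, offset by (aa_param - 1) / 2 (an assumed
    convention), and clamp the result to [0, 1]."""
    frac = scaled_depth - int(scaled_depth)
    w = frac * aa_param - (aa_param - 1.0) / 2.0
    return max(0.0, min(1.0, w))

# aa_param = 1 (full anti-aliasing): a scaled depth of 5.25 gives 0.25
# to slice 6 and the remaining 0.75 to slice 5, matching the example above.
# A very large aa_param assigns the whole color to the nearer slice.
```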

In scaling the depth buffer values, a perspective projection may be used, as specified in the Open Graphics Library (OpenGL) multi-platform software interface to graphics hardware supporting rendering and imaging operations. Such a perspective projection may result in a non-linearity of the values in the depth buffer. For an accurate relationship between the virtual depth and the visual depth of the 3D image 34, the MVD controller 18 takes such non-linearity into account in producing the scaled depth in step 148. Alternatively, an orthographic projection may be used to scale the depth buffer values in step 148.

In existing 2D monitors, perspective is generated computationally in the visualization of 3D data to create a sense of depth such that objects further from the viewer appear smaller, and parallel lines appear to converge. In the disclosed MVD system 10, the 3D image 34 is generated with a computational perspective to create the aforesaid sense of depth, and so the depth of the 3D image 34 is enhanced.


ALTERNATIVE EMBODIMENTS OF THE MVD SYSTEM

In one alternative embodiment, the MOE device 32 includes 10 liquid crystal panels 36—42 and is dimensioned to be 5.5 inches (14 cm.) long by 5.25 inches (13.3 cm.) wide by 2 inches (4.8 cm.) in depth. The image projector 20 includes an acousto-optical laser beam scanner using a pair of ion lasers to produce red, green and blue light, which is modulated and then scanned by high frequency sound waves. The laser scanner is capable of vector scanning 166,000 points per second at a resolution of 200 x 200 points. When combined with the 10 plane MOE device 32 operating at 40 Hz, the MVD system 10 produces 3D images with a total of 400,000 voxels, that is, 3D picture elements. A color depth of 24-bit RGB resolution is obtained, with an image update rate of 1 Hz. Using a real image projector 54, a field of view of 100° x 45° can be attained.

In another alternative embodiment, the MOE device 32 includes 12 liquid crystal panels 36—42 and is dimensioned to be 6 inches (15.2 cm.) long by 6 inches (15.2 cm.) wide by 3 inches (7.7 cm.) in depth. The image projector 20 includes a pair of TEXAS INSTRUMENTS DLP video projectors, designed to operate in field-sequential color mode to produce grayscale images at a frame rate of 180 Hz. By interlacing the two projectors, effectively a single projector is formed with a frame rate of 360 Hz, to produce 12 plane volumetric images at a rate of 30 Hz. The transverse resolution attainable is 640 x 480 points. When combined with the 12 plane MOE device 32 operating at 30 Hz, the MVD system 10 produces gray 3D images with a total of 3,686,400 voxels. A color depth of 8-bit grayscale resolution is obtained, with an image update rate of 10 Hz. Using a real image projector 54, a field of view of 100° x 45° can be attained.

In a further alternative embodiment, the MOE device 32 includes 50 liquid crystal panels 36—42 and is dimensioned to be 15 inches (38.1 cm.) long by 13 inches (33.0 cm.) wide by 10 inches (25.4 cm.) in depth. The image projector 20 includes a high speed analog ferroelectric LCD available from BOULDER NONLINEAR SYSTEMS (Meadowlark Optics), which is extremely fast with a frame rate of about 10 kHz. The transverse resolution attainable is 512 x 512 points. When combined with the 50 plane MOE device 32 operating at 40 Hz, the MVD system 10 produces 3D images with a total of 13,107,200 voxels. A color depth of 24-bit RGB resolution is obtained, with an image update rate of 10 Hz. By using a real image projector 54, a field of view of 100° x 45° can be attained. With such resolutions and a volume rate of 40 Hz non-interlaced, the MVD system 10 has a display capability equivalent to a conventional monitor with a 20 inch (50.8 cm.) diagonal.

In another embodiment, the optical elements 36—42 may have a transverse resolution of 1280 x 1024 and a depth resolution of 256 planes. The system will potentially operate in a depth interlaced mode in which alternate planes are written at a total rate of 75 Hz, with the complete volume updated at a rate of 37.5 Hz. Such interlacing provides a higher perceived volume rate without having to increase the frame rate of the image projector 20.

In a further embodiment, the MOE device 32 includes 500 planes for a significantly large depth resolution, and a transverse resolution of 2048 x 2048 pixels, which results in a voxel count greater than 2 billion voxels. The size of the MOE device 32 in this configuration is 33 inches (84 cm.) long by 25 inches (64 cm.) wide by 25 inches (64 cm.) in depth, which is equivalent to a conventional display with a 41 inch (104 cm.) diagonal. The image projector 20 in this embodiment includes the Grating Light Valve technology of SILICON LIGHT MACHINES, to provide a frame rate of 20 kHz.
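The voxel counts quoted in the embodiments above follow directly from the transverse resolution multiplied by the number of planes, which can be verified:

```python
# Transverse resolution x plane count = voxel total for each embodiment.
assert 200 * 200 * 10 == 400_000          # 10-plane acousto-optic scanner
assert 640 * 480 * 12 == 3_686_400        # 12-plane dual-DLP embodiment
assert 512 * 512 * 50 == 13_107_200       # 50-plane ferroelectric LCD
assert 2048 * 2048 * 500 > 2_000_000_000  # 500-plane GLV embodiment
```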

VIRTUAL INTERACTION APPLICATIONS

Alternative embodiments of the MVD system 10 incorporating the user feedback device 58 as a force feedback interface allow the viewer 12 to perceive and experience touching and feeling the 3D images 34, 56 at the same location where the 3D images 34, 56 appear. The MVD system 10 can generate high resolution 3D images 34, 56, and so virtual interaction is implemented in the MVD system 10 using appropriate force feedback apparatus to generate high resolution surface textures and very hard surfaces, that is, surfaces which appear to resist and/or to have low compliance in view of the virtual reality movements of portions of the surfaces by the viewer 12. Accordingly, the user feedback device 58 includes high resolution position encoders and a high frequency feedback loop to match the movements of the hands of the viewer 12 with modifications to the 3D images 34, 56 as well as force feedback sensations on the viewer 12. Preferably, the user feedback device 58 includes lightweight and compact virtual reality components, such as force-feedback-inducing gloves, so that the reduced mass and bulk, and the associated weight and inertia, of the components impede the motions of the viewer 12 as little as possible.

Such user feedback devices may include lightweight carbon composites to dramatically reduce the weight of any wearable components worn by the viewer 12. Furthermore, very compact and much higher resolution fiber-optic or capacitive position encoders may be used instead of the bulky optical position encoders known in the art to determine the positions of portions of the viewer 12, such as hand and head orientations.

The wearable components worn by the viewer 12 include embedded processor systems to control the user feedback device 58, thus relieving the processing overhead of the MVD controller 18 and/or the interface 14. By using an embedded processor whose only task is to run the interface, the feedback rate for the overall MVD system 10 may be greater than 100 kHz. When combined with very high resolution encoders, the MVD system has a dramatically high fidelity force feedback interface.

Using such virtual interaction technologies with the MVD system 10 which is capable of displaying such volumetric 3D images 34, 56, a 3D GUI is implemented to allow a viewer 12 to access and directly manipulate 3D data. Known interface devices such as the data glove, video gesture recognition devices, and a FISH SENSOR system available from the MIT MEDIA LAB of Cambridge, Mass., can be used to allow a user to directly manipulate 3D data, for example, in 3D graphics and computer aided design (CAD) systems.

For such 3D image and data manipulation, the MVD system 10 may also incorporate a 3D mouse device, such as the SPACE BALL available from Spacetec Inc. of Lowell, Mass., as well as a 3D pointing device which moves a 3D cursor anywhere in the display volume around the image 34 in the same manner as the viewer 12 moves a hand in real space.

Alternatively, the MVD system 10, through the user feedback device 58, may interpret movement of the hand of the viewer 12 as the 3D cursor.

In one embodiment, the user feedback device 58 may include components for sensing the position and orientation of the hand of the viewer 12. For example, the viewer 12 may hold or wear a position sensor such as a magnetic sensor available from POLYHEMUS, INC., and/or other types of sensors such as positional sensors incorporated in virtual reality data gloves. Alternatively, the position of the hand is sensed within the volume of the display of the 3D image 34 through the use of computer image processing, or a radiofrequency sensor such as sensors developed at the MIT MEDIA LAB. To avoid muscle fatigue, the user feedback device 58 may sense the movement of a hand or finger of the viewer 12 in a much smaller sensing space that is physically separate from the displayed 3D image 34, in a manner similar to 2D movement of a conventional 2D mouse on the flat surface of a desktop to control the position of a 2D cursor on a 2D screen of a personal computer.
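The mapping from a small sensing space to the larger display volume can be sketched as a simple 3D analogue of mouse-to-screen scaling. This is a minimal illustrative sketch; the function name, the linear mapping, and all dimensions are assumptions rather than details of the disclosed device.

```python
# Hypothetical sketch: linearly map a hand position in a small sensing
# volume to a cursor position in the larger display volume of the MOE
# device, clamping so the cursor stays within the display bounds.

def map_to_display(hand_pos, sense_min, sense_max, disp_min, disp_max):
    """Map a 3D hand position from the sensing volume to the display volume."""
    cursor = []
    for h, s0, s1, d0, d1 in zip(hand_pos, sense_min, sense_max, disp_min, disp_max):
        t = (h - s0) / (s1 - s0)      # normalized position within the sensing volume
        t = max(0.0, min(1.0, t))     # clamp to keep the cursor inside the display
        cursor.append(d0 + t * (d1 - d0))
    return tuple(cursor)

# Example: a 10 cm sensing cube controls a 50 cm display volume.
pos = map_to_display((5.0, 2.5, 7.5), (0, 0, 0), (10, 10, 10), (0, 0, 0), (50, 50, 50))
```

As with a desktop mouse, the small sensing space lets the viewer rest the hand rather than reach into the full display volume, which addresses the muscle fatigue concern above.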

ADVANTAGES OF THE MVD SYSTEM

Using the MVD system 10, the 3D images 34, 56 are generated to provide for natural viewing by the viewer 12, that is, the 3D images 34, 56 have substantially all of the depth cues associated with viewing a real object, which minimizes eye strain and allows viewing for extended time periods without fatigue.

The MVD system 10 provides a high resolution/voxel count, with the MOE device 32 providing voxel counts greater than, for example, 3,000,000, which is at least one order of magnitude over many volumetric displays known in the art. In addition, by preferably using a rectilinear geometry for displaying the 3D image 34, such as an MOE device 32 having a rectangular cross-section adapted to displaying image slices 24—30 as 2D images 44—50, the MVD system 10 uses a coordinate system which matches the internal coordinate systems of many known graphics computers and graphical applications programs, which facilitates and maximizes computer performance and display update rate without requiring additional conversion software. Additionally, in a preferred embodiment, the image voxels of the MOE device 32 have identical and constant shapes, sizes, and orientations, which thus eliminates image distortion in the 3D image 34.
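The voxel-count arithmetic behind such figures is straightforward: the total count is the 2D resolution of each image slice times the number of optical elements. The specific resolution and plane count below are assumptions chosen for illustration, not the patent's specification.

```python
# Illustrative voxel-count arithmetic for a multi-planar display.

def voxel_count(width_px, height_px, num_planes):
    """Total voxels = pixels per 2D image slice times number of optical elements."""
    return width_px * height_px * num_planes

# e.g. assumed 512 x 512 image slices projected onto 50 liquid crystal planes:
total = voxel_count(512, 512, 50)   # 13,107,200 voxels, well above 3,000,000
```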

Unlike multi-view autostereoscopic displays known in the art, the MVD system 10 provides a wide field of view with both horizontal and vertical parallax, which allows the 3D image to be "looked around" by the viewer 12 in multiple dimensions instead of only one. In addition, unlike multi-view autostereoscopic displays, the field of view of the MVD system 10 is continuous in all directions, that is, there are no disconcerting jumps in the 3D image 34 as the viewer 12 moves with respect to the MOE device 32.

Further, due to the static construction of the optical elements 36—42 in the MOE device 32, there are no moving parts which, upon a loss of balance of the entire MOE device 32, could result in image distortions, display vibrations, or even catastrophic mechanical failure of the MOE device 32.

The MVD system 10 may also avoid occlusion, that is, the obstruction by foreground objects of light emitted by background objects. A limited form of occlusion, called computational occlusion, may be produced by picking a particular point of view, and then simply not drawing surfaces that cannot be seen from that point of view, in order to improve the rate of image construction and display. In one embodiment, the MVD system 10 compensates for the lack of occlusion by interspersing a scattering optical element displaying an image with other optical elements in a scattering state to create occlusion by absorbing background light. Guest-host polymer-dispersed liquid crystals may be used in the optical elements 36—42, in which a dye is mixed with the liquid crystal molecules, allowing the color of the material to change with applied voltage.

The MVD system 10 also has little to no contrast degradation due to ambient illumination of the MVD system 10, since the use of the real image projector 54 requires a housing extending to the MOE device 32, which in turn reduces the amount of ambient light reaching the MOE device 32 and thereby prevents contrast degradation.

Alternatively, contrast degradation may be reduced by increasing the illumination from the image projector 20 in proportion to the ambient illumination, and by installing an absorbing plastic enclosure around the MOE device 32 to reduce the image brightness to viewable levels. The ambient light must pass through the absorbing enclosure twice to reach the viewer 12 — once on the way in and again after scattering off the optical elements 36—42 of the MOE device 32. In contrast, the light from the image projector 20 which forms the images 44—50 passes through the absorbing enclosure only once on the way to the viewer 12, and so suffers a reduced loss of illumination, which is a function of the square root of the loss suffered by the ambient light.
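The square-root relationship follows directly from the pass counts and can be checked with a short calculation. The transmittance value below is an assumed figure for illustration only.

```python
import math

# Sketch of the enclosure's asymmetric attenuation: ambient light crosses
# the absorbing enclosure twice (on entry, then again after scattering off
# the optical elements), so it is attenuated by T * T; projected image
# light crosses only once and is attenuated by T.

T = 0.5                  # assumed single-pass transmittance of the enclosure
ambient_out = T * T      # ambient light: two passes -> 25% transmitted
image_out = T            # image light: one pass -> 50% transmitted

# The image light's transmitted fraction is the square root of the ambient
# light's, matching the relationship described in the text above.
assert math.isclose(image_out, math.sqrt(ambient_out))
```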

An alternative embodiment reduces the effects of ambient light by using an enclosure with three narrow spectral bandpasses in the red, green, and blue, and a high absorption for out-of-band light, which is highly effective at reducing such ambient light effects. Greater performance in the presence of ambient light is obtained by using laser light sources in the image projector 20, since the narrowband light from laser light sources passes unattenuated after scattering from the MOE device 32, while the broadband light from the ambient illumination is mostly absorbed.
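A rough model makes the narrowband advantage concrete: laser light falls entirely inside a passband and is transmitted, while broadband ambient light is transmitted only in proportion to the total passband width relative to the visible spectrum. The passband positions and widths below are assumed for illustration; the patent does not specify them.

```python
# Rough model of a tri-bandpass enclosure with assumed blue/green/red bands.

passbands_nm = [(445, 455), (525, 535), (635, 645)]   # assumed 10 nm bands
visible_nm = (400, 700)

pass_width = sum(hi - lo for lo, hi in passbands_nm)             # 30 nm total
ambient_fraction = pass_width / (visible_nm[1] - visible_nm[0])  # ~10% of broadband
                                                                 # ambient transmitted

def transmits(wavelength_nm):
    """True if a narrowband (e.g. laser) line falls inside a passband."""
    return any(lo <= wavelength_nm <= hi for lo, hi in passbands_nm)
```

Under these assumed bands, a 532 nm laser line is passed essentially in full while roughly 90% of broadband ambient light is absorbed, which is the effect the text describes.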

By the foregoing, a novel and unobvious multi-planar volumetric display system 10 and method of operation has been disclosed by way of the preferred embodiment. However, numerous modifications and substitutions may be had without departing from the spirit of the invention. For example, while the preferred embodiment discusses using planar optical elements such as flat panel liquid crystal displays, it is wholly within the purview of the invention to contemplate curved optical elements in the manner set forth above.

The MVD system 10 may be implemented using the apparatus and methods described in co-pending U.S. provisional patent application Ser. No. 60/082,442, filed Apr. 20, 1998, as well as using the apparatus and methods described in co-pending U.S. patent application Ser. No. 08/743,483, filed Nov. 4, 1996, which is a continuation-in-part of U.S. Pat. No. 5,572,375, which is a division of U.S. Pat. No. 5,090,789. The MVD system 10 may also be implemented using the apparatus and methods described in co-pending U.S. application Ser. No. 09/004,722, filed Jan. 8, 1998. Each of the above provisional and non-provisional patent applications and issued patents is incorporated herein by reference. Accordingly, the invention has been described by way of illustration rather than limitation.