


Title:
VIRTUAL HOLOGRAPHIC DISPLAY SYSTEM
Document Type and Number:
WIPO Patent Application WO/2017/124168
Kind Code:
A1
Abstract:
A system for displaying a virtual holographic image of a digitized object comprises a computing component and a display component. The computing component comprises a processor and a memory having encoded thereon program code for a rendering module that is executable by the processor to: render a digital 3D object model; capture multiple 2D images of the 3D object model using multiple virtual cameras positioned around the 3D object model; arrange the multiple 2D images into a composite 2D image; and output a video signal comprising the composite 2D image. The display component is communicative with the computing component to receive the video signal, and comprises a display for displaying the composite 2D image and multiple partially reflective and refractive display surfaces; each display surface is angled relative to the display and is positioned to reflect one of the multiple 2D images towards a viewing position. As a result of Pepper's Ghost effect, a simulated or virtual holographic image of the digital 3D object model is displayed.

Inventors:
ADHIA DHRUV (CA)
LI YAMIN (CA)
YANG VINCENT (CA)
Application Number:
PCT/CA2015/050437
Publication Date:
July 27, 2017
Filing Date:
May 13, 2015
Assignee:
H PLUS TECH LTD (CA)
International Classes:
G02B30/56; H04N13/04; H04N5/247; H04N21/6587; H04N21/81
Foreign References:
US20050088515A1 (2005-04-28)
US20110157667A1 (2011-06-30)
US7113183B1 (2006-09-26)
US20110216002A1 (2011-09-08)
Other References:
ABOOKASIS ET AL.: "Computer-generated holograms of three-dimensional objects synthesized from their multiple angular viewpoints", JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A, vol. 20, no. 8, August 2003 (2003-08-01), XP007905028
Attorney, Agent or Firm:
LEE, Brian et al. (CA)
Claims:

What is claimed is:

1. A system for displaying a virtual holographic image, comprising:

(a) a computer readable medium for a computing component, the computer readable medium having encoded thereon program code for a rendering module and a position tracking module, wherein the rendering module is executable by a processor of the computing component to:

(i) render a digital 3D object model;

(ii) capture multiple 2D images of the 3D object model using multiple virtual cameras positioned around the 3D object model;

(iii) arrange the multiple 2D images into a composite 2D image; and

(iv) output a video signal comprising the composite 2D image; and wherein the position tracking module is executable by the processor to:

(i) communicate with a magnetometer or gyroscope of the computing component and with a magnetometer or gyroscope of a handheld device in communication with the computing component;

(ii) determine a relative heading value from readings taken by the respective magnetometer or gyroscope of the computing component and handheld device; and

(iii) rotate the rendered 3D object model relative to the multiple virtual cameras when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model; and

(b) a display component communicative with the computing component to receive the video signal, and comprising a display for displaying the composite 2D image and multiple partially reflective and refractive display surfaces each angled relative to the display and each positioned to reflect one of the multiple 2D images towards a viewing position.

2. The system as claimed in claim 1 wherein the multiple 2D images comprise a front view, back view, left side view and a right side view of the 3D object model, and the display component comprises four partially reflective and refractive display surfaces each positioned to reflect one of the front view, back view, left side view, and right side view of the 3D object model.

3. The system as claimed in claim 2 wherein the display surfaces are triangular and are arranged to form a pyramid-shaped display structure.

4. The system as claimed in claim 3 wherein the rendering module further comprises program code executable by the processor to render a virtual stage and to place the 3D object model on the virtual stage, wherein the virtual stage corresponds to the dimensions of the pyramid-shaped display structure.

5. The system as claimed in claim 4 wherein the rendering module further comprises shader program code executable by the processor to apply texture to each of the multiple 2D views.

6. The system as claimed in claim 5 wherein the rendering module further comprises masking program code executable by the processor to: create a mask for each 2D image wherein each mask has a masked portion and an unmasked portion; place each 2D image onto the unmasked portion of a corresponding mask to create a masked 2D image; and arrange the multiple 2D images into a composite 2D view such that the unmasked portions of the masked 2D images do not overlap, and at least some of the masked portions of the masked 2D images overlap.

7. The system as claimed in claim 6 wherein the rendering module further comprises program code executable by the processor to capture an orthographic overhead image of the composite 2D view using a virtual orthographic overhead camera positioned above the composite 2D view, and to invert the captured orthographic overhead image to produce the composite 2D image.

8. The system as claimed in claim 1 wherein the position tracking module further comprises a dynamic perspective correction program code executable to rotate the 3D object model relative to the multiple virtual cameras and in a direction opposite to a direction of rotation of the handheld device when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model.

9. The system as claimed in claim 1 wherein the system further comprises an

external human-machine interface (HMI) input device, and a client computing device communicative with the HMI input device and the computing component and comprising a processor and a memory having encoded thereon program code for a multi-modal interaction with holographic object module that is executable by the client computing device processor to:

(a) read input data from the input device,

(b) associate the input data with a command stored in a library on the client computing device memory; and

(c) transmit the command to the computing component; wherein the command relates to an operation performed by the rendering module.

10. A computer-implemented method for displaying a virtual holographic image, comprising:

(a) rendering a digital 3D object model;

(b) communicating with a magnetometer or gyroscope of a computing component and with a magnetometer or gyroscope of a handheld device in communication with the computing component;

(c) determining a relative heading value from readings taken by the respective magnetometer or gyroscope of the computing component and handheld device;

(d) rotating the rendered 3D object model when the relative heading value is non-zero;

(e) capturing multiple 2D images of the 3D object model using multiple virtual cameras positioned around the 3D object model;

(f) arranging the multiple 2D images into a composite 2D image;

(g) outputting a video signal comprising the composite 2D image; and

(h) displaying the composite 2D image against multiple partially reflective and refractive display surfaces, wherein each display surface is angled relative to the display and positioned to reflect one of the multiple 2D images towards a viewing position, and is in a fixed position relative to the gyroscope or magnetometer of the computing component.

11. The method as claimed in claim 10 wherein the multiple 2D images comprise a front view, back view, left side view and a right side view of the 3D object model, and the four partially reflective and refractive display surfaces are each positioned to reflect one of the front view, back view, left side view, and right side view of the 3D object model.

12. The method as claimed in claim 11 wherein the display surfaces are triangular and are arranged to form a pyramid-shaped display structure.

13. The method as claimed in claim 12 further comprising rendering a virtual stage and placing the 3D object model on the virtual stage, wherein the virtual stage corresponds to the dimensions of the pyramid-shaped display structure.

14. The method as claimed in claim 13 further comprising applying texture to each of the multiple 2D views.

15. The method as claimed in claim 14 further comprising creating a mask for each 2D image wherein each mask has a masked portion and an unmasked portion; placing each 2D image onto the unmasked portion of a corresponding mask to create a masked 2D image; and arranging the multiple 2D images into a composite 2D view such that the unmasked portions of the masked 2D images do not overlap, and at least some of the masked portions of the masked 2D images overlap.

16. The method as claimed in claim 15 further comprises capturing an orthographic overhead image of the composite 2D view using a virtual orthographic overhead camera positioned above the composite 2D view, and inverting the captured orthographic overhead image to produce the composite 2D image.

17. The method as claimed in claim 10 further comprises determining a relative heading value between a handheld device and the display surfaces, and rotating the 3D object model relative to the multiple virtual cameras when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model.

18. The method as claimed in claim 17 further comprising rotating the 3D object model relative to the multiple virtual cameras and in a direction opposite to a direction of rotation of the handheld device when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model.

Description:
Virtual Holographic Display System

Field

This disclosure relates generally to a virtual holographic display system and method.

Background

Pepper's Ghost effect is an illusion technique that has been used for over a hundred years in a wide variety of applications, including theatre, amusement park rides and magic tricks. The technique involves a plate of glass (or Plexiglas or plastic film) placed between a "main room" and a "blue room" at an angle that reflects the view of the blue room towards a viewer looking at the main room. When the main room is brightly lit and the blue room is darkened, the reflected image cannot be seen. When the lighting in the blue room is increased, often with the main room lights dimming to make the effect more pronounced, the reflection becomes visible and the objects within the blue room seem to appear in thin air.

The blue room may be an identical mirror-image of the main room, so that its reflected image matches the main room; this approach is useful in making objects seem to appear or disappear. This illusion can also be used to make one object or person reflected in the mirror-image appear to morph into another behind the glass (or vice versa). The blue room may instead be painted black, with only light-colored objects in it; when light is cast on the blue room, only the light objects reflect the light and these objects appear as ghostly translucent images superimposed in the main room. This can be used to make objects appear to float in space.

Modern implementations of Pepper's Ghost effect have been found in teleprompters that reflect a speech or script into a line of sight of a speaker, and as part of special effects used in stage plays and concerts. However, no satisfactory computer-implemented application of Pepper's Ghost effect has been used to create a virtual holographic display of digitized images, which could be useful in a number of different applications such as interactive play, teaching, advertising, and communication.

Summary

According to one aspect of the invention, there is provided a system and a method for displaying a virtual holographic image of a digitized object. The system comprises a computing component and a display component. The computing component comprises a processor and a memory having encoded thereon program code for a rendering module that is executable by the processor to: render a digital 3D object model; capture multiple 2D images of the 3D object model using multiple virtual cameras positioned around the 3D object model; arrange the multiple 2D images into a composite 2D image; and output a video signal comprising the composite 2D image. The display component is communicative with the computing component to receive the video signal, and comprises a display for displaying the composite 2D image and multiple partially reflective and refractive display surfaces; each display surface is angled relative to the display and is positioned to reflect one of the multiple 2D images towards a viewing position. As a result of Pepper's Ghost effect, a simulated or virtual holographic image of the digital 3D object model is displayed.

The multiple 2D images can comprise a front view, back view, left side view and a right side view of the 3D object model. In such case, the display component comprises four partially reflective and refractive display surfaces that are each positioned to reflect one of the front view, back view, left side view, and right side view of the 3D object model to respective viewing positions. More particularly, the display surfaces can each be triangular and are arranged to form a pyramid-shaped display structure.

The rendering module can further comprise program code executable by the processor to render a virtual stage and to place the 3D object model on the virtual stage. The virtual stage corresponds to the dimensions of the pyramid-shaped display structure. The rendering module can further comprise shader program code executable by the processor to apply texture to each of the multiple 2D views. Further, the rendering module can comprise masking program code executable by the processor to: create a mask for each 2D image wherein each mask has a masked portion and an unmasked portion; place each 2D image onto the unmasked portion of a corresponding mask to create a masked 2D image; and arrange the multiple 2D images into a composite 2D view such that the unmasked portions of the masked 2D images do not overlap, and at least some of the masked portions of the masked 2D images overlap. The rendering module can further comprise program code executable by the processor to capture an orthographic overhead image of the composite 2D view using a virtual orthographic overhead camera positioned above the composite 2D view, and to invert the captured orthographic overhead image to produce the composite 2D image.

The computing component can further comprise at least one of a magnetometer and a gyroscope, in which case the system further comprises a handheld device that is communicative with the computing component and which has a processor, a memory and at least one of a magnetometer and a gyroscope. The memory of the handheld device or the computing component comprises program code for a position tracking module that is executable to: calculate a relative heading value from magnetometer or gyroscope readings from the computing component and handheld device; and rotate the 3D object model relative to the multiple virtual cameras when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model. The position tracking module can further comprise a dynamic perspective correction program code executable to rotate the 3D object model relative to the multiple virtual cameras and in a direction opposite to a direction of rotation of the handheld device when the relative heading value is non-zero, thereby causing the multiple virtual cameras to capture different 2D views of the 3D object model.

The system can further comprise an external human-machine interface (HMI) input device, and a client computing device communicative with the HMI input device and the computing component. The client computing device comprises a processor and a memory having encoded thereon program code for a multi-modal interaction with holographic object module that is executable by the client computing device processor to: read input data from the input device, associate the input data with a command stored in a library on the client computing device memory; and transmit the command to the computing component, wherein the command relates to an operation performed by the rendering module.

The aforementioned method is a computer-implemented method for displaying a virtual holographic image, and comprises: rendering a digital 3D object model; capturing multiple 2D images of the 3D object model using multiple virtual cameras positioned around the 3D object model; arranging the multiple 2D images into a composite 2D image; outputting a video signal comprising the composite 2D image; and displaying the composite 2D image against multiple partially reflective and refractive display surfaces, wherein each display surface is angled relative to the display and positioned to reflect one of the multiple 2D images towards a viewing position.

Brief Description of Figures

Figure 1 is a schematic block diagram of a virtual holographic display system comprising a display apparatus, a computing device, and a human machine interface (HMI) device according to one embodiment.

Figure 2 is a top perspective view of the display apparatus according to one embodiment.

Figure 3 is a flowchart illustrating steps performed by a rendering module of a holographic display program stored on a memory of the computing device according to one embodiment.

Figure 4 is a perspective view of a 3D digitized model of an object to be displayed by the system.

Figure 5 is an illustration of a mask for a 2D view of the object used by the rendering module.

Figure 6 is an illustration of masked 2D front, back, left side and right side views of the object.

Figure 7 is an illustration of the masked 2D views assembled on a projection plane by the rendering module.

Figure 8 is a perspective view of an overhead virtual camera positioned over the projection plane to capture an overhead orthographic view of the projection plane.

Figure 9 is a view of an image output by a projection panel of the display device.

Figure 10 is a top perspective view of a display structure of the display apparatus, for displaying the image output by the projection panel.

Figure 11 is a schematic block diagram of a networking structure of the system.

Figure 12 is a flowchart illustrating steps performed by a position tracking module of the holographic display program according to another embodiment.

Figures 13(a) and 13(b) are schematic illustrations of a 2D view of the object without and with a dynamic perspective correction applied thereto.

Figures 14(a) and (b) are flowcharts illustrating steps performed by two different embodiments of a multi-modal interaction with holographic object module of the holographic display program.

Detailed Description of Embodiments of the Invention

Referring to Figure 1, embodiments of a virtual holographic display system 2 are described herein, and relate generally to a system for displaying two dimensional (2D) views of a three dimensional (3D) digital model of an object on multiple display surfaces using Pepper's Ghost effect, thereby creating a simulated or "virtual" hologram of the object. The system 2 comprises a display component 4 and a primary computing component 6 communicative with the display component 4, and optionally one or more human-machine interface (HMI) input devices 8(a), 8(b) communicative with the primary computing component 6 either directly or via a client computing device 9. The display component 4 can be a display apparatus comprising a four-sided, pyramid-shaped display structure 10 and a projection panel 11 facing the top of the display structure 10 and operable to display images on each of the four sides of the display structure 10. The computing component 6 comprises a processor and a memory having encoded thereon program code including a rendering module that converts the 3D model of the object into four 2D views of that object, creates a build comprising the 2D views arranged on a projection plane, and exports the build into an application program executable by the processor. In some embodiments (not shown), the computing component can be integrated with the display component in a single housing. In other embodiments, the display and computing components 4, 6 are separate, and the computing component 6 can be a tablet computing device ("tablet") communicatively connected to the display apparatus 4. When the tablet computing device 6 executes the application program, the projection plane is sent to the projection panel 11, which in turn projects the four 2D views onto the four surfaces of the display structure 10, thereby creating the virtual hologram of that object ("holographic object").

Some embodiments include a handheld HMI input device 8(a) that comprises a magnetometer and a computing device 6 that also comprises a magnetometer. In these embodiments, the program code further comprises a position tracking module that causes the input device to control the orientation of the holographic object, by using the magnetometers in the input device 8(a) and the computing device 6 to determine the orientation of the input device relative to the computing device. The position tracking module can also include a dynamic perspective correction algorithm which causes the computing device 6 to rotate the views of the holographic object in a direction opposite to a rotation of the input device 8(a), thereby creating a visual effect, to a user holding the input device 8(a) and moving around the display apparatus 4, that the user is also moving around the holographic object. Some embodiments include an HMI input device 8(b) that can be used to control certain aspects of the holographic object display; these input devices 8(b) include a camera-based gesture input device (e.g. Kinect™, Leap Motion™), a microphone-based voice input device, and an Emotiv™ brain-sensing input device. In these embodiments, the program code further comprises a multi-modal interaction with holographic object module that receives and processes input signals from the input device(s) and controls the display of the holographic object based on these input signals.

Referring now to Figure 2, one embodiment of the display apparatus 4 comprises a frame 12 having a base 14 and a top 16 interconnected by four legs at the corners of the base 14 and top 16. The pyramid-shaped display structure 10 is mounted to the base 14 inside the frame 12, and has four display surfaces 22. The projection panel 11 is mounted in the top 16 and faces down towards the top end of the display structure 10. Input ports 26 comprising a data input port (e.g. Apple Lightning™ port) and a video input port (e.g. HDMI) are mounted on the frame 12 and communicative with the projection panel 11. The display structure 10 (see Figure 10) comprises a front face 22(a), an opposed back face 22(b), and two opposed side faces (namely a left side face 22(c) and right side face 22(d)) extending between the front and back faces 22(a), 22(b). Each face 22(a)-(d) is tapered and narrower at its top end than at its bottom end. In the embodiment shown in Figure 2, all of the faces are triangular and have the same dimensions, thereby forming a square base. The faces of the display structure 10 comprise a transparent or semi-transparent material, for example glass, polycarbonate glass, Plexiglas™, or other types of transparent or semi-transparent thermoplastics. A semi-transparent film may be laid on the faces of the display structure. The semi-transparent film or semi-transparent material of the faces may be chosen for its ability to allow partial passage of white light therethrough whilst some of the white light is absorbed, which may enhance the brightness of an image displayed on the display structure 10. In other words, the faces 22(a)-(d) are both partially reflective and partially refractive. In some embodiments up to 95% of the white light projected onto the display structure 10 may be absorbed by the semi-transparent film or semi-transparent material. In one exemplary embodiment, the display structure comprises coated polycarbonate glass with a refractivity between 28-35% and a reflection rate between 65-72%.

The projection panel 11 is an LED-backlit LCD display monitor that has a square aspect ratio to conform to the square base of the display structure 10. Alternatively, other display monitors using different display technologies and aspect ratios can be used and are commercially available in the art. The projection panel 11 is communicative with the input ports 26 by data cables (not shown).

The embodiment of the display apparatus 4 shown in Figure 2 does not include a processor and memory, and relies on an external computing device such as the tablet computing device 6 to generate a holographic image for display by the display apparatus 4. In an alternative embodiment (not shown), the display apparatus 4 can be provided with a processor and a memory that contains the program code including the rendering module, position tracking module, and multi-modal interaction with holographic object module.

Referring now to Figures 3 to 10, the rendering module comprises steps and instructions executable by the processor of the tablet to cause a 3D digitized model of an object ("3D object model") stored in the memory of the tablet 6 to be projected onto the display surfaces 22 of the display apparatus 4. Figure 3 shows steps of a method that is performed by the processor when the program code is executed. The rendering module first renders the 3D object model using a 3D engine, such as Unity™, Unreal™ or other commercially available 3D gaming engines (step 100). The rendering module also renders a virtual stage using the 3D engine, wherein the virtual stage corresponds to the dimensions of the pyramidal display structure (step 105). Then, the 3D object model is placed at the centre of the virtual stage (step 110). Then, the rendering module uses four virtual cameras to capture 2D front, rear, left and right side views of the 3D object model (step 115); virtual cameras provided by the software development kit of the 3D engine can be used. Then, a shader operation is applied to the 2D views to provide texture to the 2D views (step 120). Then, each textured image of the object is placed onto a respective rectangular 2D plane (step 125). As shown in Figure 4, the Unity™ game engine is used to render the 3D object model and virtual stage and to provide the front, back, left side and right side virtual cameras. The virtual cameras are perspective cameras that render 2D views of the 3D object model with their perspectives intact. The Unity™ game engine also provides a shader tool to execute the shader operation as well as the means for placing the textured 2D views onto 2D planes.
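By way of illustration only, steps 100 to 125 might be sketched in Unity™ C# as follows, assuming a scene that already contains the 3D object model and virtual stage; the class name, camera distance and render texture size are illustrative assumptions, not the actual implementation.

using UnityEngine;

public class CaptureRig : MonoBehaviour
{
    public Transform objectModel;    // the rendered 3D object model (step 100)
    public Transform virtualStage;   // virtual stage sized to the pyramidal display structure (step 105)
    public float viewDistance = 3f;  // assumed distance of each virtual camera from the model
    public RenderTexture[] viewTextures = new RenderTexture[4];

    void Start()
    {
        // Step 110: place the 3D object model at the centre of the virtual stage.
        objectModel.position = virtualStage.position;

        // Step 115: four perspective virtual cameras at 90-degree increments capture
        // front, right, back and left 2D views of the model into render textures.
        string[] names = { "Front", "Right", "Back", "Left" };
        for (int i = 0; i < 4; i++)
        {
            Camera cam = new GameObject(names[i] + "Camera").AddComponent<Camera>();
            float yaw = 90f * i;
            cam.transform.position = objectModel.position +
                Quaternion.Euler(0f, yaw, 0f) * new Vector3(0f, 0f, -viewDistance);
            cam.transform.LookAt(objectModel);
            viewTextures[i] = new RenderTexture(1024, 1024, 24);
            cam.targetTexture = viewTextures[i];   // steps 120-125 texture and place these views
        }
    }
}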

Referring to Figure 5, the method then creates a mask for each 2D view that comprises an unmasked central triangular portion with a positive opacity, and a pair of adjacent masked triangular portions with zero opacity (i.e. transparent); the 2D view of the object is placed in the unmasked portion (step 130).

As shown in Figure 6, each mask is placed over one of the 2D views such that the object lies entirely within the unmasked triangular portion, with the top of the object facing the top of the unmasked triangular portion. Referring to Figure 7, each masked 2D view is then placed on a projection plane such that the masked portions of each 2D view overlap with the masked portions of the adjacent masked 2D images, but the unmasked portions of each 2D view do not overlap (step 135). Because each masked portion has been assigned an opacity of zero, the masked portions are in effect invisible on the projection plane. The resulting projection plane features each of the four 2D views of the object aligned at 90-degree increments around the centre of the projection plane, thereby producing a multi-view 2D composite image. More particularly, this multi-view image comprises a front view, a rear view, a left side view and an opposed right side view of the object model. The front and back views are perpendicular to the left and right side views such that the views of the multi-view composite image form a right-angled cross.
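A minimal sketch of step 135 follows, assuming the triangular masking of step 130 has already been applied to each captured view (for example through the shader operation) so that the masked regions are fully transparent; the quad primitives, material and offset value are illustrative assumptions, not the actual build.

using UnityEngine;

public class CompositePlane : MonoBehaviour
{
    public Texture[] maskedViews;   // masked front, right, back, left 2D views
    public float offset = 0.5f;     // assumed distance of each view from the plane centre

    void Start()
    {
        // Step 135: lay the four masked views on the projection plane at 90-degree
        // increments so their unmasked triangles form a right-angled cross and their
        // transparent masked corners overlap invisibly.
        for (int i = 0; i < 4; i++)
        {
            GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
            quad.transform.parent = transform;   // the projection plane
            float yaw = 90f * i;
            quad.transform.localRotation = Quaternion.Euler(90f, yaw, 0f);
            quad.transform.localPosition = Quaternion.Euler(0f, yaw, 0f) * new Vector3(0f, 0f, offset);

            Material mat = new Material(Shader.Find("Unlit/Transparent"));
            mat.mainTexture = maskedViews[i];
            quad.GetComponent<Renderer>().material = mat;
        }
    }
}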

Referring to Figure 8, an orthographic overhead virtual camera is then used to capture an orthographic overhead view of the shaded multi-view composite image (step 140). Again, the Unity™ game engine can be utilized to provide this overhead virtual camera. A mirror script is executed to invert the X axis of the overhead virtual camera, thereby causing the overhead virtual camera to output a mirrored image (step 145) (which will be reflected back to the original orientation on the display structure 10). A "build" is then exported into an application file ("app") that can be executed by the tablet computing device 6 (step 150).
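The mirror script of step 145 can be sketched as follows, using a projection-matrix flip that is one common way to mirror a Unity™ camera; this particular technique and the class name are assumptions, not necessarily those of the actual build.

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class MirroredOverheadCamera : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.orthographic = true;                              // step 140: overhead orthographic view
        transform.rotation = Quaternion.Euler(90f, 0f, 0f);   // look straight down at the composite
    }

    void OnPreCull()
    {
        // Step 145: flip the X axis of the projection so the rendered frame is mirrored;
        // the pyramid surfaces reflect it back to the correct orientation.
        Camera cam = GetComponent<Camera>();
        cam.ResetProjectionMatrix();
        cam.projectionMatrix = cam.projectionMatrix * Matrix4x4.Scale(new Vector3(-1f, 1f, 1f));
    }

    void OnPreRender()  { GL.invertCulling = true;  }   // keep triangle winding correct while mirrored
    void OnPostRender() { GL.invertCulling = false; }
}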

The app is loaded onto the tablet computing device 6, which is communicatively coupled to the display apparatus input port 26. When the app is executed, the shaded multi-view composite image shown in Figure 9 is output from the tablet computing device 6 as a video stream to the projection panel 11 via the input port 26, and the projection panel 11 projects each 2D view onto a respective surface 22(a)-(d) of the display structure 10 (step 155). The app can scale the output image to one of the compatible video output resolutions of the tablet computing device 6 if necessary. Referring to Figure 10, the projection panel 11 is aligned with the display structure 10 such that the front view is projected onto the front face 22(a) of the display structure 10, the back view is projected onto the back face 22(b), the left side view is projected onto the left side face 22(c), and the right side view is projected onto the right side face 22(d). The resulting virtual 3D holographic image of the object can be seen from all sides of the pyramidal display structure 10; therefore somebody viewing the virtual 3D holographic image from the front of the display structure 10 would see the front of the object, and as the viewer walks clockwise around the display structure 10, the viewer would respectively see the right side, the back, the left side and then the front of the object. Utilizing the phenomenon known as Pepper's Ghost effect, these multiple views of the object appear to be "floating" in space when projected onto the semi-transparent pyramid display surfaces 22(a)-(d), thus giving the viewer the impression of seeing a 3D hologram of the object inside the display structure. The Unity™ game engine or other 3D modelling software can be used to animate the 3D model of the object, thus causing the virtual cameras to capture 2D views of the moving object, and causing the projected images on the display structure 10 to also move such that the virtual 3D holographic image of the object also appears to be moving.

Referring now to Figures 11 to 14, the position tracking module is provided to allow a handheld input device 8(a) to control the orientation of the holographic object displayed by the display apparatus 4, using magnetometers (not shown) in the input device 8(a) and the tablet computing device 6 to determine the orientation of the handheld device 8(a) relative to the orientation of the tablet computing device 6, and then adjusting the orientation of the displayed holographic object when the orientation of the handheld device 8(a) is changed. The software programming can also include a "dynamic perspective correction" algorithm which allows a user holding the handheld device 8(a) to "walk around" the holographic object, by causing the tablet computing device 6 to rotate the holographic object in the direction opposite to the rotation of the handheld device 8(a).

The handheld device 8(a) can be a smartphone such as an Apple iPhone™ or any handheld device with a magnetometer and wireless communication means, such as Wi-Fi or Bluetooth. The tablet device 6 should also have a magnetometer and a wireless communications means, such that the handheld device 8(a) and tablet device 6 can communicate wirelessly; a suitable such tablet computing device is an Apple iPad™. As is well known in the art, the magnetometer in each of the handheld and tablet computing devices 8(a), 6 is an instrument used for measurement of magnetic forces, especially the Earth's magnetic field, and these measurements can be used to determine compass directions. More particularly, the magnetometer in each of the handheld and tablet computing devices 8(a), 6 can be used to determine the orientation of the device, as the magnetometer measures the strength of the magnetic field surrounding the device. In the absence of any strong local fields, these measurements will be of the ambient magnetic field of the Earth, allowing each device 8(a), 6 to determine its "heading" (the direction the top of the device faces with respect to the geomagnetic North Pole) and act as a digital compass. The heading is measured in degrees from 0 to 359.99, where 0 is north. Referring to Figure 11, third party networking protocols such as Open Sound Control (OSC) are available to one skilled in the art to develop a networking protocol between the handheld device 8(a) and tablet computing device 6. The networking protocol can be built using networking tools provided by the Unity™ game engine, wherein the tablet computing device 6 operates as a server, and the handheld device 8(a) operates as a client. The server can be a dedicated host machine used by all clients, or simply a "player" machine running a "game" (the client) but also acting as a server for other players. Once the server has been established and a client has connected to the server, the two computing devices 8(a), 6 can exchange data as demanded by "game play". NetworkView 30 is a component provided by the Unity™ game engine and can be used to send data across the network. The game objects that need to be networked require a NetworkView component. NetworkView allows sending data using RPC (Remote Procedure Calls) 32, which is a message channel between the server and client, and which is used to invoke functions on other computers across the network.
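By way of illustration only, the RPC message channel described above might be sketched as follows, using the legacy networking API of the Unity™ game engine that was current at the time of filing; the class, method names, port number and payload are illustrative assumptions, not the actual implementation.

using UnityEngine;

public class HeadingLink : MonoBehaviour
{
    // On the tablet (server): listen for a single client connection.
    public void StartServer()
    {
        Network.InitializeServer(1, 25000, false);
    }

    // On the handheld (client): connect to the tablet's address.
    public void StartClient(string serverIp)
    {
        Network.Connect(serverIp, 25000);
    }

    // Handheld to tablet: send the computed relative heading and Position value.
    public void SendHeading(float relativeHeading, string position)
    {
        GetComponent<NetworkView>().RPC("ReceiveHeading", RPCMode.Server, relativeHeading, position);
    }

    [RPC]
    void ReceiveHeading(float relativeHeading, string position)
    {
        // The rendering module on the tablet uses this value to rotate the 3D object model.
        Debug.Log("Relative heading " + relativeHeading + " received from the " + position);
    }
}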

Referring now to Figure 12, the position tracking module comprises program code that, when executed by the processor, carries out the following steps. First, a data connection is established (e.g. via Wi-Fi) between the tablet computing device 6 and the handheld device 8(a), and the respective magnetometers in each device 6, 8(a) are enabled (step 200). Then, the magnetometer in the tablet computing device 6 is read to obtain a tablet device heading value (step 205). The tablet device heading value is transmitted to the handheld device 8(a); the processor of the handheld device 8(a) receives the tablet device heading value, reads the magnetometer on the handheld device 8(a) to obtain a handheld device heading value, and subtracts the two heading values to obtain a relative heading value (step 210). Then the position tracking module checks whether the relative heading value is less than 0 (step 215). If yes, a value of 360 is added to the relative heading value to get a normalized value in the range 0 to 359.99 (step 220). If no, the value is already normalized and no action is taken. The normalized heading value can then be used to determine the handheld device's position relative to the tablet computing device 6. A variable, Position, is declared (step 225). If the normalized heading value is in the range [0, 45] or [315, 360], the Position variable is declared to be Front. If the normalized heading value is in the range [45, 135], the Position variable is declared to be Left. If the normalized heading value is in the range [225, 315], the Position variable is declared to be Right. If the normalized heading value is in the range [135, 225], the Position variable is declared to be Back. Then, the relative heading value and the Position value are transmitted back to the tablet computing device 6 via Unity™ RPCs (step 230). The normalized heading value can be used as the rotation angle for the 3D object model, so that if a user holding the handheld device 8(a) moves around the display apparatus 4, the 3D object model in the tablet computing device 6 will also rotate, following the handheld device's movement. In the Unity™ game engine, the rotation is expressed as Euler angles in degrees: the x, y, and z angles represent a rotation of z degrees around the z axis, x degrees around the x axis, and y degrees around the y axis.
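The heading arithmetic of steps 210 to 225 can be expressed compactly; the following is a minimal sketch with illustrative names, assuming headings are reported as degrees in the range 0 to 359.99.

public static class HeadingMath
{
    // Step 210: relative heading = handheld heading minus tablet heading,
    // normalized to the range 0 to 359.99 (steps 215-220).
    public static float RelativeHeading(float handheldHeading, float tabletHeading)
    {
        float relative = handheldHeading - tabletHeading;
        if (relative < 0f) relative += 360f;
        return relative;
    }

    // Step 225: map the normalized heading to a coarse Position value.
    public static string Position(float heading)
    {
        if (heading <= 45f || heading >= 315f) return "Front";
        if (heading <= 135f) return "Left";
        if (heading < 225f) return "Back";
        return "Right";
    }
}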

To make 3D object models follow the user, the position tracking module includes program code which uses the relative heading values as the y component of the 3D object model's rotation value, and makes the x and z components of the 3D object model's rotation equal zero. This program code will make 3D object models only rotate according to the change in relative heading value and the declared Position variable (step 235).

void Update() { transform.eulerAngles = new Vector3(0f, relativeHeading, 0f); } // rotate only about the Y axis

Alternatively, the position tracking module can use the built-in gyroscope of the tablet computing device or handheld device to enable rotation of the 3D object model. The gyroscope is a sensor built into some handheld devices to sense rotation around three perpendicular axes: pitch, yaw, and roll. In summary, when the magnetometer readings are the same, the difference is zero, and the position tracking module determines that the tablet and handheld devices 6, 8(a) are in the same orientation and that the holographic image should be oriented in its default position (e.g. front view facing the front of the display apparatus 10). When the magnetometer readings are different, the position tracking module calculates the difference between the two readings, and determines an angular rotation of the holographic object that is a function of the difference in the magnetometer readings. In one embodiment, the angular rotation can be directly proportional to the angular difference between the tablet and handheld device orientations, e.g. a 90 degree difference between the devices results in a 90 degree rotation of the holographic object. In another embodiment, the program code further comprises a Dynamic Perspective Correction algorithm, which causes the holographic object to rotate opposite to the direction of rotation of the handheld device (both rotations about a vertical Y axis). In other words, the Dynamic Perspective Correction algorithm causes the perspective of the 3D object model shown in the display apparatus 4 to shift, thereby revealing new angles on what would have otherwise been a flat image.

Referring to Figure 13, the Dynamic Perspective Correction algorithm operates by multiplying a three-dimensional vector (0, -1, 0) with the 3D object model's original rotational component, using the following equation:

3D model's rotation = Vector3.Scale(handheld device rotation, (0, -1, 0))

wherein Vector3.Scale is a function that multiplies two vectors component-wise; in this case the handheld device's rotation vector is multiplied by the vector (0, -1, 0).

Every component of the result is the corresponding component of the handheld device's rotation multiplied by the same component of the vector (0, -1, 0). The result is that the 3D object model will rotate opposite to the direction of the handheld device's rotation about the Y axis.
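Expressed as a sketch only (with handheldEuler assumed to be the handheld device's rotation, in Euler degrees, received over the network), the correction reduces to a single component-wise multiplication applied each frame:

using UnityEngine;

public class DynamicPerspectiveCorrection : MonoBehaviour
{
    public Vector3 handheldEuler;   // handheld device rotation received from the position tracking module

    void Update()
    {
        // Component-wise multiply by (0, -1, 0): only the Y rotation survives, with its
        // sign flipped, so the 3D object model rotates opposite to the handheld device.
        transform.eulerAngles = Vector3.Scale(handheldEuler, new Vector3(0f, -1f, 0f));
    }
}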

In effect, execution of the Dynamic Perspective Correction algorithm creates an illusion that a user holding the handheld device 8(a) and walking around the display apparatus 4 will also "walk around" the holographic object. This effect can be seen in the difference between Figures 13(a) and 13(b). In Figure 13(a) the Dynamic Perspective Correction algorithm is not applied to the 3D object model, and thus the same front 2D view of the object model is displayed even after the handheld device 8(a) is rotated. In Figure 13(b), the Dynamic Perspective Correction algorithm is applied to the object model, and thus the "front view" of the object model shows a counter-clockwise rotated view of the object model after the handheld device 8(a) has been rotated in a clockwise direction.

Referring now to Figures 14(a) and 14(b), the multi-modal interaction module is provided to receive and process input signals from the input device 8(b) and to control the display of the holographic object model based on these input signals. As noted above, the input device 8(b) can be used to control certain aspects of the holographic object display; such input devices include a camera-based gesture input device (e.g. Kinect™, Leap Motion™), a microphone-based voice input device, and an Emotiv™ brain-sensing input device. Each input device 8(b) should have an API and an SDK, and the multi-modal interaction with holographic object module is programmed to interact with the input device API and use resources provided by the input device SDK, as will be described in further detail below.

In one embodiment and as shown in Figure 14(a), the external input device 8(b) is a Leap Motion™, and the tablet computing device 6 is an Apple iPad™. Because the present embodiment of the display apparatus 4 does not have its own processor and memory, the input device 8(b) is connected to a separate client computing device 9 (e.g. a laptop) by a data cable (e.g. USB); the client computing device 9 acts as a bridging device to communicate with the tablet computing device 6 ("server computing device") that is connected to the display apparatus 4 via an HDMI cable. The client computing device 9 can communicate with the tablet computing device 6 wirelessly, such as by Wi-Fi or Bluetooth or some other communications protocol. In this embodiment, the client computing device 9 contains and executes the Multimodal Interaction with Holographic Object module, and the tablet computing device 6 contains and executes the Rendering module (and optionally the Position Tracking module) and is further programmed to receive holographic object control commands from the Multimodal Interaction with Holographic Object module. The Leap Motion™ input device 8(b) includes a software development kit (SDK) which can be used to program an application that is executed on the client computing device 9 to receive and convert raw image data captured by the Leap Motion™ input device 8(b) into gesture data using the Leap Motion™ API. This gesture data is transmitted to the tablet computing device 6. The Rendering module on the tablet computing device 6 further includes programming to associate this gesture data with one or more control commands in a library of commands stored on the tablet computing device 6. The control commands can be actions relating to the display of the object model, such as pinch inwards = decrease size of object model, or pinch outwards = increase size of object model.

Instead of a Leap Motion™, the input device 8(b) can be another gesture capture camera that comprises a video camera and a processor programmed to convert video images of a hand captured by the camera into skeleton frame data. The Multimodal Interaction with Holographic Object module is programmed to read the skeleton frame data via the input device API, and can determine whether the video captured by the camera represents a hand gesture by determining whether the captured video corresponds to one of the gestures in a gesture library stored on the computing device. The gesture library can be provided by the input device SDK and can include, for example, a "Left Swipe" and a "Right Swipe". Optionally, the Multimodal Interaction with Holographic Object module can be programmed to create custom gestures that correspond to a certain hand movement, and these custom gestures can be stored in the gesture library. Once the Multimodal Interaction with Holographic Object module reads the input device video input and determines the gesture that corresponds to the video input, the gesture is transmitted from the client computing device 9 to the tablet computing device 6. The tablet app on the tablet computing device 6 is programmed with a library of gesture commands that each correspond to a gesture. When the tablet computing device 6 receives a gesture from the client computing device 9, the tablet app will associate the gesture with the corresponding gesture command. For example, a "left swipe" gesture can be associated with a "swipe left" command that causes the tablet app to move the holographic object in a leftwards direction.
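As an illustration only, the association between received gestures and commands on the tablet app could be sketched as a simple lookup; the command names, movement and scaling factors, and the dictionary structure are assumptions used to show the pattern, not the actual command library.

using System;
using System.Collections.Generic;
using UnityEngine;

public class GestureCommandLibrary : MonoBehaviour
{
    public Transform holographicObject;
    Dictionary<string, Action> commands;

    void Awake()
    {
        // Library of gesture commands; each entry maps a gesture name to an action
        // performed on the displayed holographic object.
        commands = new Dictionary<string, Action>
        {
            { "left swipe",     () => holographicObject.Translate(Vector3.left * 0.1f) },
            { "right swipe",    () => holographicObject.Translate(Vector3.right * 0.1f) },
            { "pinch inwards",  () => holographicObject.localScale *= 0.9f },
            { "pinch outwards", () => holographicObject.localScale *= 1.1f },
        };
    }

    // Called when a gesture name is received from the client computing device.
    public void OnGestureReceived(string gesture)
    {
        Action command;
        if (commands.TryGetValue(gesture, out command)) command();
    }
}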

In another embodiment and as shown in Figure 14(b), the input device 8(b) is connected to a computing device 6 that is directly connected to the display apparatus 4 (e.g. via an HDMI cable), in which case the computing device 6 contains and executes both the Multimodal Interaction with Holographic Object module and the Rendering module (and optionally the Position Tracking module).

While particular embodiments have been described in the foregoing, it is to be understood that other embodiments are possible and are intended to be included herein. It will be clear to any person skilled in the art that modifications of and adjustments to the foregoing embodiments, not shown, are possible.