Title:
AUGMENTED REALITY COLLABORATION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2022/061037
Kind Code:
A1
Abstract:
A system comprising: a user device, comprising: sensors configured to sense data related to a physical environment of the user device, displays; hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the hardware processors to: place a virtual object in a 3D scene displayed by the user device in a virtual location that corresponds to a physical location in the physical environment of the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.

Inventors:
BAR-ZEEV AVI (US)
TYURIN ALEXANDER (US)
WRIGHT GERALD (US)
Application Number:
PCT/US2021/050746
Publication Date:
March 24, 2022
Filing Date:
September 16, 2021
Assignee:
META VIEW INC (US)
International Classes:
G06F3/01; G01S5/16; G06T15/20; G06T19/00
Foreign References:
US20170076499A12017-03-16
US20120210254A12012-08-16
US20160259404A12016-09-08
Attorney, Agent or Firm:
DUNNING, Jr., Richard, A. (US)
Claims

What is claimed is:

1 . A system comprising: a system configured to display virtual content, comprising: a user device, comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: place a virtual object in a 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the user device responsive to user input received by the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.

2. The system of claim 1 , further comprising: a second virtual content system, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a second user device, wherein the second user device comprises: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by the one or more displays of the second user device in a predetermined location relative to the structure.

3. The system of claim 2, wherein: the first virtual content system is remote from the second virtual content system.

4. The system of claim 2, wherein the first virtual content system is proximate to the second virtual content system.

5. The system of claim 3, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on (i) the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device.

6. The system of claim 3, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the second user device; and add the virtual proxy to the first image based on (i) the pose of the second user device with respect to the structure, and (ii) the pose of the placed virtual object with respect to the structure, wherein the image is projected by the one or more displays of the first user device.

7. The system of claim 2, wherein: the plurality of radiation emitters is configured to emit infrared light; the one or more sensors on the second user device are configured to sense the output of the plurality of radiation emitters; and the instructions are further executable by the one or more hardware processors in the second user device to determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters.

8. The system of claim 1, wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content, and modifying the virtual content.

9. The system of claim 2, wherein the second user device may be worn on the head of a user.

10. The system of claim 1, further comprising: a handheld electronic device; wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the handheld electronic device, at least one of: selecting the virtual content, and modifying the virtual content.

11. The system of claim 2, wherein the second user device further comprises at least one optical element which facilitates viewing of the second image integrated with a view of the physical environment.

12. The system of claim 2, wherein the structure is attached to a physical object.

13. The system of claim 1 , comprising: a second system configured to display virtual content, comprising: a second user device, comprising: a second set of one or more sensors configured to sense data related to a physical environment of the second user device, and a second set of one or more displays; wherein the instructions are further executable by the one or more hardware processors to: place a second virtual object in 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the second user device responsive to user input received by the second user device, determine a pose of the second user device with respect to the physical location in the physical environment of the second user device, and generate a second image of the virtual content based on the pose of the second user device with respect to the placed virtual object, wherein the second image of the virtual content is displayed by the one or more displays of the second user device in a predetermined location relative to the placed second virtual object.

14. The system of claim 13, wherein: the first virtual content system is remote from the second virtual content system.

15. The system of claim 13, wherein the first virtual content system is proximate to the second virtual content system.

16. A non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for a user device in a system configured to display virtual content: sense a representation of a physical environment of the user device; place a virtual object in a 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the user device responsive to user input received by the user device; determine a pose of the user device with respect to the physical location in the physical environment of the user device ; and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the second user device.

17. The non-transitory machine-readable storage medium of claim 16, wherein the instructions are further executable by the one or more hardware processors to, for a second user device in a second system configured to display the virtual content: sense outputs of a plurality of radiation emitters arranged in a structure in a predetermined pattern; determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by one or more displays of the second user device in a predetermined location relative to the structure.

18. The non-transitory machine-readable storage medium of claim 17, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device.

19. The non-transitory machine-readable storage medium of claim 17, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the second user device; and add the virtual proxy to the first image based on the pose of the second user device with respect to the structure, wherein the image is displayed by the one or more displays of the first user device.

20. The non-transitory machine-readable storage medium of claim 17, wherein: the plurality of radiation emitters is configured to emit infrared light; the instructions are further executable by the one or more hardware processors of the second user device to: sense the output of the plurality of radiation emitters, and determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters.

21. The non-transitory machine-readable storage medium of claim 16, wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content; and modifying the virtual content.

22. The non-transitory machine-readable storage medium of claim 17, wherein the second user device comprises at least one optical element, which facilitates viewing of the second image integrated with the second user’s view of the physical environment.

AMENDED CLAIMS received by the International Bureau on 15 December 2021 (15.12.2021)

1. A system comprising: a virtual content system configured to display virtual content, comprising: a first user device configured to display virtual content, comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: generate an image representing a physical environment of the first user device, place virtual content in a location in the image corresponding to a physical location in the physical environment of the first user device responsive to user input received by the first user device, determine a pose of the first user device with respect to the physical location based on the data sensed by the one or more sensors, generate an image of the virtual content based on the pose of the first user device with respect to the physical location, generate a virtual proxy of a second user device that is viewing a second image of the virtual content in the second user device, with a position of the virtual proxy based on a pose of the second user device with respect to the placed virtual content, add the virtual proxy to the image, and render the image of the virtual content and the virtual proxy by the one or more displays of the first user device.

2. The system of claim 1, further comprising: a second virtual content system, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and the second user device, wherein the second user device comprises: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by the one or more displays of the second user device in a predetermined location relative to the structure.

3. The system of claim 2, wherein: the first user device is remote from the second user device.

4. The system of claim 2, wherein the first user device is proximate to the second user device.

5. The system of claim 3, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on (i) the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device.

6. (Canceled)

7. The system of claim 2, wherein: the plurality of radiation emitters is configured to emit infrared light; the one or more sensors on the second user device are configured to sense the output of the plurality of radiation emitters; and the instructions are further executable by the one or more hardware processors in the second user device to determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters.

8. The system of claim 1, wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content, and modifying the virtual content.

9. The system of claim 2, wherein the second user device may be worn on the head of a user.

10. The system of claim 1, further comprising: a handheld electronic device; wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the handheld electronic device, at least one of: selecting the virtual content, and modifying the virtual content.

11. The system of claim 2, wherein the second user device further comprises at least one optical element which facilitates viewing of the second image integrated with a view of the physical environment.

12. The system of claim 2, wherein the structure is attached to a physical object.

13. The system of claim 1 , comprising: a second system configured to display virtual content, comprising: the second user device, comprising: a second set of one or more sensors configured to sense data related to a physical environment of the second user device, and a second set of one or more displays; wherein the instructions are further executable by the one or more hardware processors to: place second virtual content displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the second user device responsive to user input received by the second user device, determine a pose of the second user device with respect to the physical location in the physical environment of the second user device, and generate a second image of the virtual content based on the pose of the second user device with respect to the placed virtual object, wherein the second image of the virtual content is displayed by the one or more displays of the second user device in a predetermined location relative to the placed second virtual object.

14. The system of claim 13, wherein: the first virtual content system is remote from the second virtual content system.

15. The system of claim 13, wherein the first virtual content system is proximate to the second virtual content system.

16. A non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for a first user device in a system configured to display virtual content: generate an image representing a physical environment of the first user device; place virtual content in a location in the image corresponding to a physical location in the physical environment of the first user device responsive to user input received by the first user device; determine a pose of the first user device with respect to the physical location in the physical environment of the first user device; and generate an image of the virtual content based on the pose of the first user device with respect to the physical location, generate a virtual proxy of a second user device that is viewing a second image of the virtual content in the second user device, with a position of the virtual proxy based on a pose of the second user device with respect to the placed virtual content, add the virtual proxy to the image, and render the image of the virtual content and the virtual proxy by the one or more displays of the first user device.

17. The non-transitory machine-readable storage medium of claim 16, wherein the instructions are further executable by the one or more hardware processors to, for a second user device in a second system configured to display the virtual content: sense outputs of a plurality of radiation emitters arranged in a structure in a predetermined pattern; determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by one or more displays of the second user device in a predetermined location relative to the structure.

18. The non-transitory machine-readable storage medium of claim 17, wherein the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device.

19. (Canceled)

20. The non-transitory machine-readable storage medium of claim 17, wherein: the plurality of radiation emitters is configured to emit infrared light; the instructions are further executable by the one or more hardware processors of the second user device to: sense the output of the plurality of radiation emitters, and determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters.

21. The non-transitory machine-readable storage medium of claim 16, wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content; and modifying the virtual content.

22. The non-transitory machine-readable storage medium of claim 17, wherein the second user device comprises at least one optical element, which facilitates viewing of the second image integrated with the second user’s view of the physical environment.

Description:
AUGMENTED REALITY COLLABORATION SYSTEM

Cross-Reference to Related Applications

[0001] The present application claims priority to U.S. Patent Application No. 17/023,220, filed September 16, 2020, the content of which is incorporated by reference herein in its entirety.

Description of Related Art

[0002] The disclosed technology relates generally to augmented reality systems, and more particularly, some embodiments relate to collaboration using such systems.

Summary

[0003] A claimed solution rooted in computer technology overcomes problems specifically arising in the realm of computer technology.

[0004] In general, one aspect disclosed features a system comprising: a system configured to display virtual content, comprising: a user device, comprising: one or more sensors configured to sense data related to a physical environment of the user device, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: place a virtual object in a 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the user device responsive to user input received by the user device, determine a pose of the user device with respect to the physical location in the physical environment of the user device, and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by the one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the user device.

[0005] Embodiments of the system may include one or more of the following features. Some embodiments comprise a second virtual content system, comprising: a structure comprising a plurality of radiation emitters arranged in a predetermined pattern, and a second user device, wherein the second user device comprises: one or more sensors configured to sense outputs of the plurality of radiation emitters, and one or more displays; one or more hardware processors; and a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to: determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by the one or more displays of the second user device in a predetermined location relative to the structure. In some embodiments, the first virtual content system is remote from the second virtual content system. In some embodiments, the first virtual content system is proximate to the second virtual content system. In some embodiments, the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on (i) the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device. In some embodiments, the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the second user device; and add the virtual proxy to the first image based on (i) the pose of the second user device with respect to the structure, and (ii) the pose of the placed virtual object with respect to the structure, wherein the image is projected by the one or more displays of the first user device. In some embodiments, the plurality of radiation emitters is configured to emit infrared light; the one or more sensors on the second user device are configured to sense the output of the plurality of radiation emitters; and the instructions are further executable by the one or more hardware processors in the second user device to determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters. In some embodiments, the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content, and modifying the virtual content. In some embodiments, the second user device may be worn on the head of a user. Some embodiments comprise a handheld electronic device; wherein the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the handheld electronic device, at least one of: selecting the virtual content, and modifying the virtual content. In some embodiments, the second user device further comprises at least one optical element which facilitates viewing of the second image integrated with a view of the physical environment. In some embodiments, the structure is attached to a physical object. 
Some embodiments comprise a second system configured to display virtual content, comprising: a second user device, comprising: a second set of one or more sensors configured to sense data related to a physical environment of the second user device, and a second set of one or more displays; wherein the instructions are further executable by the one or more hardware processors to: place a second virtual object in a 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the second user device responsive to user input received by the second user device, determine a pose of the second user device with respect to the physical location in the physical environment of the second user device, and generate a second image of the virtual content based on the pose of the second user device with respect to the placed virtual object, wherein the second image of the virtual content is displayed by the one or more displays of the second user device in a predetermined location relative to the placed second virtual object.

[0006] In general, one aspect disclosed features a non-transitory machine-readable storage medium encoded with instructions executable by the one or more hardware processors to, for a user device in a system configured to display virtual content: sense a representation of a physical environment of the user device; place a virtual object in a 3D scene displayed by the second user device in a virtual location that corresponds to a physical location in the physical environment of the user device responsive to user input received by the user device; determine a pose of the user device with respect to the physical location in the physical environment of the user device; and generate an image of virtual content based on the pose of the user device with respect to the placed virtual object, wherein the image of the virtual content is projected by one or more displays of the user device in a predetermined location relative to the physical location in the physical environment of the second user device.

[0007] Embodiments of the non-transitory machine-readable storage medium may include one or more of the following features. In some embodiments, the instructions are further executable by the one or more hardware processors to, for a second user device in a second system configured to display the virtual content: sense outputs of a plurality of radiation emitters arranged in a structure in a predetermined pattern; determine a pose of the second user device with respect to the structure based on the sensed outputs of the plurality of radiation emitters, and generate a second image of the virtual content based on the pose of the second user device with respect to the structure, wherein the second image of the virtual content is projected by one or more displays of the second user device in a predetermined location relative to the structure. In some embodiments, the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the first user device; and add the virtual proxy to the second image based on the pose of the first user device with respect to the virtual object, wherein the image is projected by the one or more displays of the second user device. In some embodiments, the instructions are further executable by the one or more hardware processors to: generate a virtual proxy of the second user device; and add the virtual proxy to the first image based on the pose of the second user device with respect to the structure, wherein the image is displayed by the one or more displays of the first user device. In some embodiments, the plurality of radiation emitters is configured to emit infrared light; the instructions are further executable by the one or more hardware processors of the second user device to: sense the output of the plurality of radiation emitters, and determine the pose of the second user device with respect to the structure based on the sensed output of the plurality of radiation emitters. In some embodiments, the instructions are further executable by the one or more hardware processors to perform, responsive to user input received by the user device, at least one of: selecting the virtual content; and modifying the virtual content, wherein second user device comprises at least one optical element, which facilitates viewing of the second image integrated with the second user’s view of physical environment.

Brief Description of the Drawings

[0008] The present disclosure, in accordance with one or more various embodiments, is described in detail with reference to the following figures. The figures are provided for purposes of illustration only and merely depict typical or example embodiments.

[0009] FIG. 1 illustrates an augmented reality collaboration system featuring a physical holopad according to some embodiments of the disclosed technology.

[0010] FIG. 2 illustrates the use of an augmented reality collaboration system featuring a physical holopad in conjunction with a computer-aided design (CAD) system according to some embodiments of the disclosed technology.

[0011] FIG. 3 is a perspective view of the interior of a physical holopad according to some embodiments of the disclosed technology.

[0012] FIG. 4 is a perspective view of a headset according to some embodiments of the disclosed technology.

[0013] FIG. 5 is a perspective view of an add-on device according to some embodiments of the disclosed technology.

[0014] FIG. 6 is a perspective view of a user device with the add-on device attached according to some embodiments of the disclosed technology.

[0015] FIG. 7 illustrates an augmented reality collaboration system featuring a virtual holopad according to some embodiments of the disclosed technology.

[0016] FIG. 8 illustrates some example networked systems according to embodiments of the disclosed technology.

[0017] FIG. 9 is a flowchart of a process of a physical holopad system according to some embodiments of the disclosed technology.

[0018] FIG. 10 is a flowchart of a process of a virtual holopad system according to some embodiments of the disclosed technology.

[0019] FIG. 11 is a flowchart for a process for providing, to a second holopad system user, a virtual proxy of a user in a first holopad system, according to embodiments of the disclosed technology.

[0020] FIG. 12 is an example computing component that may be used to implement various features of embodiments described in the present disclosure.

[0021] The figures are not exhaustive and do not limit the present disclosure to the precise form disclosed.

Detailed Description

[0022] With continued advances in computing and networking technology, computer systems are providing better ways for people to collaborate virtually while collocated or in remote locations. At the same time, augmented reality systems are providing better ways for people to view and modify three-dimensional content. However, attempts to combine these systems have proven unsuccessful. Current systems fail to deliver sufficient visual quality, generally providing both low image quality and an insufficient field of view. These systems also generally lack an ability to provide close-up views, and don’t accommodate users wearing eyeglasses. And these systems don’t allow users to see each other’s eyes, further diminishing the collaborative experience.

[0023] Furthermore, these systems provide insufficient input and control features for users. Current systems are generally complex and ergonomically challenging, for example making text entry difficult, and confronting new users with a steep learning curve. In addition, current systems require extensive preparation for use. For example, many of these systems require data preparation, software development, gesture recognition technologies, and/or time-consuming room and surface scans - and the computing power to perform all of these functions, often simultaneously - before and during each use. Finally, many of these systems are incompatible with current collaboration systems such as existing collaborative meeting solutions.

[0024] Rising to these challenges, some embodiments of the disclosed technology provide augmented reality collaboration systems featuring physical devices that designate a projection location for virtual content displayed by the collaboration system, also referred to herein as physical “holopads.” In these embodiments, images of virtual content are displayed to the users in a predetermined location relative to the physical holopad, such that the virtual content resembles a hologram projected by the holopad. The users may view and interact with virtual content using a variety of user devices, including headsets, tablets, smartphones, controllers, gestures and gesture recognition coupled with virtual control systems, and the like. Each physical holopad not only provides position and orientation information for the system, but also serves as an intuitive physical point of reference for the users. While conventional two-dimensional collaboration systems make it clear where virtual content will appear, shared holographic experiences can appear anywhere, and therefore don’t make it clear to users where to look. With the disclosed holopads, the disclosed technology acts as a shared 3D monitor, making it clear to users of the collaboration system where virtual content will appear, thus facilitating the purpose and effectiveness of the system. In some embodiments, the disclosed technology permits 3D videoconferencing, with a holopad at each location. These embodiments enable geographically-distributed users to feel as though they are all collaborating around the same table.

[0025] In some embodiments, each physical holopad may be a structure that includes a plurality of radiation emitters, such as infrared emitters, arranged in a known pattern. The holopad may be placed in a central location about which users may gather, for example a conference room table. Sensors on each user device may detect emissions from the emitters to determine the pose of the user device with respect to the holopad. As used herein, the “pose” of an object represents its spatial relationship with another object, including position and orientation.
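
By way of illustration only, the following sketch shows one conventional way such a pose could be computed: a perspective-n-point solve over the known emitter pattern, here using OpenCV. The emitter layout, camera intrinsics, and function names are assumptions of this sketch and are not taken from the disclosure.

```python
# Sketch: estimating a user device's pose relative to a holopad from the imaged
# positions of its infrared emitters, via a standard perspective-n-point solve.
# The emitter layout below (one emitter per arm of a cruciform pad) is assumed.
import numpy as np
import cv2

EMITTERS_PAD = np.array([   # emitter positions in the holopad frame (meters)
    [ 0.10, 0.0,  0.00],
    [-0.10, 0.0,  0.00],
    [ 0.00, 0.0,  0.10],
    [ 0.00, 0.0, -0.10],
], dtype=np.float64)

def holopad_pose_from_image(image_points, camera_matrix, dist_coeffs=None):
    """image_points: 4x2 pixel coordinates of the detected emitters, ordered to
    correspond with EMITTERS_PAD. Returns (R, t) mapping holopad coordinates
    into the device's camera frame."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)
    pts = np.asarray(image_points, dtype=np.float64).reshape(-1, 2)
    ok, rvec, tvec = cv2.solvePnP(EMITTERS_PAD, pts, camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP solve failed; check emitter correspondences")
    R, _ = cv2.Rodrigues(rvec)   # convert rotation vector to a 3x3 matrix
    return R, tvec.reshape(3)
```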

[0026] In other embodiments, sensors on a user device may determine the pose of the user device with respect to the physical holopad using object or image recognition technologies. In such embodiments, the physical holopad need not include radiation emitters.

[0027] In any embodiment, determining a pose may include processing outputs of inertial measurement units (IMUs) in the user devices. For example, a user device that takes the form of a headset may include an IMU. Sensors such as cameras typically allow position fixes at relatively low frequencies of around 60 Hz. Conversely, IMUs may provide position changes at a very high rate, for example at a frequency of 200 Hz or higher. The fusion of these two technologies and their relative outputs allows the collaboration system to achieve highly-accurate position fixes, while also allowing it to make highly-accurate predictions of the user's future positions.
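
By way of illustration only, the following sketch shows the fusion idea in simplified form: high-rate IMU integration corrected by low-rate optical fixes, with a prediction step for render latency. The constant-velocity model and blend factor are assumptions of this sketch, not the disclosed algorithm.

```python
# Sketch: blending low-rate camera pose fixes with high-rate IMU updates so the
# estimated pose stays current (and can be extrapolated) between camera frames.
import numpy as np

class PoseFuser:
    def __init__(self, blend=0.1):
        self.position = np.zeros(3)   # device position in the holopad frame (m)
        self.velocity = np.zeros(3)   # estimated velocity (m/s)
        self.blend = blend            # how strongly a camera fix corrects drift

    def on_imu(self, accel, dt):
        """High-rate update (e.g. 200 Hz or more): integrate acceleration."""
        self.velocity += accel * dt
        self.position += self.velocity * dt

    def on_camera_fix(self, measured_position):
        """Low-rate update (e.g. ~60 Hz): pull the estimate toward the
        optically measured position to cancel accumulated IMU drift."""
        self.position += self.blend * (measured_position - self.position)

    def predict(self, lookahead_s):
        """Predict a near-future position, e.g. to compensate render latency."""
        return self.position + self.velocity * lookahead_s
```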

[0028] The system may generate an image of the virtual content for each user device based on its pose with respect to the physical holopad. The user devices may display the images in a predetermined location relative to the physical holopad. For example, the virtual content may appear to the users to hover above the holopad as though the holopad were projecting a hologram of the virtual content.
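
By way of illustration only, the following sketch composes a model-view transform so the content renders at a fixed offset above the holopad for whatever pose the device currently has. The matrix convention and the 0.2 m hover offset are assumptions of this sketch.

```python
# Sketch: building a model-view matrix that keeps virtual content at a fixed
# offset above the holopad regardless of where the user device moves.
import numpy as np

def make_transform(R, t):
    """4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def model_view(device_from_pad_R, device_from_pad_t,
               hover_offset=np.array([0.0, 0.2, 0.0])):
    """device_from_pad_*: pose of the holopad in the device/camera frame
    (e.g. from the PnP solve above). Returns the matrix that places content
    'hover_offset' above the pad for this device's viewpoint."""
    device_from_pad = make_transform(device_from_pad_R, device_from_pad_t)
    pad_from_content = make_transform(np.eye(3), hover_offset)
    return device_from_pad @ pad_from_content
```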

[0029] Some embodiments of the disclosed technology provide augmented reality collaboration systems featuring virtual holopads. In these embodiments, users may view and interact with virtual content using a variety of user devices, including headsets, tablets, smartphones, and the like, without the use of a physical holopad. Instead of placing a physical structure on a surface, the user may employ the user device and surface tracking software (such as ARKit or ARCore) to place a virtual holopad on the surface. These embodiments are ideal for environments where a physical holopad may not be available or convenient, such as a hotel room. Like the physical holopad, the virtual holopad serves as a visual point of reference for the user as well as a means for aligning the coordinate system of the user's environment with that of remote systems that include a physical or virtual holopad. As with the physical holopad, the system generates an image of the virtual content for display on a user device in a predetermined location relative to the virtual holopad. For example, the virtual content may appear to the users to hover above the virtual holopad as though the virtual holopad were projecting a hologram of the virtual content. Regardless of user location and whether a physical or virtual holopad is being used, all users see the virtual content in the same position relative to the holopad, thus creating the perception of the system as presenting a shared 3D/holographic monitor.
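
By way of illustration only, the following sketch shows how a holopad, physical or virtual, can serve as the shared origin that aligns the coordinate systems of different sites: each site exchanges only holopad-relative coordinates. The frame names are assumptions of this sketch.

```python
# Sketch: using each site's holopad as a shared origin so content placed by one
# user appears in the same spot relative to every other user's holopad.
import numpy as np

def to_pad_frame(pad_from_world, point_world):
    """Express a world-space point in the local holopad frame."""
    return pad_from_world[:3, :3] @ point_world + pad_from_world[:3, 3]

def from_pad_frame(world_from_pad, point_pad):
    """Express a holopad-frame point in another site's world frame."""
    return world_from_pad[:3, :3] @ point_pad + world_from_pad[:3, 3]

# Site A shares only pad-relative coordinates over the network; site B maps
# them into its own room using its own holopad pose:
#   point_pad     = to_pad_frame(pad_from_world_A, point_world_A)
#   point_world_B = from_pad_frame(world_from_pad_B, point_pad)
```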

[0030] In some embodiments, one or more of the disclosed augmented reality systems may be interconnected to share virtual content for communication or collaboration. For example, the interconnected systems may be networked, and may include one or more of the physical holopad systems and one or more of the virtual holopad systems. In such embodiments, each user may have one or more permissions. For example, the permissions may include permissions to view or edit the virtual content. The users may change permissions during the collaboration, according to user input and/or specific parameters that may be set on the system at the outset or during the collaboration session. Any of the systems may be remote from one or more other systems. Any of the virtual holopad systems may be collocated with other virtual or physical holopad systems.
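
By way of illustration only, the following sketch shows a minimal per-user permission record that could be updated during a collaboration session; the field names and methods are assumptions of this sketch.

```python
# Sketch: view/edit permissions per participant, changeable mid-session.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Participant:
    name: str
    can_view: bool = True
    can_edit: bool = False

@dataclass
class Session:
    participants: Dict[str, Participant] = field(default_factory=dict)

    def add(self, name, can_view=True, can_edit=False):
        self.participants[name] = Participant(name, can_view, can_edit)

    def set_edit(self, name, allowed):
        # Permissions may change during the session, per user input or presets.
        self.participants[name].can_edit = allowed

    def may_edit(self, name):
        p = self.participants.get(name)
        return bool(p and p.can_view and p.can_edit)
```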

[0031] FIG. 1 illustrates an augmented reality collaboration system 100 featuring a physical holopad according to some embodiments of the disclosed technology. Referring to FIG. 1 , the system 100 includes a physical holopad 102 and a plurality of user devices employed by three users 104a,b,c to view and interact with virtual content 106, which in this example represents a building. In this example, users 104a,b employ headsets 108a,b to view the virtual content 106, while user 104c employs a tablet 110 to view the virtual content 106.

[0032] In the example of FIG. 1, users 104b,c are local, and are physically present in the same room as the holopad 102, and user 104a is remote, in another location, as indicated by broken lines. A virtual proxy of the remote user 104a may be displayed to the local users 104b,c as part of the virtual content 106, for example as described below in detail. In the example of FIG. 1, the virtual proxy represents the headset 108a of the remote user 104a.

[0033] As noted above, the pose of each user device with respect to the physical holopad 102 may be determined by sensing emitters on the holopad, or by object or image recognition. To facilitate use of the emitters, the tablet 110 may include sensors, or may be equipped with an add-on device that includes the sensors, for example as described below.

[0034] The system may include one or more computer systems 116 each featuring a memory and one or more processors executing instructions stored in the memory to generate images of the virtual content 106 for display to the users 104 on the user devices, which may be tethered to the computer systems 116 for power and communications. For example, the headsets may be tethered to the computer system, while the mobile devices may run on their own batteries and compute power. In some embodiments, some or all of this processing may take place at the user devices, which may be battery-powered and contain any and all processors and communication equipment needed to function in the collaboration system. In some embodiments, the user devices may have wired connections to the computer, for example to provide high image quality, compute power, and/or electrical power. In some embodiments, the user devices may have wireless connections to the computer, such as Bluetooth, WiFi, and the like, for example to allow increased mobility of the users 104.

[0035] The users 104 may employ their user devices, or a peripheral device that is directly or indirectly connected to their user devices, to interact with the virtual content 106. For example, a smartphone may include an app that provides tools for interacting with the virtual content. In the example of FIG. 1 , user 104b is shown using a smartphone 112b to interact with the content using a beam 114. In some embodiments, the beam 114 may be a virtual beam generated by the system and visible on the user devices. Interaction with the virtual content may include selection of the virtual content, modification of the virtual content, annotation of the virtual content, and the like.
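
By way of illustration only, the following sketch models the pointing beam as a ray cast from the handheld device and tested against bounding spheres of the virtual content; the object representation and radii are assumptions of this sketch.

```python
# Sketch: selecting virtual content with a pointing beam modeled as a ray.
import numpy as np

def ray_hits_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    d = direction / np.linalg.norm(direction)
    oc = origin - center
    b = np.dot(oc, d)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0:
        return None
    t = -b - np.sqrt(disc)
    return t if t >= 0 else None

def pick(origin, direction, objects):
    """objects: list of (name, center, radius). Returns the nearest hit name."""
    hits = [(ray_hits_sphere(origin, direction, c, r), n) for n, c, r in objects]
    hits = [(t, n) for t, n in hits if t is not None]
    return min(hits)[1] if hits else None
```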

[0036] In some embodiments, the other users 104a,c may also interact with the virtual content 106, either sequentially or concurrently. For example, user 104a may employ a smartphone 112a to interact with the virtual content 106, while user 104c may employ tablet 110. In some embodiments, the users 104a,b may employ their headsets 108a,b to interact with the virtual content 106. In such embodiments, the headsets may include user interfaces including tactile control, gaze control, voice command, and the like. As noted below, in gesture-controlled systems, the headsets 108 may include sensors for detecting the gestures.

[0037] In some embodiments, the disclosed systems may be used in conjunction with other systems. For example, the systems may include applications for creating virtual content using 3D data from existing 3D tools such as computer-aided design (CAD) systems. FIG. 2 illustrates the use of an augmented reality collaboration system 200 featuring a physical holopad in conjunction with a CAD system according to some embodiments of the disclosed technology. Referring to FIG. 2, the system 200 includes a physical holopad 202 and a headset 208 for viewing virtual content 206, which in this example represents a rocket engine. In this example, the system 200 also includes a CAD system 220. The views generated by the CAD system 220 may be coordinated with the views of the virtual content 206. For example, manipulation of one view may affect both views, for example in real time.

[0038] While in FIG. 2 the physical holopad 202 is shown as a standalone unit, the disclosed physical holopads may take any physical form, and may be attached to, or integrated with, one or more physical objects. In FIG. 2, three examples are shown. The physical holopad 202 is a cruciform design. A physical holopad 210 is attached to, or integrated with, the frame of a computer monitor. A physical holopad 212 is attached to, or integrated with, a stand 214 for holding the headsets 208.

[0039] FIG. 3 is a perspective view of the interior of a physical holopad 300 according to some embodiments of the disclosed technology. In these embodiments, the holopad has a cruciform design and four radiation emitters 302a,b,c,d disposed within respective arms of the physical holopad 300. However, as noted above, the disclosed physical holopads may take any physical shape. In some embodiments, the emitters 302 are arranged in a predetermined pattern that is known to the system. These arrangements facilitate determining poses of the user devices with respect to the physical holopad 300.

[0040] FIG. 4 is a perspective view of a headset 400 according to some embodiments of the disclosed technology. Referring to FIG. 4, the headset 400 may include a sensor 402 configured to capture emissions of the emitters of the physical holopads. For example, when the emitters are infrared emitters, the sensor 402 may include an infrared camera configured to capture images of the emitters.

[0041] The headset 400 may include one or more optical elements. The optical element(s) may be implemented as any reflective translucent optic, for example as shown at 404. For example, the optical element(s) 404 may be implemented as an off-axis optic, birdbath optic, prismatic optic, planar optic (i.e., a “waveguide”), or the like. In some of these embodiments, the optical elements are designed to permit other collocated users to see the eyes of the wearer. The translucent optic could be user-replaceable with a reflective opaque optic for a virtual reality experience.

[0042] In some embodiments, the optical element 404 may be replaced or augmented by a display to allow for pass-through augmented reality or virtual reality experiences. In such embodiments, the display may be an LCD, LED, OLED, or other display that is positioned in the user device in front of the user's eyes. In such embodiments, the display facilitates viewing the image of the virtual content integrated with the physical environment. In some of these embodiments, the headset 400 includes a camera for capturing images of the physical environment, and these images are added to the images of virtual content so the wearer sees the virtual content integrated with the physical environment. In such embodiments, the front of the headset may be opaque.
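
By way of illustration only, the following sketch shows simple alpha compositing of rendered virtual content over a camera image of the room, as in the pass-through arrangement described above. The image formats and alpha convention are assumptions of this sketch.

```python
# Sketch: pass-through compositing for an opaque headset, blending the rendered
# virtual content (with alpha) over the camera image of the physical room.
import numpy as np

def composite(camera_rgb, virtual_rgba):
    """camera_rgb: HxWx3 uint8; virtual_rgba: HxWx4 uint8 with alpha=0 where
    there is no virtual content. Returns the combined HxWx3 uint8 frame."""
    cam = camera_rgb.astype(np.float32) / 255.0
    rgb = virtual_rgba[..., :3].astype(np.float32) / 255.0
    a = virtual_rgba[..., 3:4].astype(np.float32) / 255.0
    out = a * rgb + (1.0 - a) * cam
    return (out * 255.0).astype(np.uint8)
```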

[0043] The headset 400 may include a head attachment mechanism 406. The head attachment mechanism 406 may be implemented in any manner. For example, the head attachment mechanism 406 may be implemented as a headband or the like. In some embodiments, the optical element 404 is positioned so the headset 400 provides ample room to accommodate users wearing eyeglasses.

[0044] In some embodiments, the user device for viewing the virtual content may be implemented as a handheld computer, such as a smartphone, tablet, or the like. In some embodiments, these handheld user devices may include sensors capable of detecting the emitters of the physical holopad. Where the user devices are not capable of such sensing, some embodiments may provide, or be configured to accept, an add-on device capable of performing the sensing.

[0045] FIG. 5 is a perspective view of an add-on device 500 according to some embodiments of the disclosed technology. Referring to FIG. 5, the add-on device 500 may include an attachment device 502 configured to attach the add-on device 500 to a user device. In the example of FIG. 5, the attachment device 502 is implemented as a clamp. However, the attachment device 502 may attach to the user device in any manner.

[0046] FIG. 6 is a perspective view of a user device 600 with the add-on device 500 attached according to some embodiments of the disclosed technology. In the example of FIG. 6, the user device is implemented as a smartphone. However, the user device 600 may be any device capable of performing the functions described herein.

[0047] In some embodiments, the add-on device 500 may be configured to communicate wirelessly with the user device 600. For example, the add-on device 500 may include a Bluetooth transceiver, NFC device, WiFi device, or the like. In some embodiments, the add-on device 500 may have a wired communication connection to the user device 600. For example, the add-on device 500 may have a connector configured to connect to a connector of the user device 600. In some embodiments, there is no wired or wireless communications connection directly between add-on device 500 and user device 600. Instead, the add-on device 500 may communicate to a computer network, to which user device 600 is also connected, for example via that user device's cellular or internet connection (e.g., via WiFi). In some embodiments, the add-on device 500 may include an NFC tag to pair the add-on device 500 and the user device 600 for direct communications. For example, a user may enter a conference room and pick up an add-on device 500, which pairs with and identifies itself to the user device 600, which in turn automatically opens a controller app on the user device 600.

[0048] In some embodiments, the add-on device 500 may have a power source independent of the power source of the user device 600. In some embodiments, the add-on device 500 may be configured to draw power from the user device 600. In some embodiments, the add-on device 500 may be configured to provide power to the user device 600. All combinations of the described configurations are contemplated.

[0049] FIG. 7 illustrates an augmented reality collaboration system 700 featuring a virtual holopad 702 according to some embodiments of the disclosed technology. Referring to FIG. 7, the system 700 includes a user device that a user can use to view and interact with virtual content 706, which in this example represents a rocket engine. In the example of FIG. 7, the user device is implemented as a tablet 710. However, the user device may be any device capable of performing the functions described herein. For example, the user device may be implemented as a headset, smartphone, laptop, and the like.

[0050] A user may employ the tablet 710 to place a virtual object in a 3D scene displayed by the tablet 710 in a virtual location that corresponds to a physical location in the user’s physical environment. In this example, the user has placed the virtual holopad 702 in the virtual location corresponding to the surface of table 720. Of course, the user may place the virtual holopad 702 in virtual locations corresponding to other surfaces in the user’s physical environment instead.

[0051] As with the physical holopad, the virtual holopad serves as a point of reference for the user. In some embodiments, the system may determine the pose of the tablet 710 with respect to the physical surface indicated by the virtual holopad 702. For example, an application executing on the tablet 710 may use one or more sensors to scan and track surfaces in the physical environment, and to determine the physical location indicated by the virtual holopad with respect to the tablet 710.

[0052] The system generates an image of the virtual content 706 and the virtual holopad 702 based on the pose of the tablet 710 with respect to the surface of table 720 selected by the user and enabled by surface tracking software implemented by the tablet 710. The tablet 710 may then display the image in a predetermined location relative to the virtual holopad 702. For example, the virtual content may appear to the user to hover above the virtual holopad as though the holopad were projecting a hologram of the virtual content. The image may include an image of the virtual holopad 702. The image may include an image of the physical environment, as shown in FIG. 7, where the image includes an image of the table 720.

[0053] FIG. 8 illustrates some example networked systems according to embodiments of the disclosed technology. One or more of the systems may include a wireless router to connect one or more components of the system. The wireless router may employ any wireless technology, for example including WiGig, WiFi6, 5G mmWave, and the like.

[0054] Several example system configurations are shown, interconnected by a network 812. However, it should be appreciated that other system configurations may be employed and interconnected in any number. While the user devices are illustrated as headsets, any sort of user device may be employed. A configuration including two users and a physical holopad in one room is shown at 804. In configuration 804, both users are viewing the same virtual content. A configuration including two users and a first physical holopad in one room, and one user and a second physical holopad in a second room, is shown at 806. In configuration 806, all three users are viewing the same virtual content. A configuration including one user and a first physical holopad in one room, and one user and a second physical holopad in a second room, is shown at 808. In configuration 808, both users are viewing the same virtual content. A configuration including one user and a virtual holopad, indicated by a broken line, in one room is shown at 810.

[0055] FIG. 9 is a flowchart of a process 900 of a physical holopad system according to some embodiments of the disclosed technology. For example, the process 900 may be performed by the physical holopad system 100 of FIG. 1. For clarity, the process 900 is described for one user device. However, the process 900 may be performed for each user device in a physical holopad system.

[0056] Referring to FIG. 9, the process 900 may include sensing outputs of a plurality of radiation emitters disposed in a structure, at 902. In the example of FIG. 1, each of the headsets 108 may include one or more sensors, for example such as the sensor 402 of FIG. 4. In gesture-controlled systems, the headsets 108 may include additional sensors for detecting the gestures. Each user device may include a sensor, for example integrated in the user device or as part of an add-on device such as the add-on device 500 of FIG. 5. The emitters may be infrared emitters, and the sensor may be an infrared camera that captures an image of the infrared emitters.

[0057] Referring again to FIG. 9, the process 900 may include determining a pose of the user device with respect to the structure based on the sensed outputs of the radiation emitters and outputs of the IMU, at 904. Continuing with the example of FIG. 1, the computer system(s) 116 may determine a pose of the user device 108 with respect to the physical holopad 102.

[0058] Referring again to FIG. 9, the process 900 may include generating an image of virtual content based on the pose of the user device with respect to the structure, at 906. Continuing with the example of FIG. 1 , the computer system(s) 116 may generate an image of the virtual content 106 for the user device 108 based on the pose of the user device 108 with respect to the holopad 102.

[0059] Each user device may have a different pose with respect to the holopad 102. Therefore each image may be different, and may present the virtual content as viewed from the respective pose. The system may fix the orientation of the virtual content with respect to the holopad 102, and may update the images frequently. For example, the images may be refreshed at a high rate using complex algorithms such as fused camera and inertial sensors with prediction based on head motion models. Therefore, the virtual content may appear to remain stationary, so that the users may move about the room to view the virtual content from different perspectives.

[0060] For user devices that permit stereoscopic viewing, the system may generate a pair of images. Such user devices may display the pair of images concurrently as a stereoscopic image of the virtual content.
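
By way of illustration only, the following sketch derives left- and right-eye poses from a single tracked device pose by offsetting along the device's lateral axis; the interpupillary distance and axis convention are assumptions of this sketch.

```python
# Sketch: producing the eye poses used to render a stereoscopic image pair.
import numpy as np

def eye_poses(device_R, device_t, ipd_m=0.063):
    """device_R, device_t: device pose in the holopad frame.
    Returns ((R, t_left), (R, t_right)) for rendering the two images."""
    right_axis = device_R[:, 0]          # device's local +x (lateral) direction
    half = 0.5 * ipd_m * right_axis
    return (device_R, device_t - half), (device_R, device_t + half)
```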

[0061] Referring again to FIG. 9, the process 900 may include displaying an image on an optical element of the user device in a predetermined location relative to the structure, at 908. Continuing the example of FIG. 1 , the computer system(s) 116 may transmit the image to the user device, and responsive to receiving the image, the user device may display the image on its display or optical element. The process 900 may repeat, providing a seamless viewing experience to the user.

[0062] FIG. 10 is a flowchart of a process 1000 of a virtual holopad system according to some embodiments of the disclosed technology. For example, the process 1000 may be performed by the virtual holopad system 700 of FIG. 7. For clarity, the process 1000 is described for one user device. However, the process 1000 may be performed for each user device in the virtual holopad system.

[0063] Referring to FIG. 10, the process 1000 may include capturing data related to the physical environment, at 1002. In the example of FIG. 7, the tablet 710 may include a sensor for capturing this data. The sensor may be implemented as a camera, depth sensor, or the like, or combinations thereof.

[0065] Referring again to FIG. 10, the process 1000 may include determining a pose of the user device with respect to the physical environment, at 1003, and placing a virtual object in a 3D scene displayed by the user device in a virtual location that corresponds to a physical location in the physical environment responsive to user input received by the user device, at 1004. Continuing with the example of FIG. 7, a user may employ the tablet 710 to place the virtual holopad 702 on the table 720. The virtual holopad 702 then appears to remain in that location even as the tablet 710 is moved about by the user.

[0066] Referring again to FIG. 10, the process 1000 may include determining a pose of the user device with respect to the physical location of the placed virtual object, at 1006. Continuing with the example of FIG. 7, the tablet 710 may determine a pose of the tablet 710 with respect to the virtual holopad 702. That is, the tablet 710 may determine a pose of the tablet 710 with respect to the physical location on the table 720 where the virtual holopad 702 has been placed in the virtual environment. This pose may be determined in any suitable manner. For example, the pose may be determined using a commercially available framework such as ARKit or ARCore, which creates a virtual map of the surfaces and other notable physical characteristics of the user's physical environment that it is programmed to recognize.
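
By way of illustration, placement of the virtual object at 1004 might be driven by a screen tap that is ray-cast against the tracked surfaces. In the sketch below, the ar_session and scene objects and their methods are hypothetical stand-ins for whatever tracking framework the device provides (such as ARKit or ARCore); they are not actual APIs of those frameworks.

    # Illustrative sketch only. `ar_session` and `scene` are hypothetical
    # wrappers around a platform tracking framework.
    def place_virtual_holopad(ar_session, scene, tap_x, tap_y):
        """Place the virtual holopad where a screen tap hits a tracked surface."""
        hit = ar_session.raycast_to_plane(tap_x, tap_y)  # hypothetical call
        if hit is None:
            return None  # no tracked surface under the tap
        # Anchoring to the hit pose lets the framework refine the placement
        # as its map of the environment improves.
        anchor = ar_session.create_anchor(hit.pose)      # hypothetical call
        return scene.add_node(model="virtual_holopad", anchor=anchor)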

[0067] Referring again to FIG. 10, the process 1000 may include generating an image of virtual content based on the pose of the user device with respect to the physical location of the virtual object, at 1008. Continuing with the example of FIG. 7, the tablet 710 may generate an image of the virtual content 706 for the user based on the pose of the tablet 710 with respect to the physical location that corresponds to the user's placement of the virtual holopad. The image presents the virtual content 706 as viewed from that pose. The system may fix the orientation of the virtual content with respect to the virtual holopad 702, and may update the images frequently. Therefore, the virtual content appears to remain stationary, so that the user may move about the room to view the virtual content from different perspectives.

[0068] For user devices that permit stereoscopic viewing, the system may generate a pair of images. Such user devices may display the pair of images concurrently as a stereoscopic image of the virtual content.

[0069] Referring again to FIG. 10, the process 1000 may include displaying an image on a display or optical element of the user device in a predetermined location relative to the physical location of the placed virtual object, at 1010. Continuing the example of FIG. 7, the image may be displayed on the tablet 710 as illustrated in FIG. 7. In other embodiments, the image may be displayed on other user devices such as headsets, smartphones, laptops, and the like. The process 1000 may repeat, providing a seamless viewing experience to the user.

[0070] In some embodiments, users may share their images of the virtual content with other users. For example, while explaining a particular view of the virtual content, a presenter may share the presenter’s view of the virtual content so other users see the same view.
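
One way such view sharing might be realized is for the presenter's device to broadcast its current pose so that other devices temporarily render the virtual content from that pose rather than their own. The message format and connection object in this sketch are assumptions made for illustration.

    # Illustrative sketch only: broadcast the presenter's pose for shared viewing.
    import json

    def share_presenter_view(connection, presenter_pose_4x4):
        """Send the presenter's 4x4 pose; receivers render from this pose
        until view sharing ends. `connection` is a hypothetical transport."""
        message = {
            "type": "shared_view",
            "pose": [list(row) for row in presenter_pose_4x4],  # row-major 4x4
        }
        connection.send(json.dumps(message).encode("utf-8"))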

[0071] In some embodiments, each networked system may include, in images provided to its local users, representations of users and/or user devices in other systems. These representations, referred to herein as “virtual proxies,” may take any form. For example, a virtual proxy of a user may represent only the user devices currently employed by that user. However, the virtual proxies may be represented in any way. In some embodiments, movements of the users may be tracked to provide more realistic virtual proxies. For example, the systems may track movement of a user’s head, hand, body, eyes, and the like.

[0072] FIG. 11 is a flowchart for a process 1100 for providing, to a second holopad system user, a virtual proxy of a user in a first holopad system, according to embodiments of the disclosed technology. Either of the first and second holopad systems may be a virtual holopad system or a physical holopad system.

[0073] Referring to FIG. 11 , the process 1100 may include generating a virtual proxy of a user device in the physical holopad system, at 1102. The virtual proxy may include one or more three-dimensional objects. The virtual proxy may represent one or more user devices, the user, a control device being used by the user, or any combination thereof.

[0074] Referring again to FIG. 11 , the process 1100 may include adding the virtual proxy to an image of virtual content for the second holopad system, at 1104. In some embodiments, the virtual proxy is added to the image of virtual content based on the pose of the user device with respect to the holopad in the first holopad system. This use of pose preserves the spatial relationship between the user and the virtual content, so that relationship is replicated with the virtual proxy in the second holopad system.

[0075] In some embodiments, the first holopad system may add the virtual proxy to the images of the virtual content, and provide those images to the second holopad system. Alternatively, the first holopad system may transmit, to the second holopad system, the virtual proxy, or information representing the virtual proxy, along with information describing the pose of the user device with respect to the structure, thereby enabling the second holopad system to generate an image of the virtual content that includes the virtual proxy.
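
Where the pose information is transmitted rather than the finished images, the exchanged data might look like the following sketch: the first system describes the user device's pose relative to its own holopad, and the second system composes that pose with the pose of its local holopad to place the virtual proxy. The field names and JSON encoding are assumptions introduced only for illustration.

    # Illustrative sketch only: describe a proxy update for a remote system.
    import json

    def proxy_update_message(user_id, device_type, T_holopad_device):
        """Encode a user's device pose relative to the local holopad so a remote
        system can position a virtual proxy in the same spatial relationship."""
        return json.dumps({
            "type": "virtual_proxy",
            "user_id": user_id,
            "device_type": device_type,  # e.g. "headset" or "tablet"
            "pose_holopad_device": [list(row) for row in T_holopad_device],
        })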

[0076] Finally, the process 1100 may include displaying the image in the second holopad system, at 1106. For example, the image may be displayed on one or more user devices employed by users of the second holopad system to view the virtual content.

[0077] In some embodiments, the disclosed systems may cooperate to present the virtual content in the same geographical orientation at each system. In such embodiments, each system may use compass directions to orient the virtual content. For example, virtual content representing a building may be oriented so the north side of the building faces north. In such embodiments, the physical holopad may include a compass. In some embodiments, the compass may be visible to the users so the users can rotate the holopad to face north. In some embodiments, the compass may include a transmitter to transmit direction information to the devices that generate the images of the virtual content, which may employ that direction information to rotate the virtual content accordingly. In some embodiments, the physical holopad may include a design element that indicates how the holopad should be placed in a room to provide a desired perspective of the virtual content to an audience. For example, the top of the holopad may have an arrow with the word “Audience” to indicate proper orientation.
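
As a sketch of the compass-based alignment, the virtual content can be rotated about the vertical axis by the difference between the holopad's measured heading and the content's intended heading. The sign convention and the assumption that headings are clockwise bearings from north are illustrative choices, not requirements of the system.

    # Illustrative sketch only: align content to geographic north.
    import numpy as np

    def north_alignment_matrix(holopad_heading_deg, content_heading_deg=0.0):
        """Rotation about the vertical (y) axis that compensates for the
        holopad's orientation so, e.g., a building's north side faces north."""
        yaw = np.radians(holopad_heading_deg - content_heading_deg)
        c, s = np.cos(yaw), np.sin(yaw)
        return np.array([[  c, 0.0,   s, 0.0],
                         [0.0, 1.0, 0.0, 0.0],
                         [ -s, 0.0,   c, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])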

[0078] These techniques may be employed for the positioning of the virtual proxies as well. For example, when a user of a first holopad is located to the west of the virtual content, the virtual proxy may be presented in the second holopad system to the west of the virtual content. In some cases, users in multiple systems may occupy the same position relative to the virtual content. Superimposing virtual proxies of these users could be disconcerting. Therefore, in some embodiments, the system may display such virtual proxies offset from one another so that they do not appear on top of each other.
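
The overlap avoidance mentioned above might be as simple as nudging proxies apart whenever two of them would be rendered closer than some minimum separation. The separation distance and the pairwise strategy in this sketch are assumptions made for illustration.

    # Illustrative sketch only: keep virtual proxies from overlapping.
    import numpy as np

    def spread_coincident_proxies(positions, min_separation=0.4):
        """Nudge later proxies away from earlier ones when they would be
        rendered within min_separation meters of each other."""
        positions = [np.asarray(p, dtype=float) for p in positions]
        for i in range(len(positions)):
            for j in range(i + 1, len(positions)):
                d = positions[j] - positions[i]
                dist = np.linalg.norm(d)
                if dist < min_separation:
                    # Push along the line between them, or along x if coincident.
                    direction = d / dist if dist > 1e-6 else np.array([1.0, 0.0, 0.0])
                    positions[j] = positions[i] + direction * min_separation
        return positions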

[0079] It may be desirable to match the lighting conditions of the virtual content and virtual proxies with the lighting conditions where the virtual content will be shown. Therefore, in some embodiments, the physical holopad and/or user devices may include a camera for detecting ambient lighting conditions. The camera may have a wide field of view, for example having a fisheye lens. The computing system(s) and/or devices generating the images of the virtual content may then adjust the lighting of the virtual content accordingly.
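
A crude sketch of the lighting match is to scale the intensity of the virtual lights by the mean luminance of a frame from the wide-field camera. The luminance weights and the linear mapping are assumptions; a real system might also estimate light direction and color.

    # Illustrative sketch only: scale virtual lighting from a camera frame.
    import numpy as np

    def ambient_light_scale(fisheye_rgb, nominal_intensity=1.0):
        """Return a light intensity scaled by the mean luminance of an 8-bit
        RGB frame captured by the wide-field (e.g. fisheye) camera."""
        img = np.asarray(fisheye_rgb, dtype=np.float32) / 255.0
        luminance = 0.2126 * img[..., 0] + 0.7152 * img[..., 1] + 0.0722 * img[..., 2]
        return nominal_intensity * float(luminance.mean())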

[0080] FIG. 12 depicts a block diagram of an example computer system 1200 in which embodiments described herein may be implemented. The computer system 1200 includes a bus 1202 or other communication mechanism for communicating information, and one or more hardware processors 1204 coupled with bus 1202 for processing information. Hardware processor(s) 1204 may be, for example, one or more general purpose microprocessors.

[0081] The computer system 1200 also includes a main memory 1206, such as a random access memory (RAM), cache and/or other dynamic storage devices, coupled to bus 1202 for storing information and instructions to be executed by processor 1204. Main memory 1206 also may be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1204. Such instructions, when stored in storage media accessible to processor 1204, render computer system 1200 into a special-purpose machine that is customized to perform the operations specified in the instructions.

[0082] The computer system 1200 further includes a read only memory (ROM) 1208 or other static storage device coupled to bus 1202 for storing static information and instructions for processor 1204. A storage device 1210, such as a magnetic disk, optical disk, or USB thumb drive (Flash drive), etc., is provided and coupled to bus 1202 for storing information and instructions.

[0083] The computer system 1200 may be coupled via bus 1202 to a display 1212, such as a liquid crystal display (LCD) (or touch screen), for displaying information to a computer user. An input device 1214, including alphanumeric and other keys, is coupled to bus 1202 for communicating information and command selections to processor 1204. Another type of user input device is cursor control 1216, such as a mouse, a trackball, or cursor direction keys for communicating direction information and command selections to processor 1204 and for controlling cursor movement on display 1212. In some embodiments, the same direction information and command selections as cursor control may be implemented via receiving touches on a touch screen without a cursor.

[0084] The computing system 1200 may include a user interface module to implement a GUI that may be stored in a mass storage device as executable software codes that are executed by the computing device(s). This and other modules may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.

[0085] In general, the words "component," "engine," "system," "database," "data store," and the like, as used herein, can refer to logic embodied in hardware or firmware, or to a collection of software instructions, possibly having entry and exit points, written in a programming language, such as, for example, Java, C, C++, and Python. A software component may be compiled and linked into an executable program, installed in a dynamic link library, or may be written in an interpreted programming language such as, for example, BASIC, Perl, or Python. It will be appreciated that software components may be callable from other components or from themselves, and/or may be invoked in response to detected events or interrupts. Software components configured for execution on computing devices may be provided on a computer readable medium, such as a compact disc, digital video disc, flash drive, magnetic disc, or any other tangible medium, or as a digital download (and may be originally stored in a compressed or installable format that requires installation, decompression or decryption prior to execution). Such software code may be stored, partially or fully, on a memory device of the executing computing device, for execution by the computing device. Software instructions may be embedded in firmware, such as an EPROM. It will be further appreciated that hardware components may be comprised of connected logic units, such as gates and flip-flops, and/or may be comprised of programmable units, such as programmable gate arrays or processors.

[0086] The computer system 1200 may implement the techniques described herein using customized hard-wired logic, one or more ASICs or FPGAs, firmware and/or program logic which in combination with the computer system causes or programs computer system 1200 to be a special-purpose machine. According to one embodiment, the techniques herein are performed by computer system 1200 in response to processor(s) 1204 executing one or more sequences of one or more instructions contained in main memory 1206. Such instructions may be read into main memory 1206 from another storage medium, such as storage device 1210. Execution of the sequences of instructions contained in main memory 1206 causes processor(s) 1204 to perform the process steps described herein. In alternative embodiments, hard-wired circuitry may be used in place of or in combination with software instructions.

[0087] The term "non-transitory media," and similar terms, as used herein refers to any media that store data and/or instructions that cause a machine to operate in a specific fashion. Such non-transitory media may comprise non-volatile media and/or volatile media. Non-volatile media includes, for example, optical or magnetic disks, such as storage device 1210. Volatile media includes dynamic memory, such as main memory 1206. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, hard disk, solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM, NVRAM, any other memory chip or cartridge, and networked versions of the same.

[0088] Non-transitory media is distinct from but may be used in conjunction with transmission media. Transmission media participates in transferring information between non-transitory media. For example, transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise bus 1202. Transmission media can also take the form of acoustic or light waves, such as those generated during radio-wave and infra-red data communications.

[0089] The computer system 1200 also includes a communication interface 1218 coupled to bus 1202. The communication interface 1218 provides a two-way data communication coupling to one or more network links that are connected to one or more local networks. For example, communication interface 1218 may be an integrated services digital network (ISDN) card, cable modem, satellite modem, or a modem to provide a data communication connection to a corresponding type of telephone line. As another example, communication interface 1218 may be a local area network (LAN) card to provide a data communication connection to a compatible LAN (or a WAN component to communicate with a WAN). Wireless links may also be implemented. In any such implementation, communication interface 1218 sends and receives electrical, electromagnetic or optical signals that carry digital data streams representing various types of information.

[0090] A network link typically provides data communication through one or more networks to other data devices. For example, a network link may provide a connection through a local network to a host computer or to data equipment operated by an Internet Service Provider (ISP). The ISP in turn provides data communication services through the world wide packet data communication network now commonly referred to as the "Internet." Local network and Internet both use electrical, electromagnetic or optical signals that carry digital data streams. The signals through the various networks and the signals on the network link and through communication interface 1218, which carry the digital data to and from computer system 1200, are example forms of transmission media.

[0091] The computer system 1200 can send messages and receive data, including program code, through the network(s), network link and communication interface 1218. In the Internet example, a server might transmit a requested code for an application program through the Internet, the ISP, the local network and the communication interface 1218.

[0092] The received code may be executed by processor 1204 as it is received, and/or stored in storage device 1210, or other non-volatile storage for later execution.

[0093] Each of the processes, methods, and algorithms described in the preceding sections may be embodied in, and fully or partially automated by, code components executed by one or more computer systems or computer processors comprising computer hardware. The one or more computer systems or computer processors may also operate to support performance of the relevant operations in a "cloud computing" environment or as a "software as a service" (SaaS). The processes and algorithms may be implemented partially or wholly in application-specific circuitry. The various features and processes described above may be used independently of one another, or may be combined in various ways. Different combinations and subcombinations are intended to fall within the scope of this disclosure, and certain method or process blocks may be omitted in some implementations. The methods and processes described herein are also not limited to any particular sequence, and the blocks or states relating thereto can be performed in other sequences that are appropriate, or may be performed in parallel, or in some other manner. Blocks or states may be added to or removed from the disclosed example embodiments. The performance of certain of the operations or processes may be distributed among computer systems or computer processors, not only residing within a single machine, but deployed across a number of machines.

[0094] As used herein, a circuit might be implemented utilizing any form of hardware, or a combination of hardware and software. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a circuit. In implementation, the various circuits described herein might be implemented as discrete circuits or the functions and features described can be shared in part or in total among one or more circuits. Even though various features or elements of functionality may be individually described or claimed as separate circuits, these features and functionality can be shared among one or more common circuits, and such description shall not require or imply that separate circuits are required to implement such features or functionality. Where a circuit is implemented in whole or in part using software, such software can be implemented to operate with a computing or processing system capable of carrying out the functionality described with respect thereto, such as computer system 1200.

[0095] As used herein, the term “or” may be construed in either an inclusive or exclusive sense. Moreover, the description of resources, operations, or structures in the singular shall not be read to exclude the plural. Conditional language, such as, among others, “can,” “could,” “might,” or “may,” unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps.

[0096] Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. Adjectives such as "conventional," "traditional," "normal," "standard," "known," and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technology that may be available or known now or at any time in the future. The presence of broadening words and phrases such as "one or more," "at least," "but not limited to" or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent.