Title:
SYSTEMS AND METHODS FOR APPLICATIONS OF AUGMENTED REALITY
Document Type and Number:
WIPO Patent Application WO/2019/203952
Kind Code:
A1
Abstract:
Systems and methods are provided that relate to augmented reality applications that may be used in educational settings. For example, such augmented reality applications may include the creation of virtual two- or three-dimensional shapes and/or graphs via the drawing of virtual lines. Other augmented reality applications may include the building of molecules via the importation of virtual atoms and the creation of virtual bonds between the virtual atoms. Created virtual objects, such as shapes and molecules, may be stored in a memory of an augmented reality device or a remote server. Additionally, control of created or imported virtual objects may be selectively transferred between networked augmented reality devices operating in the same network session.

Inventors:
AWAD TARIK EROL JAMES (US)
BUSH LINDA (US)
CHRISTIAN MARK (US)
DI MISCIO FRANCO (US)
SCOTT MATTHEW (US)
TONKS DANIEL (US)
WEBB JOANNA (US)
Application Number:
PCT/US2019/021403
Publication Date:
October 24, 2019
Filing Date:
March 08, 2019
Assignee:
PEARSON EDUCATION INC (US)
International Classes:
G03B19/18; G06T19/20; G06F3/01; G06F3/0481; G06F3/0488
Foreign References:
US20180092706A12018-04-05
US20180095541A12018-04-05
US20160004844A12016-01-07
Attorney, Agent or Firm:
MILHOLLIN, Andrew, C. (US)
Claims:
CLAIMS

1. An augmented reality device (100) comprising:

a display (130);

a memory (104);

a camera (120) configured to capture video data; and

a processor (102) configured to execute instructions for:

displaying an augmented reality scene on the display, wherein the augmented reality scene comprises a physical scene that is overlaid with a virtual scene, and wherein the display is periodically refreshed to show changes to the virtual scene;

calibrating the augmented reality scene (320);

displaying a menu (600) to a user, the menu comprising a plurality of icons (602, 604, 606, 608, 610, 612, 614, 616, 618, 620, 622), each icon of the plurality of icons, when selected by the user, corresponding to a respectively different action performed by the processor to affect the augmented reality scene (322);

monitoring activity of the user based on first video data captured by the camera, the activity comprising at least an indication of a desired interaction between the user and the menu;

updating the augmented reality scene based on the monitored activity of the user (324); and

saving analytics data on the memory based on the monitored activity of the user (324).

2. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:

generating virtual lines in the augmented reality scene, the virtual lines each having positions and orientations defined by the user (404);

generating a virtual shape having boundaries defined by the virtual lines (406);

detecting, based on second video data captured by the camera, a first user gesture corresponding to acceptance of the virtual shape (410);

placing the virtual shape at a user-defined location in the augmented reality scene (414);

detecting, based on third video data captured by the camera, a second user gesture corresponding to a command to scale the size of the virtual shape by a defined amount;

scaling the size of the virtual shape by the defined amount;

detecting, based on fourth video data captured by the camera, a third user gesture corresponding to a command to add a virtual label to the augmented reality scene adjacent to the user-created virtual shape; and

adding the virtual label to the augmented reality scene adjacent to the user-created virtual shape.

3. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:

detecting, based on second video data captured by the camera, that the user has selected a portion of a virtual shape in the augmented reality scene;

causing the display to highlight the portion of the virtual shape (504) by changing a color in which the portion of the virtual shape is rendered by the display;

automatically determining a dimensional measurement of the portion of the virtual shape (506);

automatically generating a virtual label that depicts the dimensional measurement (508); and

automatically placing the virtual label in the augmented reality scene (508).

4. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:

adding a first virtual atom to the augmented reality scene (904);

adding a second virtual atom to the augmented reality scene (904);

automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom (916);

automatically generating a virtual bond having the identified bond length; and

automatically placing the virtual bond in the augmented reality scene (920), wherein the virtual bond extends from the first virtual atom to the second virtual atom.

5. The augmented reality device of claim 4, wherein the processor is further configured to execute instructions for:

automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length (918).

6. The augmented reality device of claim 4, wherein, after the virtual bond is placed in the augmented reality scene, the augmented reality scene comprises a virtual molecule that includes a plurality of virtual atoms, the plurality of virtual atoms including the first virtual atom, the second virtual atom, and at least one additional virtual atom, and wherein the processor is further configured to execute instructions for:

automatically repositioning the plurality of virtual atoms of the virtual molecule into a molecular orientation based on a quantity of virtual atoms in the virtual molecule (922).

7. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:

adding a plurality of virtual atoms to the augmented reality scene (904);

determining, based on second video data captured by the camera, that the user has selected a first virtual atom of the plurality of virtual atoms (906);

determining, based on third video data captured by the camera, that the user has selected a connect atoms icon of the plurality of icons from the menu (910);

automatically identifying a set of virtual atoms comprising all virtual atoms of the plurality of virtual atoms located within a predetermined range of the first virtual atom (912);

causing the display to highlight the set of virtual atoms by changing a color in which the display renders the set of virtual atoms;

determining, based on fourth video data captured by the camera, that the user has selected a second virtual atom of the set of virtual atoms (914);

automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom based on a first element of the first virtual atom and a second element of the second virtual atom (916);

automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length (918);

adding a virtual bond to the augmented reality scene to connect the first virtual atom to the second virtual atom, the virtual bond having a length equal to the identified bond length (920); and

automatically repositioning the plurality of virtual atoms based on a number of atoms of a molecule that comprises the first virtual atom and the second virtual atom (922).

8. The augmented reality device of claim 1, wherein the processor is further configured to execute instructions for:

detecting, based on second video data captured by the camera, a user gesture corresponding to transferring control of a virtual object in the augmented reality scene to a networked augmented reality device (1304);

transferring control of the virtual object from the augmented reality device to the networked augmented reality device (1304);

detecting, based on third video data captured by the camera, a user gesture corresponding to revoking control of the virtual object from the networked augmented reality device (1308); and

transferring control of the virtual object from the networked augmented reality device to the augmented reality device (1308).

9. A method performed by executing computer-readable instructions with a processor (102) of an augmented reality device (100), the method comprising:

displaying an augmented reality scene on a display (130) of the augmented reality device, wherein the augmented reality scene comprises a physical scene that is overlaid with a virtual scene;

periodically refreshing the display to show changes to the virtual scene;

calibrating the augmented reality scene (320);

displaying a menu (600) to a user, the menu comprising a plurality of icons (602, 604, 606, 608, 610, 612, 614, 616, 618, 620, 622), each icon of the plurality of icons, when selected by the user, corresponding to a respectively different action performed by the processor to affect the augmented reality scene (322);

monitoring activity of the user based on first video data captured by a camera of the augmented reality device, the activity comprising at least an indication of a desired interaction between the user and the menu;

updating the augmented reality scene based on the monitored activity of the user (324); and

saving analytics data on a memory (104) of the augmented reality device based on the monitored activity of the user (324).

10. The method of claim 9, further comprising:

generating virtual lines in the augmented reality scene, the virtual lines each having positions and orientations defined by the user (404);

generating a virtual shape having boundaries defined by the virtual lines (406);

detecting, based on second video data captured by the camera, a first user gesture corresponding to acceptance of the virtual shape;

placing the virtual shape at a user-defined location in the augmented reality scene (410);

detecting, based on third video data captured by the camera, a second user gesture corresponding to a command to scale the size of the virtual shape by a defined amount;

scaling the size of the virtual shape by the defined amount;

detecting, based on fourth video data captured by the camera, a third user gesture corresponding to a command to add a virtual label to the augmented reality scene adjacent to the user-created virtual shape; and

adding the virtual label to the augmented reality scene adjacent to the user-created virtual shape.

11. The method of claim 9, further comprising:

detecting, based on second video data captured by the camera, that the user has selected a portion of a virtual shape in the augmented reality scene;

causing the display to highlight the portion of the virtual shape (504) by changing a color in which the portion of the virtual shape is rendered by the display;

automatically determining a dimensional measurement of the portion of the virtual shape (506);

automatically generating a virtual label that depicts the dimensional measurement (508); and

automatically placing the virtual label in the augmented reality scene (508).

12. The method of claim 9, further comprising:

adding a first virtual atom to the augmented reality scene (904);

adding a second virtual atom to the augmented reality scene (904);

automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom (916);

automatically generating a virtual bond having the identified bond length; and

automatically placing the virtual bond in the augmented reality scene (920), wherein the virtual bond extends from the first virtual atom to the second virtual atom.

13. The method of claim 12, further comprising:

automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length (918).

14. The method of claim 12, wherein the augmented reality scene comprises a virtual molecule that comprises a plurality of virtual atoms, the plurality of virtual atoms comprising the first virtual atom, the second virtual atom, and a third virtual atom, the method further comprising:

automatically repositioning the plurality of virtual atoms of the virtual molecule into a molecular orientation based on a quantity of virtual atoms in the virtual molecule (922).

15. The method of claim 9, further comprising:

adding a plurality of virtual atoms to the augmented reality scene (904);

determining, based on second video data captured by the camera, that the user has selected a first virtual atom of the plurality of virtual atoms (906);

determining, based on third video data captured by the camera, that the user has selected a connect atoms icon of the plurality of icons from the menu (910);

automatically identifying a set of virtual atoms comprising all virtual atoms of the plurality of virtual atoms located within a predetermined range of the first virtual atom (912);

causing the display to highlight the set of virtual atoms by changing a color in which the display renders the set of virtual atoms;

determining, based on fourth video data captured by the camera, that the user has selected a second virtual atom of the set of virtual atoms (914);

automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom based on a first element of the first virtual atom and a second element of the second virtual atom (916);

automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length (918);

adding a virtual bond to the augmented reality scene to connect the first virtual atom to the second virtual atom, the virtual bond having a length equal to the identified bond length (920); and

automatically repositioning the plurality of virtual atoms based on a number of atoms of a molecule that comprises the first virtual atom and the second virtual atom (922).

Description:
SYSTEMS AND METHODS FOR APPLICATIONS OF AUGMENTED REALITY

CROSS-REFERENCE TO RELATED APPLICATIONS

[001] This application claims priority to U.S. Provisional Application No. 62/658,929 filed April 17, 2018.

BACKGROUND

[002] Mixed reality or augmented reality display devices, such as head-mounted display devices, may be used in a variety of real-world environments and contexts. Such devices provide a view of a physical environment that is augmented by a virtual environment including virtual images, such as two-dimensional virtual objects and three-dimensional holographic objects, and/or other virtual reality information. Such devices may also include various sensors for collecting data from the surrounding environment.

[003] An augmented reality device may display virtual images that are interspersed with real-world physical objects to create a mixed reality environment. A user of the device may desire to interact with a virtual or physical object using the mixed reality device. However, conventional augmented reality devices may not provide sufficiently interactive and specialized mixed reality environments.

BRIEF DESCRIPTION OF THE DRAWINGS

[004] FIG. 1 illustrates a system level block diagram for an augmented reality device, in accordance with an embodiment.

[005] FIG. 2 illustrates a system level block diagram for a system that includes networked augmented reality devices, in accordance with an embodiment.

[006] FIG. 3 shows an illustrative process flow that may be performed by an augmented reality device to start, progress through, and end an augmented reality application, in accordance with an embodiment.

[007] FIG. 4 shows an illustrative process flow for a method of generating a user-created virtual shape in an augmented reality scene based on virtual lines drawn by a user, in accordance with an embodiment.

[008] FIG. 5 shows an illustrative process flow for a method of automatically determining and displaying a dimensional measurement for a selected edge, surface, or three-dimensional shape in an augmented reality scene, in accordance with an embodiment.

[009] FIGS. 6A-6D show menus that include selectable icons corresponding to various interactions with an augmented reality scene and virtual objects contained therein, in accordance with an embodiment.

[0010] FIG. 7 shows an illustrative process for a method of changing the atomic element of a selected virtual atom in an augmented reality scene, in accordance with an embodiment.

[0011] FIG. 8 shows an illustrative process flow for a method of adding a virtual atom to an augmented reality scene, connected to a selected virtual atom via a virtual bond to create or modify a virtual molecule, in accordance with an embodiment.

[0012] FIG. 9 shows an illustrative process flow for a method of connecting two virtual atoms in an augmented reality scene to create or modify a virtual molecule, in accordance with an embodiment.

[0013] FIG. 10 shows illustrative examples of different molecular orientations for molecules having different numbers of constituent atoms, in accordance with an embodiment.

[0015] FIG. 11 shows illustrative examples of different molecular models that may be used to represent molecules, in accordance with an embodiment.

[0015] FIG. 12 shows illustrative examples of different molecular orientations for octahedral molecules having lone pairs in place of atomic bonds, in accordance with an embodiment.

[0016] FIG. 13 shows an illustrative process flow for a method of transferring control of a virtual object within an augmented reality scene between two networked augmented reality devices, in accordance with an embodiment.

DETAILED DESCRIPTION

[0017] The present invention will now be discussed in detail with regard to the attached drawing figures that were briefly described above. In the following description, numerous specific details are set forth illustrating the Applicant’s best mode for practicing the invention and enabling one of ordinary skill in the art to make and use the invention. It will be obvious, however, to one skilled in the art that the present invention may be practiced without many of these specific details. In other instances, well-known machines, structures, and method steps have not been described in particular detail in order to avoid unnecessarily obscuring the present invention. Unless otherwise indicated, like parts and method steps are referred to with like reference numerals.

[0018] Augmented reality (AR) devices and applications may be useful in the field of education. In particular, the ability to view and interact with three-dimensional representations of objects such as shapes, graphs, atoms, and molecules may help students to gain a better understanding of the properties of such objects, which may not be readily apparent when the information is presented using conventional two-dimensional methods. For example, chemistry AR applications may provide a student with the ability to manipulate virtual atoms to build virtual molecules in an AR scene, which may be more intuitive than traditional ball-and-stick examples. As another example, mathematics AR applications may provide a student with the ability to create three-dimensional shapes and graphs in an AR scene, which may not be achievable using the pen-and-paper representations traditionally used to teach mathematics and, in particular, geometry. In order to facilitate a collaborative educational environment, the sharing of virtual objects (e.g., virtual shapes, virtual molecules) may be selectively enabled in these AR applications via control transfer functions. As will be described, educational AR applications may enable the learning and visualization of various concepts across a wide variety of educational fields, including chemistry and mathematics.

[0019] FIG. 1 is a block diagram depicting functional components of an illustrative AR device 100 that may, for example, display a view of a physical scene that is augmented through the integration of a virtual scene that includes virtual images, such as two-dimensional (2D) and three-dimensional (3D) virtual objects. While shown as a block diagram here, the AR device 100 may be a smartphone, tablet device, head mounted display (HMD) device, smart glasses, or any other applicable portable electronic device as may be readily understood by one of ordinary skill. For examples in which the AR device 100 is an HMD device or smart glasses, the AR device 100 may be a holographic computer built into a headset having one or more semitransparent holographic lenses by which virtual objects may be displayed overlapping a physical scene (e.g., via the projection of light onto the lenses) so that a user perceives an augmented reality environment. For examples in which the AR device 100 is a smartphone or a tablet device, the AR device 100 may overlay captured video images of a physical scene with virtual objects to generate an augmented reality scene that may then be presented to a user via an electronic display. The AR device 100 may allow a user to see, hear, and interact with virtual objects displayed within a real-world environment such as a classroom, living room, or office space.

[0020] The AR device 100 may include a processor 102, a memory 104 that includes an AR component 110 and an operating system 106, input/output (I/O) devices 108, a network interface 112, cameras 120, a display device 130, and an accelerometer 140. Generally, the processor 102 may retrieve and execute programming instructions stored in the memory 104. The processor 102 may be a single CPU, multiple CPUs, a single CPU having multiple processing cores, GPUs having multiple execution paths, or any other applicable processing hardware and arrangement thereof. The memory 104 may represent both random access memory (RAM) and non-volatile memory (e.g., the non-volatile memory of one or more hard disk drives, solid state drives, flash memory, etc.). In some embodiments, the memory 104 may be considered to include memory physically located elsewhere (e.g., a computer in electronic communication with the AR device 100 via the network interface 112). The operating system 106 included in the memory 104 may control the execution of application programs on the AR device 100 (e.g., executed by the processor 102). The I/O devices 108 may include a variety of input and output devices, including displays, keyboards, touchscreens, buttons, switches, and any other applicable input or output device. The network interface 112 may enable the AR device 100 to connect to and transmit and receive data over an electronic data communications network (e.g., via Ethernet, WiFi, Bluetooth, or any other applicable electronic data communications network).

[0021] The cameras 120 may include an outward facing camera that may detect movements within its field of view, such as gesture-based inputs or other movements performed by a user or by a person or physical object within the field of view. The outward facing camera may also capture two-dimensional image information and depth information from a physical scene and physical objects within the physical scene. For example, the outward facing camera may include a depth camera, a visible light camera, an infrared light camera, and/or a position tracking camera. The cameras 120 may also include one or more user-facing cameras, which may, among other applications, track eye movement of the user. For example, the AR component 110 could use such a user-facing camera of the cameras 120 to determine which portion of the display device 130, and thereby which portion of the displayed AR scene, the user is looking at. Generally, the accelerometer 140 is a device capable of measuring the physical (or proper) acceleration of the AR device 100. For example, the AR component 110 may use the accelerometer 140 to determine when the position of the AR device 100 is changing in order to track the location of the AR device 100. The cameras 120 may capture video data of the physical scene, such that activity of the user may be monitored and analyzed to identify actions (e.g., gestures and other movements of the user) indicative of desired interactions between the user and virtual objects within the virtual scene of the AR scene shown on the AR device 100.

[0022] Generally, the AR component 110 is configured to adjust the depiction of the physical scene viewed on or through the display device 130 via the display of virtual objects of a virtual scene, thereby producing an AR scene. These virtual objects have locations and orientations within the physical scene that are defined based on a calibration that may be performed by the user. The collection of virtual components (e.g., the virtual objects, respective positions and orientations of the virtual objects, and optional state information for the virtual objects) in a given AR scene may be referred to herein as a "virtual scene." For instance, the AR component 110 may cause one or more virtual objects of a virtual scene to be displayed at one or more defined (e.g., predefined or defined by a user's action) locations within the physical scene to generate an AR scene. As the AR scene displayed via the display device 130 represents a three-dimensional space, the AR component 110 could determine an area of three-dimensional space for a given virtual object to occupy. Once a virtual object has been placed, the virtual object may be, for example, repositioned, rotated, resized (e.g., scaled up or down in size), or otherwise manipulated by a user. A database of virtual objects may be stored in the memory 104 (e.g., as 3D models). When a user inputs a command (e.g., using one or more detectable gestures) to add (e.g., import) a virtual object to a virtual scene, the AR component 110 may retrieve the virtual object from this database in the memory 104 and may place the virtual object in the virtual scene. In some embodiments, other information about each virtual object (e.g., color, the present frame of the virtual object in a video or animation sequence, etc.) may be defined in memory. As virtual objects are added or removed from a virtual scene or otherwise changed within the virtual scene, the display 130 may be periodically refreshed to show these changes.
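
The virtual-scene bookkeeping described in this paragraph may be pictured with a brief, non-limiting sketch. The class and method names below (VirtualObject, VirtualScene, import_object) are hypothetical and are not part of this disclosure; the sketch merely shows one way a database of stored models could back the import and refresh behavior described above.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualObject:
        # A virtual object has a position and orientation defined relative to the
        # calibrated physical scene, plus optional state (e.g., color, animation frame).
        name: str
        position: tuple = (0.0, 0.0, 0.0)
        rotation: tuple = (0.0, 0.0, 0.0)
        scale: float = 1.0
        state: dict = field(default_factory=dict)

    class VirtualScene:
        def __init__(self, model_database):
            # model_database stands in for the database of 3D models kept in the memory 104.
            self.model_database = model_database
            self.objects = []
            self.needs_refresh = False  # set whenever the display 130 must be refreshed

        def import_object(self, name, position):
            # Retrieve a model from the database and place it in the virtual scene,
            # e.g., in response to a detected user gesture.
            if name not in self.model_database:
                raise KeyError(f"no model named {name!r} in the database")
            obj = VirtualObject(name=name, position=position,
                                state=dict(self.model_database[name]))
            self.objects.append(obj)
            self.needs_refresh = True
            return obj

        def remove_object(self, obj):
            self.objects.remove(obj)
            self.needs_refresh = True

    # Example: importing a stored cube model into the scene.
    scene = VirtualScene(model_database={"cube": {"color": "white"}})
    cube = scene.import_object("cube", position=(0.0, 1.0, 2.0))
    print(cube.name, scene.needs_refresh)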

[0023] In some embodiments, the AR component 110 may be configured to initialize a single-user session, to host a network session to which other networked AR devices may connect, or to connect the AR device 100 to a network session hosted by another AR device or host server. As used herein, a "network session" refers to an instantiation of a virtual scene (e.g., instantiated at a host AR device or a host server), which may be loaded onto (e.g., streamed to) an AR device connected to the session (e.g., communicatively connected to a host of the session) to generate an AR scene that includes the virtual scene displayed over a physical scene. As used herein, a "single-user session" refers to an instantiation of a virtual scene on a single AR device that is not accessible by other AR devices that may otherwise be in communication with the single AR device over, for example, an electronic data communications network. Network sessions may be accessed by the AR device 100 via the network interface 112 and an electronic data communications network. As an example, a network session may be established by a separate host AR device or host server and may be joined by the AR device 100 and/or by additional networked AR devices. Alternatively, the AR device 100 may be configured to act as a host and may establish its own network session that is available for other AR devices to join via the electronic data communications network. The virtual scene of a network session may be loaded onto the AR device 100 such that the display 130 shows the same virtual scene as the other networked AR devices that have joined the network session.
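
By way of illustration only, the distinction between a network session and a single-user session might be modeled as in the following sketch. The names (NetworkSession, SingleUserSession, join) are hypothetical and are not defined by this disclosure.

    class NetworkSession:
        # One instantiation of a virtual scene that connected AR or VR devices share.
        def __init__(self, host_id):
            self.host_id = host_id
            self.participants = {host_id}
            self.virtual_scene = []  # virtual objects shared by all participants

        def join(self, device_id):
            # A networked device connects and receives the current virtual scene.
            self.participants.add(device_id)
            return list(self.virtual_scene)

        def leave(self, device_id):
            self.participants.discard(device_id)

    class SingleUserSession:
        # A virtual scene instantiated on a single device and not shared with others.
        def __init__(self, device_id):
            self.device_id = device_id
            self.virtual_scene = []

    # The AR device 100 may host its own session or join one hosted elsewhere.
    hosted = NetworkSession(host_id="ar-device-1")
    hosted.join("ar-device-2")
    print(sorted(hosted.participants))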

[0024] FIG. 2 shows a network architecture 200 which may enable multiple network sessions to be established between networked AR devices and virtual reality (VR) devices. A detailed example of an AR device is described above in connection with the AR device 100 of FIG. 1, and it should be understood that VR devices may operate similarly to AR devices in that VR devices display virtual objects of a virtual scene. However, while virtual objects are displayed over a physical scene with AR devices, VR devices display virtual objects in only a virtual scene without showing the physical environment. It therefore follows that VR devices may save and load virtual objects to and from a virtual scene similarly to how virtual objects are saved and loaded by AR devices, although it may not be necessary to calibrate the virtual scene of a VR device in the same way that the AR scene of an AR device must be calibrated. VR devices may interact with the content management system (CMS) databases described herein for the retrieval and storage of virtual objects in substantially the same way as AR devices, and may join and host network sessions managed by these CMS databases.

[0025] Two network sessions 204 and 208 may be established between AR devices 202-1 through 202-N and VR devices 206-1 through 206-M, respectively, where N represents the number of networked AR devices in the network session 204 and where M represents the number of networked VR devices in the network session 208. The networked AR and VR devices of each network session may communicate via one or more electronic data communications networks 210, which may include one or more of local area networks (LANs), wide area networks (WANs), the internet, or any other applicable network. For example, the network session 204 may be established by the AR device 202-1, which may act as a host device in establishing the network session 204. While network sessions 204 and 208 respectively include only AR devices and only VR devices, in some embodiments network sessions may be established that include both VR devices and AR devices.

[0026] Each of the network sessions 204 and 208 may be managed via one or more CMS databases 212. The CMS databases 212 may, for example, maintain a list of open network sessions which, in the present example, includes the network sessions 204 and 208. The list of open network sessions may further include, for a given network session (e.g., network session 204), identifying information (e.g., IP addresses, MAC addresses, device names, etc.) corresponding to networked AR and/or VR devices connected in the given network session (e.g., AR devices 202). A new network session may be added to the list of open network sessions maintained by the CMS databases 212 in response to a host AR or VR device initiating the new network session. An open network session may be removed from the list of open network sessions maintained by the CMS databases 212 in response to a host AR or VR device for the open network session closing the open network session. The CMS databases 212 may, for example, further include one or more databases of virtual objects that may be retrieved by an AR or VR device connected to the CMS databases 212 (e.g., by AR devices 202 or VR devices 206 connected to the CMS databases 212 via the networks 210) and loaded into the virtual scene running on the AR or VR device, or that may be downloaded to the memory of an AR device for future use in generating virtual scenes.
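
The open-session list maintained by the CMS databases 212 can be pictured with the following minimal sketch. The SessionRegistry interface is hypothetical; the disclosure only states that sessions are added when a host opens them, removed when the host closes them, and associated with identifying information for connected devices.

    class SessionRegistry:
        def __init__(self):
            self.open_sessions = {}  # session identifier -> set of device identifiers

        def open_session(self, session_id, host_info):
            # Called when a host AR or VR device initiates a new network session.
            self.open_sessions[session_id] = {host_info}

        def register_device(self, session_id, device_info):
            # device_info might be an IP address, MAC address, or device name.
            self.open_sessions[session_id].add(device_info)

        def close_session(self, session_id):
            # Called when the host closes the open network session.
            self.open_sessions.pop(session_id, None)

        def list_open_sessions(self):
            return sorted(self.open_sessions)

    registry = SessionRegistry()
    registry.open_session("session-204", host_info="192.0.2.1")
    registry.register_device("session-204", "192.0.2.2")
    print(registry.list_open_sessions())   # ['session-204']
    registry.close_session("session-204")
    print(registry.list_open_sessions())   # []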

[0027] FIG. 3 shows an illustrative process flow for a method 300 for starting, progressing through, and ending an augmented reality application with an AR device (e.g., the AR device 100 of FIG. 1). For example, the method 300 may be performed, at least in part, by executing, with a processor (e.g., the processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. While the method 300 is described in connection with an AR device, it should be readily understood that the method 300 may instead be performed using a VR device.

[0028] At step 302, an application may be started on the AR device. The application may be, for example, a software application stored in the memory of the AR device, which includes a database of 2D and/or 3D virtual objects (e.g., virtual shapes, virtual atoms and molecules, menus, control interfaces, etc.) designed for use with the application. In some embodiments, additional virtual objects may be downloaded to the memory of the AR device or otherwise accessed from one or more remote databases communicatively coupled to the AR device via an electronic data communications network.

[0029] At step 304, the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device. The prompt allows the user to select from a variety of displayed modes of operation for the AR device. These modes may include starting or joining a multi-user network session and starting a single-user session. The prompt may be shown on a display of the AR device, and the mode may be selected by the user via the user's performance of a detectable gesture. For example, the user may perform an air tap gesture in a location of one of the modes displayed in the prompt in the AR scene shown on the display of the AR device. The gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected and interpreted by the processor of the AR device.
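
Selecting a displayed mode with an air tap amounts to a hit test between the detected gesture location and the screen-space regions of the displayed options. The sketch below is illustrative only; the disclosure does not specify how gesture locations are matched to prompt options, and the function and region names are hypothetical.

    def hit_test(tap_point, option_regions):
        # Return the prompt option whose rectangle contains the detected air-tap
        # location, or None if the tap falls outside every option.
        x, y = tap_point
        for option, (left, top, right, bottom) in option_regions.items():
            if left <= x <= right and top <= y <= bottom:
                return option
        return None

    regions = {"multi_user_session": (0, 0, 100, 50), "single_user_session": (0, 60, 100, 110)}
    print(hit_test((40, 80), regions))  # -> "single_user_session"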

[0030] At step 306, the method 300 progresses based on which mode was selected in step 304. If the mode of starting a multi-user network session was selected, the method 300 progresses to step 308. If the mode of beginning a single-user session was selected, the method 300 progresses to step 320.

[0031] At step 308, in response to the selection of the mode of starting a multi-user network session, the AR device may generate and display a prompt to the user (e.g., by executing instructions on a processor of the AR device) in the AR scene shown on the display of the AR device. The prompt allows the user to select from the options of joining an existing network session or hosting a network session with the AR device of the user. The prompt may be shown on a display of the AR device, and the option may be selected by the user via the user's performance of a detectable gesture. For example, the user may perform an air tap gesture in a location of one of the "join" and "host" options displayed in the prompt in the AR scene shown on the display of the AR device. The gesture may be performed within the field of view of one or more cameras of the AR device so that the gesture may be detected (e.g., based on video data captured by one or more cameras of the AR device) and interpreted by the processor of the AR device.

[0032] At step 310, the method 300 progresses based on which option was selected in step 308. If the "host" option was selected, the method 300 progresses to step 314. If the "join" option was selected, the method 300 progresses to step 316.

[0033] At step 314, in response to the selection of the "host" option, the user's AR device initiates a network session as the host.

[0034] At step 316, in response to the selection of the "join" option, the AR device may generate and display a list of network sessions that are available to be joined by the AR device of the user. The user may then select a network session from the displayed list.

[0035] At step 318, the AR device of the user joins the selected network session.

[0036] At step 320, the AR scene displayed by the AR device may be calibrated. The locations and orientations of virtual objects in the AR scene will be defined based on this calibration.

[0037] At step 322, the AR device displays one or more menus in the AR scene. These menus provide multiple selectable options by which a user can add virtual objects to the AR scene or interact with virtual objects already in the AR scene. For example, user interactions may include, but are not limited to, repositioning a virtual object, rotating a virtual object, changing the color of a virtual object, selecting multiple virtual objects, combining multiple selected virtual objects together to create a new virtual object, spatially manipulating one or more portions of a virtual object, highlighting a virtual object, highlighting one or more portions of a virtual object, automatically placing a virtual label or virtual note on or adjacent to a virtual object or a portion of a virtual object, transferring control of a virtual object to the AR device of another user, requesting control of a virtual object from another user, and removing a virtual object from the AR scene. The menus may also provide options that do not directly pertain to virtual object manipulation, such as options for navigating between menus, closing the menu, closing the application, changing settings of the AR device itself, connecting to a network session, and disconnecting from a network session. It should be noted that, in some embodiments, step 322 may be optional, because some interactions with virtual objects may be performed directly in response to the AR device detecting (e.g., based on video data captured by one or more cameras of the AR device) a user gesture (e.g., defined in the memory of the AR device) corresponding to a particular interaction with a virtual object. In such cases, the user gesture may not require a menu for the particular interaction to be performed.
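
One convenient way to think about such a menu is as a mapping from icons to handler actions, as in the hypothetical sketch below (the handler names and the dictionary-based object representation are illustrative assumptions, not part of this disclosure).

    def make_menu_actions(scene):
        # Each key stands in for a selectable menu icon; selecting the icon invokes
        # the associated handler against a virtual object in the scene.
        return {
            "reposition": lambda obj, pos: obj.update(position=pos),
            "recolor": lambda obj, color: obj.update(color=color),
            "remove": lambda obj: scene.remove(obj),
            "transfer_control": lambda obj, device: obj.update(controller=device),
        }

    scene = [{"name": "cube", "position": (0, 0, 0), "color": "white", "controller": "ar-device-1"}]
    actions = make_menu_actions(scene)
    actions["recolor"](scene[0], "red")
    actions["transfer_control"](scene[0], "ar-device-2")  # hand control to another user's device
    print(scene[0])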

[0038] At step 324, in response to detected user actions (e.g., detected user gestures and/or detected interactions between the user and virtual objects or menus in the AR scene, which may be detected based on video data captured by one or more cameras of the AR device), the AR scene may be updated by the AR device based on the user actions. Additionally, analytics data may be collected at step 324 and saved to a memory device (e.g., the memory 104 of FIG. 1) of the augmented reality device(s). In some embodiments, this collection of analytics data may be performed continuously throughout the loop formed by steps 322, 324, and 326. Collected analytics data may, for example, include a log of user actions performed by each individual user in the network session. As another example, the collected analytics data may include a log of the direction in which a user is looking over time during the session (e.g., which, when the user is a student in a classroom setting, may be used as a measure for determining how well the student was paying attention to a lecture, demonstration, exercise, etc.). As another example, the collected analytics data may include information pertaining to how much time elapses between a user performing two or more pre-specified user actions (e.g., the elapsed time between a user adding virtual objects to an AR scene, the user arranging the virtual objects in the AR scene, and the user combining the virtual objects to form a new virtual object). As another example, collected analytics data may include audio data (e.g., collected by recording speech of the user of the AR device via a microphone of the AR device). While several examples of the type of analytics data that may be collected during the performance of the method 300 have been described, it should be understood that other applicable information may also be collected as part of this analytics data.

[0039] At step 326, the AR device determines whether the session has ended, indicating that the application should be closed. For example, the AR device may determine that a given network session has ended in response to the AR device being disconnected from the network session. As another example, the AR device may detect that a session has ended in response to the AR device detecting (e.g., based on video data captured by one or more cameras of the AR device) the performance of a user gesture corresponding to a command to end the session.
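
The analytics collection described at step 324 might be organized along the lines of the following sketch. The AnalyticsLog class and its methods are hypothetical and are shown only to illustrate logging user actions, gaze direction, and the elapsed time between pre-specified actions.

    import time

    class AnalyticsLog:
        def __init__(self):
            self.entries = []

        def log_action(self, user_id, action):
            # A log of user actions performed by each individual user in the session.
            self.entries.append({"t": time.time(), "user": user_id,
                                 "event": "action", "detail": action})

        def log_gaze(self, user_id, direction):
            # A log of the direction in which a user is looking over time.
            self.entries.append({"t": time.time(), "user": user_id,
                                 "event": "gaze", "detail": direction})

        def elapsed_between(self, first_action, second_action):
            # Time elapsed between two pre-specified user actions, if both were logged.
            times = {e["detail"]: e["t"] for e in self.entries if e["event"] == "action"}
            if first_action in times and second_action in times:
                return times[second_action] - times[first_action]
            return None

    log = AnalyticsLog()
    log.log_action("student-1", "add_virtual_object")
    log.log_action("student-1", "combine_virtual_objects")
    print(log.elapsed_between("add_virtual_object", "combine_virtual_objects"))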

[0040] In response to the AR device determining that the session has not ended, the method 300 returns to step 322, thereby creating a loop between steps 322, 324, and 326, where the loop is ended when the session is ended. In response to the AR device determining that the session has ended, the method 300 proceeds to step 328.

[0041] At step 328, the AR device may close the application upon ending the session.

[0042] The method 300 may be used as the foundational framework for the execution of a variety of AR applications by an AR device. Such AR applications may have applicability in the field of education. For example, educational AR applications may include AR applications individually corresponding to the teaching of mathematics, chemistry, history, or other applicable subjects. An illustrative mathematics AR application will now be described.

[0043] In order to more effectively learn the subject of mathematics, it may be helpful for students to be able to visualize 3D models of shapes and graphs. Augmented reality technology may enable this kind of 3D visualization for students in a classroom environment. A mathematics AR application may be executed by a processor of an AR device (e.g., the processor 102 of the AR device 100 of FIG. 1). The mathematics AR application may include multiple features relevant to the learning and visualization of various mathematical concepts.

[0044] As an example, the mathematics AR application may generate and display a 3D graph that includes a graphical 3D representation of a mathematical equation input by a user. This 3D graph may allow the user to visualize the mathematical equation in a way that would not be possible with traditional 2D graphs. In some embodiments, the 3D graph generated and displayed via the mathematics AR application may, for example, be generated based on one or more mathematical equations which may be input by a user (e.g., using a physical keyboard or a virtual keyboard instantiated in the AR environment displayed by the AR device running the mathematics AR application). In some embodiments, a user can select one or more points or planes within a displayed 3D graph and, upon selection, coordinates corresponding to the position of each point or plane in the 3D graph may be displayed and the selected point or plane may be highlighted. Secondary information, such as the distance between two points, the integral defining the space between two planes, the surface area or volume of some or all of the graphical representation of the mathematical equation, or any other secondary information attainable regarding selected points and/or planes of a graphical 3D representation of a mathematical equation, or regarding the representation itself, may also be automatically determined and displayed. The AR device may accurately establish vectors and vector points on a 3D graph (e.g., in a defined coordinate space, which may include Cartesian, polar, or spherical coordinate space) displayed by the AR device in the AR scene. The AR device may automatically calculate and display the magnitude, angle, and direction of a given vector or the magnitude of a given vector point in a 3D graph displayed by the AR device in the AR scene. A user may input coordinates (e.g., via a graphical user interface displayed by the AR device) and, in response, the AR device may draw a virtual line in a displayed 3D graph from a vertex of the 3D graph to a point in the 3D graph corresponding to the input coordinates. In some embodiments, the AR device may add (e.g., in response to a detected user command/gesture) one or more labels to such 2D or 3D graphs (e.g., along lines, points, surfaces, angles, vectors, etc.), such labels containing information defined by the user.
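
As a purely illustrative sketch of the graphing and vector features described above (the function names and the choice of a z = f(x, y) surface are assumptions, not requirements of this disclosure), a 3D graph can be produced by sampling a user-entered equation over a grid, and a vector's magnitude and direction follow directly from its components:

    import math

    def sample_surface(f, x_range, y_range, steps=10):
        # Evaluate z = f(x, y) over a grid so the surface can be rendered as a 3D graph.
        xs = [x_range[0] + i * (x_range[1] - x_range[0]) / steps for i in range(steps + 1)]
        ys = [y_range[0] + j * (y_range[1] - y_range[0]) / steps for j in range(steps + 1)]
        return [(x, y, f(x, y)) for x in xs for y in ys]

    def vector_magnitude_and_direction(v):
        # Magnitude and unit direction of a vector established on the 3D graph.
        magnitude = math.sqrt(sum(c * c for c in v))
        direction = tuple(c / magnitude for c in v) if magnitude else (0.0, 0.0, 0.0)
        return magnitude, direction

    points = sample_surface(lambda x, y: x ** 2 + y ** 2, (-1, 1), (-1, 1), steps=4)
    print(len(points))                                       # 25 sampled points
    print(vector_magnitude_and_direction((3.0, 4.0, 0.0)))   # (5.0, (0.6, 0.8, 0.0))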

[0045] As another example, the mathematics AR application may allow a user to import and place a 3D shape into the AR scene using the AR device. The AR device may be operated in a "Nets Mode" (e.g., in response to the user selecting the "Nets Mode" from a menu displayed by the AR device) in which the user may select a face of the 3D shape, and the AR device may "unfold" the selected face of the shape. The user may select the unfolded face to reverse this action, restoring the 3D shape to its original form. The user may instead select additional faces of the 3D shape to continue unfolding these faces. Once all faces of the 3D shape have been unfolded, a 2D "net" will remain, a "net" here referring to a 2D representation of a 3D shape which may be folded into the form of that 3D shape.

[0046] As another example, the mathematics AR application may allow a user to import a 2D net into the AR scene. Here, a 2D net refers to a segmented 2D shape having multiple segments separated by joints, which, when correctly folded along its joints, creates a 3D shape. A user may manipulate the 2D net (e.g., by folding the 2D net along its joints) to form a corresponding 3D shape. Once a user correctly folds the 2D net into a 3D shape, for example, a virtual label may be automatically generated and displayed, the virtual label including the proper name for the 3D shape. For example, a user may fold a 2D net into a 3D shape having four surfaces connected at respective corners at a single point at one end and connected at respective edges at a fifth, square surface at the other end. Once the 2D net has been correctly folded by the user, a virtual label may be automatically displayed in the AR scene containing the term "Square-based Pyramid."

[0047] As another example, the mathematics AR application may allow a user to draw multiple lines in an AR scene displayed on an AR device of the user. The drawn lines may then be joined together (e.g., in response to a user gesture/command, or automatically in response to the AR device detecting, based on video data captured by one or more cameras of the AR device, that two or more drawn lines have intersected) to create a 3D or 2D shape in the AR scene. FIG. 4 shows an illustrative process flow for a method 400 for creating a 2D or 3D shape based on lines drawn by a user in an AR scene with an AR device (e.g., the AR device 100 of FIG. 1). For example, the method 400 may be performed, at least in part, by executing, with a processor (e.g., the processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 400 may, for example, be performed during steps 322-326 of the method 300 of FIG. 3. While the method 400 is described in connection with an AR device, it should be readily understood that the method 400 may instead be performed using a VR device.

[0048] At step 402, the method 400 starts. For example, the method 400 may start in response to the selection of a "3D-shape creation" mode by a user from a menu displayed by the AR device.

[0049] At step 404, virtual lines may be generated in the AR scene in response to the virtual lines being drawn by the user. For example, in order to draw a new virtual line, the user may select a "new line" option from a menu displayed by the AR device. Virtual lines may be removed from the AR scene in response to the user selecting a virtual line and selecting a "delete line" option from a menu displayed by the AR device. It should be noted that, in some embodiments, the generation of virtual lines may be enabled automatically by the AR device, without the need for user confirmation via a menu.

[0050] At step 406, the virtual lines drawn by the user may be automatically joined by the AR device and these virtual lines, along with a space enclosed thereby, may define a new user-created virtual shape. In other words, the virtual lines may define the boundaries of the new user-created virtual shape that is generated by the AR device. For example, in order to join the virtual lines to create a new virtual shape, the user may select a "shape complete" option from a menu displayed by the AR device. In some embodiments, opaque surfaces may be automatically generated and displayed for the new user-created virtual shape, which may allow a user to verify that the joined virtual lines have created a virtual shape that is acceptable to the user. In some embodiments, the user may be provided with options (e.g., via a menu displayed by the AR device) to rotate, extrude, stretch, trim, fillet, extend, or perform other actions on the user-created virtual shape, which may allow the user the ability to further customize the user-created virtual shape.
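
One simple test for whether drawn virtual lines can be joined into a closed boundary is to check that every line endpoint coincides with exactly one other endpoint. The sketch below is an assumption offered for illustration; the disclosure does not prescribe a particular joining test.

    from collections import Counter

    def forms_closed_boundary(lines, tolerance=1e-6):
        # lines is a list of (start_point, end_point) pairs for the drawn virtual lines.
        def key(point):
            # Snap endpoints to a grid so nearly coincident points count as joined.
            return tuple(round(c / tolerance) for c in point)

        counts = Counter()
        for start, end in lines:
            counts[key(start)] += 1
            counts[key(end)] += 1
        return len(counts) > 0 and all(c == 2 for c in counts.values())

    triangle = [((0, 0), (1, 0)), ((1, 0), (0, 1)), ((0, 1), (0, 0))]
    open_path = [((0, 0), (1, 0)), ((1, 0), (0, 1))]
    print(forms_closed_boundary(triangle))   # True: the lines enclose a shape
    print(forms_closed_boundary(open_path))  # False: the boundary is not closed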

[0051] At step 408, the AR device may generate and display a prompt providing the user with the options to "accept" the user-created virtual shape, or "discard" the user-created virtual shape.

[0052] At step 410, the AR device detects (e.g., based on video data captured by one or more cameras of the AR device) whether the user selected the "accept" option or the "discard" option at step 408. If the user chose the "discard" option, the method 400 proceeds to step 412. If the user chose the "accept" option, the method 400 proceeds to step 414.

[0053] At step 412, in response to the user choosing to discard the user-created virtual shape, the AR device removes the user-created virtual shape from the AR scene and, optionally, from the memory of the AR device.

[0054] At step 414, in response to the user choosing to accept the user-created virtual shape, the AR device allows the user to place the user-created virtual shape at a location in the AR scene displayed by the AR device. In some embodiments, the AR device may cause a prompt or menu to be displayed to the user, providing the user with the option to scale the user-created virtual shape up or down in size (e.g., by the user performing one or more gestures interpreted by the AR device as corresponding to a command to scale the user-created virtual shape up in size or down in size) by a defined amount after the user-created virtual shape has been placed. In response to a detected gesture of the user, the user-created virtual shape may then be scaled up or down in size by the defined amount by the AR device. In some embodiments, the AR device may cause a prompt or menu to be displayed to the user, providing the user with the option to add a virtual label containing user-defined text to the AR scene adjacent to the user-created virtual shape after the user-created virtual shape has been placed. In response to a detected gesture of the user, the AR device may cause the virtual label to be added to the AR scene adjacent to the user-created virtual shape.

[0055] As another example, the mathematics AR application may allow a user to select and highlight some or all portions of a 3D or 2D shape in an AR scene and may subsequently generate and display a corresponding measurement of the highlighted portions. FIG. 5 shows an illustrative process flow for a method 500 for selecting, highlighting, and measuring portions of a 2D or 3D shape in an AR scene with an AR device (e.g., the AR device 100 of FIG. 1). For example, the method 500 may be performed, at least in part, by executing, with a processor (e.g., the processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 500 may, for example, be performed during steps 322-326 of the method 300 of FIG. 3. While the method 500 is described in connection with an AR device, it should be readily understood that the method 500 may instead be performed using a VR device.

[0056] At step 502, the method 500 starts. For example, the method 500 may start in response to the selection of a "measurement" mode by a user from a menu displayed by the AR device. In some embodiments, different "measurement" modes may be selectable from the menu, such as a "length measurement" mode in which the length of a selected line or edge of a virtual shape in the AR scene may be calculated and displayed by the AR device, an "angle measurement" mode in which the angle of a selected vertex (e.g., the meeting point of two lines that form an angle) of a virtual shape in the AR scene may be calculated and displayed by the display of the AR device, a "surface area measurement" mode in which the surface area of a selected surface of a virtual shape in the AR scene may be calculated and displayed by the AR device, and a "volume measurement" mode in which the volume of a selected 3D virtual shape in the AR scene may be calculated and displayed by the AR device.

[0057] At step 504, the AR device may highlight an edge or surface of a 3D or 2D shape, or the entirety of a 3D shape selected by a user (e.g., by causing the display to render the edge, surface, or shape in a different color than the color previously displayed, such as a comparatively brighter color). As an example, the AR device may highlight a given edge, vertex, surface, or shape in response to detecting (e.g., based on video data captured by one or more cameras of the AR device) that a user has performed a user gesture corresponding to the selection of that edge, vertex, surface, or shape (e.g., while the user is operating the AR device in one of the "measurement" modes described above). The user may provide confirmation that the highlighted edge, vertex, surface, or shape is what the user intends for the AR device to measure (e.g., via interaction with a prompt generated and displayed by the AR device). In some alternate embodiments, the AR device may automatically highlight an edge, vertex, surface, or shape that is determined to overlap with a reticle displayed in the center of the display of the AR device (e.g., corresponding to where the user is looking).

[0058] At step 506, the AR device may determine a length, angle, surface area, or volume of the highlighted edge, vertex, surface, or 3D shape. For example, the length, angle, surface area, or volume may be calculated by the AR device for the edge, vertex, surface, or 3D shape relative to the scale at which the edge, vertex, surface, or 3D shape is displayed in the AR scene. In other words, for a given edge of a virtual shape displayed in the AR environment, the length of the edge calculated by the AR device should correspond (within a reasonable tolerance) to the length of the edge that would be determined by a user by measuring the edge with a physical ruler. If the scale of the virtual shape is increased or decreased, the value of the length measurement that would be determined by the AR device would increase or decrease in a corresponding fashion.
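
The relationship between a measured value and the displayed scale can be captured in a short sketch (the function name and scale parameter are hypothetical; the point is only that the reported length tracks the scale at which the shape is shown):

    import math

    def edge_length(p1, p2, scale=1.0):
        # Length of a virtual edge, reported at the scale at which the shape
        # is currently displayed in the AR scene.
        return math.dist(p1, p2) * scale

    edge = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.5))
    print(edge_length(*edge))             # 0.5 at the original scale
    print(edge_length(*edge, scale=2.0))  # 1.0 after the shape is scaled up by a factor of two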

[0059] At step 508, the AR device may generate and display the length, angle, surface area, or volume determined in step 506 in the AR scene. As an example, the AR device may automatically create a virtual label (e.g., as a virtual object) that includes the determined length, angle, surface area, or volume. This virtual label may be automatically placed in the AR scene by the AR device on or adjacent to the corresponding edge, vertex, surface, or 3D object. Alternatively, the user may have the option to place the created virtual label at a user-selected location. In this way, a user may be made aware of the determined length, angle, surface area, or volume via the created virtual label. In some embodiments, the user may create custom labels, which the AR device may add to the AR scene (e.g., at selected surfaces, edges, or corners of 2D or 3D shapes in the AR scene).

[0060] As described previously, educational AR applications may not be limited to the teaching of mathematics, but may be applicable to a variety of other subjects. An illustrative chemistry AR application will now be described.

[0061] In order to more effectively learn the subject of chemistry, it may be helpful for students to be able to visualize 3D models of atoms and molecules. AR technology may enable this kind of 3D visualization for students in a classroom environment. A chemistry AR application may be executed by a processor of an AR device (e.g., the processor 102 of the AR device 100 of FIG. 1). The chemistry AR application may include multiple features relevant to the learning and visualization of various chemistry-related concepts.

[0062] As an example, the chemistry AR application may include one or more menus that may be accessed by the user of an AR device (e.g., in response to a detected, predefined user gesture, which may be detected based on video data captured by one or more cameras of the AR device). These menus provide the user with tools and functions that may allow the user to build a virtual molecule in an AR scene using an AR device, and that may enable the modification of the atoms and bonds being used to build the virtual molecule. Examples of menus that may be accessed by a user of the chemistry AR application are shown in FIGS. 6A-6D. It should be noted that, in some embodiments, the menus of FIGS. 6A-6D may instead be displayed in a chemistry VR application executed by a VR device.

[0063] FIG. 6A shows a main menu 600 through which a user may access various functions of the chemistry AR application (e.g., by selecting any of the icons 602-622 displayed as part of the main menu 600), which may allow the user to build a virtual molecule in an AR scene depicted by an AR device (e.g., the AR device 100 shown in FIG. 1 ) operated by the user. In some embodiments, the main menu 600 may be displayed by the AR device in response to the user selecting a virtual atom in the AR scene.

[0064] The icon 602 corresponds to a "connect atoms" function. For example, a user may select a first virtual atom in the AR scene, in response to which the AR device may automatically display the main menu 600. The user may then select the icon 602 and may subsequently select a second virtual atom in the AR scene to which the first atom will be connected in order to have the AR device automatically generate and display a virtual bond connecting the first and second virtual atoms. In some embodiments, when the AR device generates the virtual bond between the first and second virtual atoms, the AR device may display a secondary menu 640, shown in FIG. 6C. The secondary menu 640 may include icons 642-650 and may allow a user to select a bond type for the virtual bond generated between the two virtual atoms. In some embodiments, the secondary menu 640 may be shown automatically in response to a user selecting a virtual bond that is already present in the AR scene, so that the user may alter the bond type of that virtual bond. The icon 644, when selected, changes the bond type of the virtual bond to a single bond. The icon 646, when selected, changes the bond type of the virtual bond to a double bond. The icon 648, when selected, changes the bond type of the virtual bond to a triple bond. The icon 650, when selected, changes the bond type of the virtual bond to a hybrid bond. In some embodiments, additional icons may be presented in the secondary menu 640. As an example, an icon (not shown) may be included in the secondary menu 640 that, when selected, toggles the bond type of the virtual bond between a covalent bond (e.g., represented as one or more solid lines) and an ionic bond (e.g., represented as one or more pairs of virtual electrons on one atom and a positive charge of corresponding magnitude on the other atom). As another example, an icon (not shown) may be included in the menu 640 that toggles the display of p-orbitals and pi-bonds for molecules containing double or triple bonds. The icon 642, when selected, closes the secondary menu 640.

[0065] Returning to the main menu 600 of FIG. 6A, the icon 604 corresponds to an "adjust angle" function. For example, a user may select a first virtual atom and, in response, the AR device may display the main menu 600. The user may then select the icon 604 and may subsequently select or otherwise identify two or more virtual atoms which are serially attached to the first virtual atom selected. The user may then perform gestures to increase, decrease, or rotate the bond angle or torsion angle relating the selected group of atoms.

[0066] The icon 606 corresponds to a "lone pair" function. For example, a user may select a virtual atom (e.g., of a virtual molecule) and, in response, the AR device may display the main menu 600. The user may then select the icon 606 in order to add a lone pair to the virtual atom. Here, a "lone pair" refers to a visual representation (e.g., model) of a pair of valence electrons of the virtual atom that are not shared with another atom. In some embodiments, adding this lone pair to the virtual atom may cause the AR device to automatically rearrange the structure/geometry of the virtual molecule, as will be described in more detail in connection with FIG. 11.

[0067] The icon 608 corresponds to a "delete" function. For example, a user may select a virtual object (e.g., a virtual atom, a virtual bond, or an entire virtual molecule) and, in response, the AR device may display the main menu 600. The user may then select the icon 608 in order to delete the selected virtual object, thereby removing the selected virtual object from the AR scene. In some embodiments, when a virtual atom is deleted and that virtual atom was part of a virtual molecule, any virtual bonds connected to that virtual atom may also be automatically deleted by the AR device, regardless of whether those bonds were selected by the user.

[0068] The icon 610 corresponds to an "add atom" function. For example, a user may select a first virtual atom and, in response, the AR device may display the main menu 600. The user may then select the icon 610 and, in response, the AR device may display a secondary menu 660, shown in FIG. 6D. The secondary menu 660 may include a variety of elements to select from in icons 664, and may also include an option to select an element from a periodic table of elements by selecting the icon 662. In some embodiments, the different elements represented in the icons 664 may be color coded, with different colors corresponding to different elements. In some embodiments, each of the icons 664 may include a label identifying the element represented by that icon. The user may select a type (e.g., element) of virtual atom to add to the AR scene from the icons 664 of the menu 660, or from the periodic table displayed in response to the user's selection of the icon 662. In response to the user's selection of an element, a second virtual atom corresponding to the selected element may be added to the AR scene by the AR device, and a bond may automatically be added to the AR scene connecting the first virtual atom to the second virtual atom.

[0069] In an alternate embodiment (e.g., in which the user desires to add a virtual atom to the AR scene without connecting that virtual atom to another virtual atom), the user may perform a gesture to bring up the menu 600 and may select the icon 610 and, in response, the menu 660 shown in FIG. 6D may be displayed by the AR device. The user may select a type (e.g., element) of virtual atom to add to the AR scene from the icons 664 of the secondary menu 660, or from the periodic table displayed in response to the user's selection of the icon 662. In response to the user's selection of an element, a virtual atom corresponding to the selected element is added to the AR scene at a user-selected location by the AR device.

[0070] The icon 612 corresponds to a "change atom" function. For example, a user may select a virtual atom that is already present in the AR scene displayed by the AR device and, in response, the AR device may display the menu 600. The user may then select the icon 612 and, in response, the AR device may display the secondary menu 660 from which the user may select an element to replace the element of the selected virtual atom. In response to the user's selection of the new element, the selected virtual atom may be replaced with a new virtual atom corresponding to the selected element. The new virtual atom may retain any virtual bonds previously connected to the originally selected virtual atom.

[0071] The icon 614, when selected, closes the main menu 600.

[0072] The icon 616 corresponds to a "highlight cursor" mode, which, when selected, causes the AR device to highlight any virtual atom or virtual bond that is overlapped by a cursor (e.g., reticle) shown on the display of the AR device. This "highlight cursor" mode may be used, for example, when a teacher or presenter wants to draw the attention of students or other users to particular virtual atoms or virtual bonds.

[0073] The icon 618 corresponds to a "laser gaze" mode, which, when selected, causes a "laser" object to be added to the AR scene that extends in a straight line from a front surface of the AR device. The purpose of this laser object may be to allow other users of other AR devices operating in the same network session as the user of the AR device to be able to determine the direction in which the user of the AR device is looking, as the position and orientation of the laser object will correspond to this direction.

[0074] The icon 620 corresponds to a "molecular model selection" function which, when selected, causes the secondary menu 630 shown in FIG. 6B to be displayed by the AR device. The secondary menu 630 includes icons 632-638. The icon 634, when selected, causes virtual molecules in the AR scene to be displayed in a standard "stick-and-ball" model (e.g., in which virtual atom radii are depicted based on attenuated scaled values). The icon 636, when selected, causes virtual molecules in the AR scene to be displayed in a "covalent model" (e.g., where virtual atom radii are depicted according to their approximate covalent radii) to give a better sense of the relative size of the virtual atoms. The icon 638, when selected, causes virtual molecules in the AR scene to be displayed in a "space-filling" model (e.g., in which atom radii are depicted based on van der Waals radii) where virtual atoms are shown as solid spheres (sometimes overlapping) to suggest the space occupied by the virtual atoms. In some embodiments, additional icons (not shown) may be included in the secondary menu 630 which, when selected, may cause virtual molecules in the AR scene to be displayed in another applicable model, such as a "line" model, a "stick" model, or a "wireframe" model. The icon 632, when selected, closes the secondary menu 630. Examples of some of the molecular models that may be displayed in this way are shown and described in connection with FIG. 10, below.
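The radius used to draw each virtual atom could simply be a function of the active display model. The sketch below uses a small, approximate table of covalent and van der Waals radii (in angstroms) and an arbitrary attenuation factor for the stick-and-ball model; the mode names and the attenuation factor are assumptions for illustration, not values taken from the application:

    # Approximate radii in angstroms for a few elements (illustrative subset only).
    COVALENT_RADIUS = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}
    VAN_DER_WAALS_RADIUS = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52}

    def display_radius(element, model):
        # Radius used to render a virtual atom under the selected molecular model.
        if model == "stick_and_ball":
            return 0.4 * COVALENT_RADIUS[element]   # attenuated so bonds remain visible
        if model == "covalent":
            return COVALENT_RADIUS[element]
        if model == "space_filling":
            return VAN_DER_WAALS_RADIUS[element]
        raise ValueError("unknown molecular model: " + model)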

[0075] The icon 622, when selected, resets the AR scene, removing all virtual atoms, bonds, and molecules present in the AR scene. In some embodiments, a single "starter" virtual atom may remain in the AR scene when this reset function is carried out. In some embodiments, selection of the icon 622 by the user may only result in the removal of those virtual atoms, bonds, and molecules in the AR scene that are "owned" by the user of the AR device (e.g., that were added to the AR scene by the user or that were transferred to the user of the AR device by the user of another AR device via a transfer of ownership).

[0076] As an example, the chemistry AR application may be used to change the atomic element corresponding to a selected virtual atom in the AR scene displayed by an AR device executing the chemistry AR application. FIG. 7 shows an illustrative process flow for a method 700 for changing the atomic element of a virtual atom in an AR scene with an AR device (e.g., the AR device 100 shown in FIG. 1). For example, the method 700 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 700 may, for example, be performed during steps 322-326 of method 300 of FIG. 3. While method 700 is described in connection with an AR device, it should be readily understood that method 700 may instead be performed using a VR device.

[0077] At step 702, the method 700 starts. In some embodiments, the method 700 may begin at any point following the calibration of the AR scene.

[0078] At step 704, the AR device may place a virtual atom in the AR scene. In some embodiments, the AR device may automatically prompt the user to place a virtual atom in the AR scene at the beginning of the session, immediately following calibration of the AR scene. In another embodiment, the AR device may automatically place the virtual atom at a predefined location in the AR scene, rather than prompting the user to place the virtual atom. In another embodiment, the placement of the virtual atom may be performed as part of the loading of an AR scene or in response to a user command (e.g., via interaction with the main menu 600 of FIG. 6A).

[0079] At step 706, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of the virtual atom by the user.

[0080] At step 708, in response to detecting (e.g., based on video data captured by one or more cameras of the AR device) the selection of the virtual atom by the user, the AR device may automatically display a first menu (e.g., menu 600 of FIG. 6A) that includes selectable icons (e.g., icons 602-620 of FIG. 6A) corresponding to various functions that may be performed by the AR device.

[0081] At step 710, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of an icon (e.g., icon 612 of FIG. 6A) from the first menu by the user, the icon corresponding to a "change atom" function, which allows the user to change the atomic element presently corresponding to the virtual atom to a new atomic element.

[0082] At step 712, the AR device may display a second menu (e.g., menu 660 of FIG. 6D) that includes multiple atomic elements from which the user may select. In some embodiments, the user may bring up a periodic table via interaction with the second menu, and an atomic element may be selected from this periodic table.

[0083] At step 714, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of an atomic element by the user.

[0084] At step 716, the AR device may automatically update the characteristics of the virtual atom to correspond to the selected atomic element while simultaneously altering the bond lengths, angles, and orientations of any virtual bonds connected to the virtual atom. In some embodiments, the geometric arrangement of any molecule of which the virtual atom may be a part may be modified according to the updated characteristics of the virtual atom.

[0085] As an example, the chemistry AR application may be used to "build" a virtual molecule in an AR scene displayed by an AR device executing the chemistry AR application by connecting virtual atoms added to the AR scene together via virtual bonds. FIG. 8 shows an illustrative process flow for a method 800 for building a virtual molecule in an AR scene with an AR device (e.g., the AR device 100 shown in FIG. 1). For example, the method 800 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 800 may, for example, be performed during steps 322-326 of method 300 of FIG. 3. While method 800 is described in connection with an AR device, it should be readily understood that method 800 may instead be performed using a VR device.

[0086] At step 802, the method 800 starts. For example, the method 800 may begin at any point following the calibration of the AR scene, subsequent to the placement of one or more virtual atoms in the AR scene.

[0087] At step 804, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of a first virtual atom by a user.

[0088] At step 806, in response to detecting (e.g., based on video data captured by one or more cameras of the AR device) the selection of the first virtual atom by the user, the AR device may automatically display a first menu (e.g., menu 600 of FIG. 6A) that includes selectable icons (e.g., icons 602-620 of FIG. 6A) corresponding to various functions that may be performed by the AR device.

[0089] At step 808, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of an icon (e.g., icon 610 of FIG. 6A) from the first menu by the user, the icon corresponding to an "add atom" function, which allows the user to add a new (second) virtual atom to the first virtual atom (e.g., via a virtual bond) in order to create a new virtual molecule or to modify an existing virtual molecule.

[0090] At step 810, the AR device may display a second menu (e.g., menu 660 of FIG. 6D) that includes multiple atomic elements from which the user may select. In some embodiments, the user may bring up a periodic table via interaction with the second menu, and an atomic element may be selected from this periodic table.

[0091] At step 812, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of an atomic element by the user.

[0092] At step 814, the AR device may automatically identify a bond length needed to connect the first virtual atom to a second virtual atom corresponding to the selected atomic element, to be added to the AR scene in a subsequent step. In some embodiments, the identified bond length may be based on an approximation of the length of a real-world bond between first and second real-world atoms, where the atomic elements of the first and second real-world atoms correspond to the atomic elements of the first virtual atom and the second virtual atom. The approximation of the bond length may correspond to the sum of the average empirical covalent radii of the atomic element of the first virtual atom and that of the second virtual atom. While in some embodiments the bond length identified by the AR device may automatically correspond to the length of a single bond, in other embodiments the user may be provided with the opportunity to select a bond type (e.g., single, double, triple, or hybrid bond) before the bond length is identified by the AR device. In such examples, for bond types other than single bonds, the bond length may be approximated by scaling the single-bond length calculated from the empirical average of the covalent radii of the atomic elements of the first and second virtual atoms.
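One hedged sketch of this approximation: the single-bond length is taken as the sum of the two elements' covalent radii, and multiple bonds are shortened by a fixed factor. The radii below are approximate literature values and the scale factors are illustrative placeholders, not figures taken from the application; an implementation would substitute whatever tabulated values the application actually uses.

    # Approximate single-bond covalent radii in angstroms (illustrative subset only).
    COVALENT_RADIUS = {"H": 0.31, "C": 0.76, "N": 0.71, "O": 0.66}

    # Placeholder shortening factors for higher bond orders (assumed values).
    BOND_ORDER_SCALE = {"single": 1.00, "double": 0.87, "triple": 0.78, "hybrid": 0.93}

    def approximate_bond_length(element_a, element_b, bond_type="single"):
        # Sum of covalent radii, scaled down for double, triple, or hybrid bonds.
        return BOND_ORDER_SCALE[bond_type] * (COVALENT_RADIUS[element_a] + COVALENT_RADIUS[element_b])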

[0093] In another embodiment, the identified bond length may be based solely on the distance between the first virtual atom and the second virtual atom in the AR scene. In other embodiments, the identified bond length may be determined by the AR device based on the actual bond length of a real-world bond corresponding to the virtual bond being formed. For example, the identified bond-length may be a scaled representation of the real-world bond length. Table 1 shows an example of various bond types and their corresponding bond lengths.

Table 1

[0094] In the present example, the AR device may access a look-up table (LUT) in the memory of the AR device to identify the appropriate bond length for a virtual bond of a given type between the first virtual atom and the second virtual atom.
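Since the contents of Table 1 are not reproduced above, the look-up can only be sketched here: a small dictionary keyed on the (unordered) element pair and bond type, falling back to the covalent-radius approximation from the previous sketch when a pair is not tabulated. The entries shown are approximate textbook values, not the values of Table 1:

    # Approximate textbook bond lengths in angstroms (illustrative; not the contents of Table 1).
    BOND_LENGTH_LUT = {
        (frozenset({"C", "H"}), "single"): 1.09,
        (frozenset({"C", "C"}), "single"): 1.54,
        (frozenset({"C", "C"}), "double"): 1.34,
        (frozenset({"C", "C"}), "triple"): 1.20,
        (frozenset({"O", "H"}), "single"): 0.96,
        (frozenset({"N", "H"}), "single"): 1.01,
    }

    def lookup_bond_length(element_a, element_b, bond_type="single"):
        # Prefer the table entry; fall back to approximate_bond_length (defined in the
        # previous sketch) when the element pair and bond type are not tabulated.
        key = (frozenset({element_a, element_b}), bond_type)
        if key in BOND_LENGTH_LUT:
            return BOND_LENGTH_LUT[key]
        return approximate_bond_length(element_a, element_b, bond_type)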

[0095] At step 816, the AR device may add the second virtual atom to the AR scene. The second virtual atom may correspond to the selected atomic element and may be a sphere having one or more defined radii specific to that atomic element, where the radius of the second virtual atom may be further defined based on the "molecular model" display mode in which the AR device is operating. The second virtual atom may be added to the AR scene at a position having a distance from the first virtual atom that is equal to the identified bond length.

[0096] At step 818, the AR device may connect the first virtual atom to the second virtual atom with a virtual bond to create a virtual molecule or to modify an existing virtual molecule. For example, the AR device may add the virtual bond to the AR scene, with the length of the virtual bond being set by the AR device to the identified bond length. In some instances, the first virtual atom may be part of a pre-existing virtual molecule. The user may be presented with the option to choose a bond-type (e.g., single bond, double bond, triple bond, hybrid bond via secondary menu 640 of FIG. 6C) for the virtual bond before or at the time the virtual bond is generated. In some embodiments, the number of virtual bonds that may be added to the first virtual atom may be limited by the number of free electrons that are available in real-world atoms of the element represented by the first virtual atom. In some embodiments, the angle and orientation of the second virtual atom and the virtual bond with respect to the virtual molecule may be automatically determined and set by the AR device according to an appropriate molecular geometry for that virtual molecule. It should be noted that steps 816 and 818 may be performed effectively simultaneously by the AR device in some embodiments.
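Steps 816 and 818 could be sketched as placing the new atom at exactly the identified bond length from the first atom along a chosen direction and then recording the bond. The direction argument and the simple dictionary return values are assumptions of the sketch, not the disclosed data structures:

    import numpy as np

    def add_bonded_atom(first_position, direction, bond_length, element):
        # Position the new (second) virtual atom so that the center-to-center distance
        # from the first atom equals the identified bond length, and describe the bond.
        direction = np.asarray(direction, dtype=float)
        direction /= np.linalg.norm(direction)
        second_position = np.asarray(first_position, dtype=float) + bond_length * direction
        bond = {"from": tuple(first_position), "to": tuple(second_position), "length": bond_length}
        return {"element": element, "position": second_position}, bond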

[0097] At step 820, after adding the virtual bond to the AR scene, the AR device may optionally and automatically reposition the virtual atoms and any virtual bonds of the newly created or modified virtual molecule into an orientation corresponding to the total number of virtual atoms in that virtual molecule.

[0098] It should be understood that method 800 may be performed repeatedly as a user builds a virtual molecule.

[0099] As an example, the chemistry AR application may be used to "connect" virtual atoms present in an AR scene displayed by an AR device executing the chemistry AR application via virtual bonds. FIG. 9 shows an illustrative process flow for a method 900 for building a virtual molecule in an AR scene with an AR device (e.g., the AR device 100 shown in FIG. 1). For example, the method 900 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 900 may, for example, be performed during steps 322-326 of method 300 of FIG. 3. While method 900 is described in connection with an AR device, it should be readily understood that method 900 may instead be performed using a VR device.

[00100] At step 902, the method 900 starts. For example, the method 900 may begin at any point following the calibration of the AR scene.

[00101] At step 904, the AR device may place two or more virtual atoms in the AR scene. In an embodiment, the placement of the virtual atoms may be performed as part of the loading of an AR scene or in response to a user command (e.g., via interaction with the main menu 600 of FIG. 6A). In another embodiment, the placement of the virtual atoms may occur over the course of the user's interaction with the AR scene via the AR device (e.g., as the AR device adds virtual atoms and builds molecules according to gestures and commands of the user).

[00102] At step 906, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of a first virtual atom by a user.

[00103] At step 908, in response to detecting (e.g., based on video data captured by one or more cameras of the AR device) the selection of the first virtual atom by the user, the AR device may automatically display a first menu (e.g., menu 600 of FIG. 6A) that includes selectable icons (e.g., icons 602-620 of FIG. 6A) corresponding to various functions that may be performed by the AR device.

[00104] At step 910, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of an icon (e.g., icon 602 of FIG. 6A) from the first menu by the user, the icon corresponding to a "connect atoms" function, which allows the user to add a virtual bond between the first virtual atom and another (second) virtual atom in the AR scene. In this way, two atoms, an atom and a virtual molecule, or two virtual molecules may be connected via the virtual bond.

[00105] At step 912, the AR device may make all virtual atoms located within a predetermined range of the first virtual atom more visible by highlighting any virtual atoms located within the predetermined range and/or by "dimming" any virtual atoms located outside of the predetermined range. For example, virtual atoms may be highlighted by causing the display of the AR device to render the virtual atoms or the outlines of the virtual atoms in a different color or brightness (e.g., bright yellow) than that which was previously shown on the display. Put another way, highlighting may refer to increasing the brightness of and/or adding a colored border to the corresponding atoms in the AR scene shown on the display. Virtual atoms may be dimmed, for example, by causing the display of the AR device to render the virtual atoms in a different color or brightness (e.g., dim gray) than that which was previously shown on the display. In some embodiments, dimming the virtual atoms may include reducing the opacity of the virtual atoms shown on the display. Put another way, dimming may refer to changing the color of and/or reducing the opacity and/or brightness of the corresponding virtual atoms in the AR scene.
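A sketch of this highlight-and-dim behavior, assuming each virtual atom carries a position and a mutable render state (the field names and render values are invented for illustration):

    import numpy as np

    def emphasize_bond_candidates(selected_atom, atoms, bonding_range):
        # Highlight atoms within bonding range of the selected atom and dim the rest.
        highlighted, dimmed = [], []
        for atom in atoms:
            if atom is selected_atom:
                continue
            distance = float(np.linalg.norm(
                np.asarray(atom["position"], dtype=float) -
                np.asarray(selected_atom["position"], dtype=float)))
            (highlighted if distance <= bonding_range else dimmed).append(atom)
        for atom in highlighted:
            atom["render"] = {"outline": "yellow", "brightness": 1.5}   # brighter, colored border
        for atom in dimmed:
            atom["render"] = {"color": "gray", "opacity": 0.3}          # reduced opacity
        return highlighted, dimmed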

[00106] At step 914, the AR device may detect (e.g., based on video data captured by one or more cameras of the AR device) the selection of a second virtual atom located within the predetermined range by the user. The user may, for example, be prevented from selecting virtual atoms that are located outside the predetermined range between steps 912 and 914.

[00107] At step 916, the AR device may automatically identify a bond length needed to connect the first virtual atom to the selected second virtual atom. In some embodiments, the identified bond length may be based on an approximation of the length of a real-world bond between first and second real-world atoms, where the atomic elements of the first and second real-world atoms correspond to the atomic elements of the first virtual atom and the second virtual atom. The approximation of the bond length may correspond to the sum of the average empirical covalent radii of the atomic element of the first virtual atom and that of the second virtual atom. While in some embodiments the bond length identified by the AR device may automatically correspond to the length of a single bond, in other embodiments the user may be provided with the opportunity to select a bond type (e.g., single, double, triple, or hybrid bond) before the bond length is identified by the AR device. In such examples, for bond types other than single bonds, the bond length may be approximated by scaling the single-bond length calculated from the empirical average of the covalent radii of the atomic elements of the first and second virtual atoms.

[00108] In another embodiment, the identified bond length may be based solely on the distance between the first virtual atom and the second virtual atom in the AR scene. In other embodiments, the identified bond length may be determined by the AR device based on the actual bond length of a real-world bond corresponding to the virtual bond being formed. In the present example, the AR device may access a look-up table (LUT) (e.g., corresponding to Table 1) in the memory of the AR device to identify the appropriate bond length for a virtual bond of a given type between the first virtual atom and the second virtual atom.

[00109] At step 918, the AR device may, optionally, automatically reposition the first virtual atom and/or the second virtual atom such that a distance between the first virtual atom and the second virtual atom (e.g., as measured from center to center) equals the identified bond length (e.g., identified in step 916). In some embodiments, if either of the first or second virtual atoms is part of an existing virtual molecule, all virtual bonds and virtual atoms of that virtual molecule may be moved when the corresponding one of the first and second virtual atoms is moved at step 918. In this way, virtual molecule-to-molecule connections may be made.
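Step 918 could be sketched as a rigid translation of the second atom's molecule along the line joining the two selected atoms, so that their center-to-center distance becomes the identified bond length. The list-of-positions representation is an assumption of the sketch:

    import numpy as np

    def reposition_for_bond(first_position, second_position, bond_length, molecule_positions):
        # Translate every atom of the second atom's molecule by the same offset so the
        # distance between the two selected atoms equals the identified bond length.
        first = np.asarray(first_position, dtype=float)
        second = np.asarray(second_position, dtype=float)
        axis = second - first
        current_distance = float(np.linalg.norm(axis))
        if current_distance == 0.0:
            raise ValueError("coincident atoms; no direction to move along")
        offset = (bond_length - current_distance) * (axis / current_distance)
        return [np.asarray(p, dtype=float) + offset for p in molecule_positions]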

[00110] At step 920, the AR device may connect the first virtual atom to the second virtual atom with a virtual bond to create a virtual molecule or to modify one or more existing virtual molecules. For example, the AR device may add the virtual bond to the AR scene, with the length of the virtual bond being set by the AR device to the identified bond length. The user may be presented with the option to choose a bond type (e.g., single bond, double bond, triple bond, or hybrid bond, via secondary menu 640 of FIG. 6C) for the virtual bond before or at the time the virtual bond is generated. In some embodiments, the number of virtual bonds that may be added to the first virtual atom may be limited by the number of free electrons that are available in real-world atoms of the element represented by the first virtual atom. In some embodiments, the angle and orientation of the second virtual atom and the virtual bond with respect to the virtual molecule may be automatically determined and set by the AR device according to an appropriate molecular geometry for that virtual molecule.

[00111] At step 922, the AR device, after adding the virtual bond to the AR scene, may optionally and automatically reposition virtual atoms and any virtual bonds of the newly created or modified virtual molecule into an orientation corresponding to the total number of virtual atoms in that virtual molecule. It should be noted that steps 918 and 922 may be performed effectively simultaneously by the AR device in some embodiments.

[00112] Examples of various molecular orientations (e.g., geometries) that may be used at least in connection with step 716 of FIG. 7, step 820 of FIG. 8 and step 922 of FIG. 9 are shown in FIG. 10.

[00113] Molecule 1002 corresponds to a linear molecular orientation in which a central atom is bonded to only two other atoms. The bond angle for the bonds of the molecule 1002 is 180 degrees.

[00114] Molecule 1004 corresponds to a trigonal planar molecular orientation in which a central atom is bonded to only three other atoms. The bond angle for the bonds of the molecule 1004 is 120 degrees.

[00115] Molecule 1006 corresponds to a tetrahedral molecular orientation in which a central atom is bonded to only four other atoms. The bond angle for the bonds of the molecule 1006 is 109.5 degrees.

[00116] Molecule 1008 corresponds to a trigonal bipyramidal molecular orientation in which a central atom is bonded to only five other atoms. The bond angles for the bonds of the molecule 1008 are 180 degrees, 120 degrees, and 90 degrees.

[00117] Molecule 1010 corresponds to an octahedral molecular orientation in which a central atom is bonded to only six other atoms. The bond angles for the bonds of the molecule 1010 are 90 and 180 degrees.
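The orientations of FIG. 10 can be summarized as a small mapping keyed on the number of atoms bonded to the central atom. The sketch below is only one way such a mapping might be stored, using the standard bond angles recited above for the no-lone-pair case:

    # Ideal geometries for a central atom with no lone pairs (molecules 1002-1010).
    GEOMETRY_BY_BONDED_ATOMS = {
        2: ("linear", [180.0]),
        3: ("trigonal planar", [120.0]),
        4: ("tetrahedral", [109.5]),
        5: ("trigonal bipyramidal", [90.0, 120.0, 180.0]),
        6: ("octahedral", [90.0, 180.0]),
    }

    def ideal_geometry(bonded_atom_count):
        # Return the geometry name and its characteristic bond angles in degrees.
        return GEOMETRY_BY_BONDED_ATOMS[bonded_atom_count]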

[00118] FIG. 1 1 shows two different molecular models that may be used to depict molecules in the AR scene displayed by the AR device. For example, the molecular model shown may be selected via menu 630 of FIG. 6. Molecular model 1 102 represents a "stick-and-ball" model where virtual atoms are shown as being connected by virtual bonds, with both the virtual atoms and the virtual bonds being visible. In contrast, molecular model 1 104 represents a "space-filling" model in which virtual atoms are shown as overlapping spheres in order to depict the virtual atoms of the virtual molecule as taking up the space that the virtual atoms would physically occupy in the real-world (e.g., with consideration for scale). Bonds connecting atoms are generally not visible in the "space filling" model.

[00119] FIG. 12 shows an example of how the depiction of a given virtual molecule may change when virtual lone pairs of electrons are included along with the virtual atoms and corresponding virtual bonds attached to a given virtual atom of a virtual molecule (e.g., in response to a user of an AR device selecting the icon 606 of the menu 600 to add a lone pair to a selected virtual atom or to replace a selected virtual atom with a lone pair). The example shown corresponds to an original virtual molecule 1200, initially having an octahedral molecular orientation with no lone pairs. Virtual molecule 1202 shows the updated depiction and molecular orientation (square pyramidal) that would be displayed by the AR device when a single virtual atom of the virtual molecule 1200 is replaced with a lone pair. Virtual molecule 1204 shows the updated depiction and molecular orientation (square planar) that would be displayed by the AR device when two opposing virtual atoms of the virtual molecule 1200 are replaced with two lone pairs. Virtual molecule 1206 shows the updated depiction and molecular orientation (T-shaped) that would be displayed by the AR device when three virtual atoms of the virtual molecule 1200 are replaced with three lone pairs. Virtual molecule 1208 shows the updated depiction and molecular orientation (linear) that would be displayed by the AR device when four virtual atoms of the virtual molecule 1200 are replaced with four lone pairs.
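The progression shown in FIG. 12 amounts to a mapping from the number of lone pairs (out of six electron domains) to a molecular shape. A sketch of that mapping follows, with the shape names matching those recited above:

    # Molecular shapes as bonded atoms of an octahedral arrangement are replaced by
    # lone pairs (the progression from molecule 1200 through molecule 1208).
    OCTAHEDRAL_FAMILY = {
        0: "octahedral",
        1: "square pyramidal",
        2: "square planar",
        3: "T-shaped",
        4: "linear",
    }

    def shape_with_lone_pairs(lone_pair_count):
        # Shape of a six-domain central atom when `lone_pair_count` domains are lone pairs.
        return OCTAHEDRAL_FAMILY[lone_pair_count]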

[00120] In some embodiments, the AR device may provide the user with the option to engage an empirically-based optimization (e.g., semi-empirical quantum chemistry algorithms, which, for example, may be based on the neglect of diatomic differential overlap (NDDO) approximation) which may slightly alter the bond lengths and angles in the molecule to more accurately approximate a structure corresponding to the most energetically stable arrangement for the molecule.

[00121] In some applications of the present invention (e.g., some educational applications), it may be beneficial to assign control of a virtual object to only the AR device of the user that created or imported that virtual object to the network session. This may prevent users from interfering with the virtual objects of other users, regardless of whether such interference is performed intentionally or unintentionally. However, in some embodiments, it may be desirable for control of a virtual object to be transferred from one user to another. For example, during a network session for a chemistry AR application, a teacher may import a partially completed virtual molecule into the AR scene of the network session, and may ask a student to complete the virtual molecule. The teacher may then transfer control of the partially completed virtual molecule to the student so that the student can modify the partially completed virtual molecule to create a completed virtual molecule (e.g., via the method 700 of FIG. 7, the method 800 of FIG. 8, and/or the method 900 of FIG. 9).

[00122] FIG. 13 shows an illustrative process flow for a method 1300 for transferring control of a virtual object in an AR scene from a first AR device (e.g., the AR device 100 shown in FIG. 1) to a second AR device that is in the same network session as the first AR device. For example, the method 1300 may be performed, at least in part, by executing, with a processor (e.g., processor 102 shown in FIG. 1), instructions (e.g., part of the augmented reality component 110 shown in FIG. 1) stored in memory (e.g., the memory 104 shown in FIG. 1) of the AR device. The method 1300 may, for example, be performed during steps 322-326 of method 300 of FIG. 3, and may be implemented as a feature of either of the mathematics AR application and the chemistry AR application described previously. While method 1300 is described in connection with an AR device, it should be readily understood that method 1300 may instead be performed using a VR device.

[00123] At step 1302, the method 1300 starts. For example, the method 1300 may start in response to the selection of a "control transfer" function by a user from a menu displayed by the first AR device.

[00124] At step 1304, the first AR device may transfer control of a virtual object in an AR scene to the second AR device. The virtual object, for example, may have been added to the AR scene by the first AR device (e.g., in response to commands from a user of the first AR device), and ownership/control of the virtual object may have automatically been assigned to the first AR device upon the addition of the virtual object to the AR scene. When transferring control of the virtual object, the user of the first AR device may select the second AR device from a list of AR devices that are connected to the network session, or may select (e.g., via a user gesture) the second AR device itself, as depicted in the AR scene. It is in response to the user's selection of the second AR device that the first AR device transfers control of the virtual object to the second AR device. In some embodiments, instead of transferring control of the virtual object to the second AR device, the first AR device may assign permission to the second AR device to control the virtual object, while the first AR device simultaneously retains the ability to control and modify the virtual object.

[00125] At step 1306, the second AR device may optionally modify the virtual object upon receiving control of the virtual object from the first AR device.

[00126] At step 1308, the first AR device may revoke control of the virtual object from the second AR device. For example, the user of the first AR device may select the virtual object and may be prompted with the option to revoke control of the virtual object from the second AR device, which the user may then select. Thus, while the first AR device may have transferred control of the virtual object in step 1304, this control may only be temporary, and ownership of the virtual object may be maintained by the first AR device.
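One way the ownership and control semantics of steps 1304 through 1308 could be tracked, whether on a session server or on each device, is a small registry mapping each virtual object to its owner and its current controller. Everything below (the class name, identifiers, and method names) is an illustrative assumption rather than the disclosed implementation; a networked deployment would additionally need to replicate these updates to the other devices in the session.

    class OwnershipRegistry:
        # Tracks, for each virtual object in a network session, which device owns it
        # (created or imported it) and which device is currently allowed to modify it.

        def __init__(self):
            self._owner = {}        # object_id -> owning device_id
            self._controller = {}   # object_id -> controlling device_id

        def register(self, object_id, device_id):
            # Ownership and control are assigned to the device that adds the object.
            self._owner[object_id] = device_id
            self._controller[object_id] = device_id

        def transfer_control(self, object_id, from_device, to_device):
            # Only the current controller may hand control to another device (step 1304).
            if self._controller.get(object_id) != from_device:
                raise PermissionError("only the current controller may transfer control")
            self._controller[object_id] = to_device

        def revoke_control(self, object_id, owner_device):
            # The owner may reclaim control at any time (step 1308).
            if self._owner.get(object_id) != owner_device:
                raise PermissionError("only the owner may revoke control")
            self._controller[object_id] = owner_device

        def can_modify(self, object_id, device_id):
            return self._controller.get(object_id) == device_id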

[00127] As described previously, the functionality to transfer control of a virtual object from one AR device to another may be beneficial in certain applications in which users may desire to work together to modify a single virtual object in an AR scene.

[00128] In an example embodiment, an augmented reality device may include a display, a memory, a camera, and a processor. The camera may be configured to capture video data. The processor may be configured to execute instructions for displaying an augmented reality scene on the display, the augmented reality scene including a physical scene that is overlaid with a virtual scene, calibrating the augmented reality scene, displaying a menu to a user, the menu comprising a plurality of icons, each icon of the plurality of icons, when selected by the user, corresponding to a respectively different action performed by the processor to affect the augmented reality scene, monitoring activity of the user based on first video data captured by the camera, the activity comprising at least an indication of a desired interaction between the user and the menu, updating the augmented reality scene based on the monitored activity of the user, and saving analytics data to the memory based on the monitored activity of the user. The display may be periodically refreshed to show changes to the virtual scene.

[00129] In some embodiments, the processor may be further configured to execute instructions for generating virtual lines in the augmented reality scene, the virtual lines each having positions and orientations defined by the user, generating a virtual shape having boundaries defined by the virtual lines, detecting, based on second video data captured by the camera, a first user gesture corresponding to acceptance of the virtual shape, placing the virtual shape at a user-defined location in the augmented reality scene, detecting, based on third video data captured by the camera, a second user gesture corresponding to a command to scale the size of the virtual shape by a defined amount, scaling the size of the virtual shape by the defined amount, detecting, based on fourth video data captured by the camera, a third user gesture corresponding to a command to add a virtual label to the augmented reality scene adjacent to the user-created virtual shape, and adding the virtual label to the augmented reality scene adjacent to the user-created virtual shape.

[00130] In some embodiments, the processor may be further configured to execute instructions for detecting, based on second video data captured by the camera, that the user has selected a portion of a virtual shape in the augmented reality scene, causing the display to highlight the portion of the virtual shape by changing a color in which the portion of the virtual shape is rendered by the display, automatically determining a dimensional measurement of the portion of the virtual shape, automatically generating a virtual label that depicts the dimensional measurement, and automatically placing the virtual label in the augmented reality scene.

[00131] In some embodiments, the processor may be further configured to execute instructions for adding a first virtual atom to the augmented reality scene, adding a second virtual atom to the augmented reality scene, automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom, automatically generating a virtual bond having the identified bond length, and automatically placing the virtual bond in the augmented reality scene. The virtual bond may extend from the first virtual atom to the second virtual atom.

[00132] In some embodiments, the processor may be further configured to execute instructions for automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length.

[00133] In some embodiments, after the virtual bond is placed in the augmented reality scene, the augmented reality scene may include a virtual molecule that includes a plurality of virtual atoms including the first virtual atom, the second virtual atom, and at least one additional virtual atom. The processor may be further configured to execute instructions for automatically repositioning the plurality of virtual atoms of the virtual molecule into a molecular orientation based on a quantity of virtual atoms in the virtual molecule.

[00134] In some embodiments, the processor may be further configured to execute instructions for adding a plurality of virtual atoms to the augmented reality scene, determining, based on second video data captured by the camera, that the user has selected a first virtual atom of the plurality of virtual atoms, determining, based on third video data captured by the camera, that the user has selected a connect atoms icon of the plurality of icons from the menu, automatically identifying a set of virtual atoms comprising all virtual atoms of the plurality of virtual atoms located within a predetermined range of the first virtual atom, causing the display to highlight the set of virtual atoms by changing a color in which the display renders the set of virtual atoms, determining, based on fourth video data captured by the camera, that the user has selected a second virtual atom of the set of virtual atoms, automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom based on a first element of the first virtual atom and a second element of the second virtual atom, automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length, adding a virtual bond to the augmented reality scene to connect the first virtual atom to the second virtual atom, the virtual bond having a length equal to the identified bond length, and automatically repositioning the plurality of virtual atoms based on a number of atoms of a molecule that comprises the first virtual atom and the second virtual atom.

[00135] In some embodiments, the processor is further configured to execute instructions for detecting, based on second video data captured by the camera, a user gesture corresponding to transferring control of a virtual object in the augmented reality scene to a networked augmented reality device, transferring control of the virtual object from the augmented reality device to the networked augmented reality device, detecting, based on third video data captured by the camera, a user gesture corresponding to revoking control of the virtual object from the networked augmented reality device, and transferring control of the virtual object from the networked augmented reality device to the augmented reality device.

[00136] In an example embodiment, a method may be performed by executing computer- readable instructions with a processor of an augmented reality device. The method may include steps of displaying an augmented reality scene on a display of the augmented reality device, the augmented reality scene including a physical scene that is overlaid with a virtual scene, periodically refreshing the display to show updates to the virtual scene, calibrating the augmented reality scene, displaying a menu to a user, the menu comprising a plurality of icons, each icon of the plurality of icons, when selected by the user, corresponding to a respectively different action performed by the processor to affect the augmented reality scene, monitoring activity of the user based on first video data captured by a camera of the augmented reality device, the activity comprising at least an indication of a desired interaction between the user and the menu, updating the augmented reality scene based on the monitored activity of the user, and saving analytics data on a memory of the augmented reality device based on the monitored activity of the user.

In some embodiments, the method may further include steps of generating virtual lines in the augmented reality scene, the virtual lines each having positions and orientations defined by the user, generating a virtual shape having boundaries defined by the virtual lines, detecting, based on second video data captured by the camera, a first user gesture corresponding to acceptance of the virtual shape, placing the virtual shape at a user-defined location in the augmented reality scene, detecting, based on third video data captured by the camera, a second user gesture corresponding to a command to scale the size of the virtual shape by a defined amount, scaling the size of the virtual shape by the defined amount, detecting, based on fourth video data captured by the camera, a third user gesture corresponding to a command to add a virtual label to the augmented reality scene adjacent to the user-created virtual shape, and adding the virtual label to the augmented reality scene adjacent to the user-created virtual shape.

[00137] In some embodiments, the method may further include steps of detecting, based on second video data captured by the camera, that the user has selected a portion of a virtual shape in the augmented reality scene, causing the display to highlight the portion of the virtual shape by changing a color in which the portion of the virtual shape is rendered by the display, automatically determining a dimensional measurement of the portion of the virtual shape, automatically generating a virtual label that depicts the dimensional measurement, and automatically placing the virtual label in the augmented reality scene.

[00138] In some embodiments, the method may further include steps of adding a first virtual atom to the augmented reality scene, adding a second virtual atom to the augmented reality scene, automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom, automatically generating a virtual bond having the identified bond length, and automatically placing the virtual bond in the augmented reality scene, wherein the virtual bond extends from the first virtual atom to the second virtual atom.

[00139] In some embodiments, the method may further include steps of automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length.

[00140] In some embodiments, the augmented reality scene may include a virtual molecule that comprises a plurality of virtual atoms, the plurality of virtual atoms comprising the first virtual atom, the second virtual atom, and a third virtual atom. The method may further include automatically repositioning the plurality of virtual atoms of the virtual molecule into a molecular orientation based on a quantity of virtual atoms in the virtual molecule.

[00141] In some embodiments, the method may further include steps of adding a plurality of virtual atoms to the augmented reality scene, determining, based on second video data captured by the camera, that the user has selected a first virtual atom of the plurality of virtual atoms, determining, based on third video data captured by the camera, that the user has selected a connect atoms icon of the plurality of icons from the menu, automatically identifying a set of virtual atoms comprising all virtual atoms of the plurality of virtual atoms located within a predetermined range of the first virtual atom, causing the display to highlight the set of virtual atoms by changing a color in which the display renders the set of virtual atoms, determining, based on fourth video data captured by the camera, that the user has selected a second virtual atom of the set of virtual atoms, automatically identifying a bond length needed to connect the first virtual atom to the second virtual atom based on a first element of the first virtual atom and a second element of the second virtual atom, automatically repositioning the first virtual atom and the second virtual atom such that a distance between the first virtual atom and the second virtual atom equals the identified bond length, adding a virtual bond to the augmented reality scene to connect the first virtual atom to the second virtual atom, the virtual bond having a length equal to the identified bond length, and automatically repositioning the plurality of virtual atoms based on a number of atoms of a molecule that comprises the first virtual atom and the second virtual atom.

[00142] Other embodiments and uses of the above inventions will be apparent to those having ordinary skill in the art upon consideration of the specification and practice of the invention disclosed herein. The specification and examples given should be considered exemplary only, and it is contemplated that the appended claims will cover any other such embodiments or modifications as fall within the true scope of the invention.