Title:
METHOD FOR PROVIDING A REAL TIME INTERACTIVE AUGMENTED REALITY (AR) INFOTAINMENT SYSTEM
Document Type and Number:
WIPO Patent Application WO/2021/230824
Kind Code:
A1
Abstract:
A method for providing a real-time interactive augmented reality (AR) infotainment system has a set of AR service processes, a set of AR execution processes, an AR environment, and a user account managed by a remote server. The service processes are subprocesses that enable the user to perform specific tasks such as navigation, summoning a digital assistant, or purchasing items. The execution processes enable the user to enter specific commands within the AR environment. The AR environment includes a secondary coordinate space that is a digital record of all the features and content intended to be rendered by a user's computing device. The user is prompted to select a desired service process with the user computing device. Raw environmental data is continuously gathered with the user computing device and a primary coordinate space is generated. The secondary coordinate space is mapped onto the primary coordinate space and presented to the user.

Inventors:
LIM KEAN LEE (SG)
BEH EE LING (SG)
Application Number:
PCT/SG2021/050268
Publication Date:
November 18, 2021
Filing Date:
May 17, 2021
Assignee:
BUZZ ARVR PTE LTD (SG)
International Classes:
G06Q50/10; G06T19/00; H04W4/02
Foreign References:
KR20120100433A (2012-09-12)
KR101603227B1 (2016-03-15)
US20170193705A1 (2017-07-06)
US20180188033A1 (2018-07-05)
US20150046296A1 (2015-02-12)
Attorney, Agent or Firm:
NG, Kim Tean (SG)
Claims:
What is claimed is:

1. A method for providing a real-time interactive augmented reality (AR) infotainment system comprising:

(A) providing a plurality of AR service processes and a plurality of AR execution processes managed by at least one remote server;

(B) providing at least one AR environment managed by the remote server, wherein the AR environment includes a secondary coordinate space;

(C) providing at least one user account managed by the remote server, wherein the user account is associated to at least one user computing device;

(D) prompting to select at least one desired AR service process with the user computing device;

(E) gathering raw environmental data with the user computing device, wherein the raw environmental data includes visual data and geospatial data;

(F) generating a primary coordinate space with the remote server, wherein the primary coordinate space includes a plurality of primary multidimensional coordinates, and wherein each primary multidimensional coordinate is representative of the visual data associated to at least one geospatial coordinate;

(G) generating an AR coordinate space by mapping the secondary coordinate space onto the primary coordinate space with the remote server;

(H) outputting the AR environment with the user computing device;

(I) dynamically correlating the AR environment to new raw environmental data with the remote server; and

(J) prompting to select a desired AR execution process with the user computing device.

2. The method of claim 1 comprising: providing the secondary coordinate space includes a plurality of secondary multidimensional coordinates, wherein each secondary multidimensional coordinate is representative of AR content data associated to at least one virtual spatial coordinate; and mapping each secondary multidimensional coordinate to a corresponding primary multidimensional coordinate with the remote server, wherein the corresponding primary multidimensional coordinate is from the plurality of primary multidimensional coordinates.

3. The method of claim 2 comprising: providing the AR environment includes a plurality of static environmental features, and a plurality of dynamic environmental features, wherein each of the plurality of static environmental features is associated to at least one static multidimensional coordinate and each of the plurality of dynamic environmental features is associated to at least one dynamic multidimensional coordinate; and rendering each of the plurality of static environmental features at a corresponding static AR coordinate and each of the plurality of dynamic environmental features at a corresponding dynamic AR coordinate with the remote server.

4. The method of claim 2 comprising: providing a plurality of vendor profiles managed by the remote server, wherein each vendor profile includes at least one geofenced area, a store layout, and a plurality of inventory items, and wherein each of the plurality of inventory items is a dynamic environmental feature; identifying at least one relevant vendor profile with the remote server by comparing a current position of the user to the geofenced area of each of the plurality of vendor profiles with the remote server, wherein the relevant vendor profile is from the plurality of vendor profiles; structuring the secondary coordinate space around the store layout with the remote server; assigning each of the plurality of inventory items to at least one corresponding dynamic coordinate with the remote server, wherein the corresponding dynamic coordinate is from the plurality of dynamic multidimensional coordinates; scanning an external environment with the user computing device to determine if a real-world product is located in a relevant multidimensional coordinate, wherein the relevant multidimensional coordinate is from the plurality of primary multidimensional coordinates; integrating each of the plurality of inventory items into the AR coordinate space at a corresponding dynamic AR coordinate if a real-world product is not located in the relevant multidimensional coordinate; and updating the AR environment according to the AR coordinate space with the remote server.

5. The method of claim 2 comprising: providing at least one virtual billboard managed by the remote server, wherein the virtual billboard includes a backboard, at least one information display portion, and at least one user input portion; superimposing the information display portion onto the backboard with the remote server; superimposing the user input portion onto the backboard with the remote server; positioning the user input portion offset from the information display portion across the backboard, with the remote server; integrating informational content into the information display portion with the remote server; integrating interactive content into the user input portion with the remote server; and integrating the virtual billboard into the secondary coordinate space as a dynamic environmental feature with the remote server.

6. The method of claim 1 comprising: continuously capturing new environmental data with the user computing device; incorporating the new environmental data into the primary coordinate space with the remote server; updating the AR environment by mapping the secondary coordinate space onto the primary coordinate space with the remote server; and outputting the AR environment with the user computing device.

7. The method of claim 1 comprising: wherein the desired AR service process is a navigation process; prompting to enter at least one desired destination with the user computing device; generating a route from a current location to the desired destination with the remote server; integrating an AR overlay of the route into the AR environment with the remote server; and navigating to the desired destination with the user computing device.

8. The method of claim 1 comprising: wherein the desired AR service process is a digital assistant process; providing at least one digital assistant managed by the remote server, wherein the digital assistant is associated to a plurality of assistant commands; integrating the digital assistant into the AR environment with the remote server; and prompting to enter at least one desired assistant command with the user computing device, wherein the desired assistant command is from the plurality of assistant commands.

Description:
Method for Providing a Real Time Interactive Augmented Reality (AR) Infotainment System

The current application is a Patent Cooperation Treaty (PCT) application and claims priority to U.S. Provisional Patent Application Serial No. 63/025,376, filed on May 15, 2020.

FIELD OF THE INVENTION

The present invention relates generally to augmented reality. More specifically, the present invention relates to an AR system that provides a user with content-rich navigation and novel spatial experiences while inside a building or structure.

BACKGROUND OF THE INVENTION

When it comes to navigating from one location to another, today's technology employs the global positioning system (GPS), which provides navigation without the worry of proceeding in the wrong direction. GPS is well suited to navigating outdoors from one location to another, but no equivalent indoor positioning exists within shopping malls. Shopping malls are well organized and rely on stationary billboards to assist shoppers in navigating through the building. The billboards within malls allow the user to see the layout and location of every store and restaurant within the shopping mall. The problem with these billboards is that they require shoppers to find their own way through the mall and to spend extra time locating their desired store or restaurant. Not only must shoppers tediously find their own location and the store's location, they must also remember the direction in which they are heading, because they cannot keep the billboard by their side while walking from their current location to the store or restaurant.

An objective of the present invention is to navigate users from their current location to their desired location within an indoor space such as a shopping mall. The present invention provides augmented reality based visual assistance that guides a user along the correct path to reach the desired location. Additionally, the present invention provides augmented reality billboards to attract users and entice them to enter the advertised store or restaurant. Furthermore, the present invention is used to increase attraction and business for participating stores and restaurants. To facilitate this, captured images of a physical scene within a building can be saved for reviewing and virtual exploring. This invention allows users to provide AR interactions and real-time feedback to businesses, and the data is managed by a back-end server to provide a real-time, user-centric AR experience.

More specifically, when a user shops in person, the user can experience AR content on their mobile device by overlaying virtual three-dimensional (3D) content onto a physical scene depicted on the user's smart device. This overlay is enabled through real-time tracking as the user navigates and shops within the building.

In another use case, when a user is at home, the user can experience AR content by pressing a button in a web browser; the user then enters a three hundred sixty (360) degree virtual reality mode of a building, displaying a 360-degree view of the building's images. The said virtual reality images do not mirror the user's home. An additional feature of the present invention is to offload any power-intensive processing tasks to the remote server so a user can experience the AR content virtually anytime, anywhere, in accordance with the user's preference.

The present invention is a software application that is installed on a smart device. The present invention comprises an AR Wayfinder and an AR billboard.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a system diagram showing communication between the components of the present invention.

FIG. 2 is a flowchart illustrating the overall method of the present invention.

FIG. 3 is a flowchart illustrating the subprocess for mapping each secondary multidimensional coordinate to a corresponding primary multidimensional coordinate with the present invention.

FIG. 4 is a flowchart illustrating the subprocess for rendering environmental features with the present invention.

FIG. 5 is a flowchart illustrating the subprocess for generating and stocking a virtual retail space with the present invention.

FIG. 6 is a flowchart illustrating the subprocess for integrating the virtual billboard into the secondary coordinate space with the present invention.

FIG. 7 is a flowchart illustrating the subprocess for real-time monitoring and updating the AR environment with the present invention.

FIG. 8 is a flowchart illustrating the subprocess for navigating to a desired destination with the present invention.

FIG. 9 is a flowchart illustrating the subprocess for controlling a virtual assistant with the present invention.

FIG. 10 is a block diagram of a computing device for implementing the methods disclosed herein, in accordance with some embodiments.

FIG. 11 illustrates a navigation route overlaid onto the real world using the computing device, in accordance with some embodiments.

FIG. 12 illustrates a navigation route and the digital assistant overlaid onto the real world using the computing device, in accordance with some embodiments.

FIG. 13 illustrates various forms of AR content that can be overlaid onto the real world using the computing device, in accordance with some embodiments.

FIG. 14 illustrates a virtual billboard overlaid onto the real world using the computing device, in accordance with some embodiments.

DETAILED DESCRIPTION OF THE INVENTION

All illustrations of the drawings are for the purpose of describing selected versions of the present invention and are not intended to limit the scope of the present invention.

As a preliminary matter, it will readily be understood by one having ordinary skill in the relevant art that the present disclosure has broad utility and application. As should be understood, any embodiment may incorporate only one or a plurality of the above-disclosed aspects of the disclosure and may further incorporate only one or a plurality of the above-disclosed features. Furthermore, any embodiment discussed and identified as being “preferred” is considered to be part of a best mode contemplated for carrying out the embodiments of the present disclosure. Other embodiments also may be discussed for additional illustrative purposes in providing a full and enabling disclosure. Moreover, many embodiments, such as adaptations, variations, modifications, and equivalent arrangements, will be implicitly disclosed by the embodiments described herein and fall within the scope of the present disclosure.

Accordingly, while embodiments are described herein in detail in relation to one or more embodiments, it is to be understood that this disclosure is illustrative and exemplary of the present disclosure, and is made merely for the purposes of providing a full and enabling disclosure. The detailed disclosure herein of one or more embodiments is not intended, nor is it to be construed, to limit the scope of patent protection afforded in any claim of a patent issuing herefrom, which scope is to be defined by the claims and the equivalents thereof. It is not intended that the scope of patent protection be defined by reading into any claim a limitation found herein that does not explicitly appear in the claim itself.

Thus, for example, any sequence(s) and/or temporal order of steps of various processes or methods that are described herein are illustrative and not restrictive. Accordingly, it should be understood that, although steps of various processes or methods may be shown and described as being in a sequence or temporal order, the steps of any such processes or methods are not limited to being carried out in any particular sequence or order, absent an indication otherwise. Indeed, the steps in such processes or methods generally may be carried out in various different sequences and orders while still falling within the scope of the present disclosure. Accordingly, it is intended that the scope of patent protection is to be defined by the issued claim(s) rather than the description set forth herein.

Additionally, it is important to note that each term used herein refers to that which an ordinary artisan would understand such term to mean based on the contextual use of such term herein. To the extent that the meaning of a term used herein — as understood by the ordinary artisan based on the contextual use of such term — differs in any way from any particular dictionary definition of such term, it is intended that the meaning of the term as understood by the ordinary artisan should prevail.

Furthermore, it is important to note that, as used herein, “a” and “an” each generally denotes “at least one,” but does not exclude a plurality unless the contextual use dictates otherwise. When used herein to join a list of items, “or” denotes “at least one of the items,” but does not exclude a plurality of items of the list. Finally, when used herein to join a list of items, “and” denotes “all of the items of the list.”

The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar elements. While many embodiments of the disclosure may be described, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the methods described herein may be modified by substituting, reordering, or adding stages to the disclosed methods. Accordingly, the following detailed description does not limit the disclosure. Instead, the proper scope of the disclosure is defined by the appended claims. The present disclosure contains headers. It should be understood that these headers are used as references and are not to be construed as limiting upon the subject matter disclosed under the header.

Other technical advantages may become readily apparent to one of ordinary skill in the art after review of the following figures and description. It should be understood at the outset that, although exemplary embodiments are illustrated in the figures and described below, the principles of the present disclosure may be implemented using any number of techniques, whether currently known or not. The present disclosure should in no way be limited to the exemplary implementations and techniques illustrated in the drawings and described below.

Unless otherwise indicated, the drawings are intended to be read together with the specification, and are to be considered a portion of the entire written description of this invention. As used in the following description, the terms “horizontal”, “vertical”, “left”, “right”, “up”, “down” and the like, as well as adjectival and adverbial derivatives thereof (e.g., “horizontally”, “rightwardly”, “upwardly”, “radially”, etc.), simply refer to the orientation of the illustrated structure as the particular drawing figure faces the reader. Similarly, the terms “inwardly,” “outwardly” and “radially” generally refer to the orientation of a surface relative to its axis of elongation, or axis of rotation, as appropriate. As used herein, the term “dorsal” refers to positions that are located near, on, or towards the upper or top side of a structure.

Referring to FIG. 1 through FIG. 14, the present invention is a method for providing an interactive augmented reality (AR) infotainment system. The method of the present invention is designed to enable a user to overlay interactive AR content onto the real world by using a computing device. The term "computing device" is used herein to refer to any electronic system capable of executing the method of the present invention and communicating with external devices. In some embodiments, the user employs computing devices selected from the group including, but not limited to, smart phones, smart glasses, tablet computers, and laptop computers. To achieve this functionality, the present invention makes use of an offloaded computing strategy that employs a user computing device as a front-end interface for user input and information gathering, and at least one remote server as the primary processing component. Thus, the present invention is able to provide a lightweight AR suite that can be integrated into web browsers and various other applications without requiring a significant amount of local processing power to be present in the user computing device. The term "remote server" is used herein to refer to a computing device capable of executing all background processes required to perform the method of the present invention. Additionally, the remote server is designed to coordinate the communication between a plurality of user computing devices and a plurality of vendor accounts. This integration with vendor accounts enables the present invention to function as an infotainment system where the user is able to view advertisements, explore virtual worlds, and navigate through brick-and-mortar retail spaces. To interact with the AR content, the user simply points the camera's lens at a desired area of the external environment, and the appropriate AR content will be overlaid onto the image of the real world being output by the display device of the user computing device. The user is then able to interact with the AR content using a plurality of commands including, but not limited to, moving physical objects into the space occupied by the AR content, scanning AR objects with the camera, or issuing verbal commands. For example, a matrix barcode may be presented to the user in the AR overlay. When the user scans the barcode with the camera, they are directed toward a website that sells an associated product. It is an aim of the present invention to provide a system that replaces physical billboards and signage with virtual representations that can be updated remotely or programmed to display customized content to the user.

Preferably, the user computing device comprises at least one camera, a positioning system, a processing unit, a wireless communication radio, and a display device. The method of the present invention directs the user to scan the external environment with the camera to gather sufficient data to construct the AR overlay. Data gathered by the user computing device is sent to the remote server for processing and rendering of the AR overlay. Once the processing is completed, the relevant sections of the AR overlay are sent to the user computing device, thereby limiting the local processing power required.
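
The following is a minimal sketch, not part of the disclosed method, of what one round trip of this offloaded strategy might look like on the user computing device: capture visual and geospatial data, send it to the remote server, and receive only the overlay section needed for the current view. The endpoint URL, field names, and the camera and GPS helpers are illustrative assumptions.

```python
# Hypothetical client-side round trip for the offloaded AR overlay (sketch only).
import base64
import requests

SERVER_URL = "https://example-remote-server.invalid/ar/overlay"  # placeholder


def request_overlay(camera, gps, session_id: str) -> dict:
    """Send one batch of raw environmental data; return the overlay section
    the server says should be drawn over the current camera view."""
    payload = {
        "session": session_id,
        # visual data captured by the device camera (camera.read_jpeg() is an assumed helper)
        "visual_data": base64.b64encode(camera.read_jpeg()).decode("ascii"),
        # geospatial data captured by the positioning system (gps is an assumed helper)
        "geospatial_data": {"lat": gps.latitude, "lon": gps.longitude, "alt": gps.altitude},
    }
    response = requests.post(SERVER_URL, json=payload, timeout=2.0)
    response.raise_for_status()
    return response.json()  # e.g. {"anchors": [...], "assets": [...]}
```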

Referring to FIG. 1 and FIG. 2, to achieve the above-described functionalities, the system used to execute the overall method of the present invention provides a plurality of AR service processes and a plurality of AR execution processes that are managed by the remote server (Step A). Each of the plurality of AR service processes is a service or task that the user can employ the method of the present invention to perform. For example, the user may direct the method of the present invention to generate a digital assistant as a three-dimensional (3D) AR object. The digital assistant is then overlaid onto an image of the real world that is presented on the user computing device. Each of the plurality of AR execution processes is a command, gesture, or interactive process that the user performs to direct the method of the present invention to execute a desired subprocess or procedure. For example, the user may issue an AR execution process when they reach out to pick up a virtual item in a store. The appropriate AR execution process may be to track the user’s hand movements and dynamically render the virtual item in the user’s hand to mimic how a real-world item would appear to the user. In a separate example, the user may direct the method of the present invention to open a menu by performing a specific gesture within view of the camera’s lens.

The overall method of the present invention continues by providing at least one AR environment managed by the remote server (Step B). The AR environment is a virtual model that is designed to be overlaid onto the real world. Accordingly, the AR environment includes a secondary coordinate space. The secondary coordinate space is a virtual representation of all relevant information that is included in the AR environment. Specifically, the secondary coordinate space is used to define the augogram that is used to create the AR environment which is overlaid onto the real world around the user. The overall method of the present invention continues by providing at least one user account that is associated to at least one user computing device (Step C). The user account is a virtual representation of the user and includes user preferences, as well as user identification data. The overall method of the present invention continues by prompting the user to select at least one desired AR service process with the user computing device (Step D). Thus, enabling the user to choose the service they want to use the method of the present invention to perform. For example, the user may select a relocation process as the AR service process. This will initiate a sequence that enables the user to scan any arbitrary real-world environment and then have an AR environment overlaid onto the scan. Thus, enabling the user to turn any location into a rich augmented reality space.

The overall method of the present invention continues by gathering raw environmental data with the user computing device (Step E). As described above, the method of the present invention employs the user computing device as a front-end interface for gathering raw environmental data and relaying data between the user and the remote server. Accordingly, the raw environmental data includes visual data that is captured by the camera and geospatial data that is captured by the positioning system. The present invention makes use of GPS data to track a user's location in real time. However, some embodiments of the present invention make use of motion sensors including, but not limited to, accelerometers (A), magnetometers (M), gyroscopes (G), and fiducial markers (FM) to actively determine the user's position. The overall method of the present invention continues by generating a primary coordinate space with the remote server (Step F). The present invention augments the GPS data through an Inertial Interactive Unit (IIU), where the above-mentioned four sensors are used to determine IIU-based navigation. An Inertial Measurement Unit (IMU) may be used for determination of device orientation in a global coordinate system. Preferably, the present invention does not rely on an IMU but instead uses a plurality of other data points including, but not limited to, position relative to WIFI routers, position relative to cellphone towers, and position relative to other user computing devices. Embodiments of the present invention work directly with the user's mobile phone to send signals and data back to the remote server so the infotainment AR system can feed user data in real time based on IIU data. IIU readings are transformed from the local coordinate system of the user's device. As the user interacts with the infotainment AR content, the detected interactions are summed to obtain the user's track through the building and the touchpoints on the device, and this data is sent back to the back-end system in real time.

Similar to the secondary coordinate space, the primary coordinate space is a virtual representation of the surrounding area. However, the primary coordinate space is a representation of the real world around the user and serves as the foundation onto which AR content is overlaid. The primary coordinate space includes a plurality of primary multidimensional coordinates, and each primary multidimensional coordinate is representative of the visual data associated to at least one geospatial coordinate. Accordingly, the present invention forms a contextualized model of the external environment where each primary multidimensional coordinate contains the raw environmental data that is associated with a corresponding geospatial coordinate from the geospatial data. The primary multidimensional coordinate is used to represent a single point in a Cartesian coordinate space, or a desired volume of space within the external environment. Thus, the user is able to increase or decrease the network resources required to navigate the AR environment by modifying the size of each of the plurality of primary multidimensional coordinates, thereby increasing or decreasing the resolution of the AR environment. In supplemental embodiments, the method of the present invention makes use of plane detection to parse the external environment into usable segments.
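
As an illustration only, one way to model such a primary coordinate space is as a grid of volume cells, each holding the visual data observed at the geospatial coordinates that fall inside it; the cell size then directly controls resolution and network load. The class and method names below are assumptions, not the specification's implementation.

```python
# Illustrative primary coordinate space: cells bucket visual data by real-world volume.
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class PrimaryCoordinateSpace:
    cell_size_m: float = 1.0  # larger cells -> lower resolution, lower network load
    cells: Dict[Tuple[int, int, int], List[bytes]] = field(default_factory=dict)

    def cell_key(self, x: float, y: float, z: float) -> Tuple[int, int, int]:
        """Primary multidimensional coordinate (cell index) containing a point."""
        s = self.cell_size_m
        return (int(x // s), int(y // s), int(z // s))

    def add_observation(self, geo_xyz: Tuple[float, float, float], visual_data: bytes) -> None:
        """Associate a piece of visual data with the cell containing geo_xyz."""
        self.cells.setdefault(self.cell_key(*geo_xyz), []).append(visual_data)
```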

Referring to FIG. 2 and FIG. 3, the overall method of the present invention continues by generating an AR coordinate space by mapping the secondary coordinate space onto the primary coordinate space with the remote server (Step G). To facilitate this, the secondary coordinate space includes a plurality of secondary multidimensional coordinates, wherein each secondary multidimensional coordinate is representative of AR content data associated to at least one virtual spatial coordinate. The AR content data includes all features, objects, and invisible position- or sensor-based rulesets that are included in the AR environment. The virtual spatial coordinate is used to represent a single point in a Cartesian coordinate space, or a desired volume of space within the AR environment. The method of the present invention then maps each secondary multidimensional coordinate to a corresponding primary multidimensional coordinate with the remote server. Thus, the method of the present invention is able to render the appropriate AR fixtures in the appropriate positions within the AR environment that is overlaid onto the real world.
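
A small sketch of Step G under a simplifying assumption: if the secondary coordinate space is anchored at a known real-world offset, each secondary multidimensional coordinate can be mapped to the primary cell that contains its transformed position. The rigid-offset alignment, names, and cell size are illustrative assumptions.

```python
# Hypothetical mapping of a secondary coordinate onto a primary cell (sketch only).
from dataclasses import dataclass
from typing import Tuple

CELL_SIZE_M = 1.0  # assumed to match the primary coordinate space resolution


@dataclass
class SecondaryCoordinate:
    virtual_xyz: Tuple[float, float, float]  # virtual spatial coordinate in the AR environment
    content_id: str                          # identifier of the AR content data rendered there


def to_primary_cell(item: SecondaryCoordinate,
                    anchor_offset: Tuple[float, float, float]) -> Tuple[int, ...]:
    """Return the primary multidimensional coordinate that hosts this AR content."""
    world = [v + o for v, o in zip(item.virtual_xyz, anchor_offset)]
    return tuple(int(axis // CELL_SIZE_M) for axis in world)
```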

The overall method of the present invention continues by outputting the AR environment with the user computing device (Step H). As a result, the user is able to view and interact with the AR content through the user computing device. The method of the present invention is designed to track a user's position and update the AR environment as the user moves through the real world. To achieve this functionality, the overall method of the present invention dynamically correlates the AR environment to new raw environmental data with the remote server (Step I). That is, the method of the present invention continuously tracks the user's position and identifies any changes in the area surrounding the user in order to accurately render the AR environment regardless of where the user travels. Preferably, the method of the present invention employs computer vision and AI processing techniques including, but not limited to, simultaneous localization and mapping and structure-from-motion mapping. The overall method continues by prompting the user to select a desired AR execution process from the plurality of AR execution processes with the user computing device (Step J). By this stage of the overall method of the present invention, the AR environment is overlaid onto the real world and the system is tracking the user's movement. The user is now given freedom to simply explore the AR environment or interact with the AR content. Any interaction the user has with AR content is intended to be seen as selecting the desired AR execution process. The overall method of the present invention then continuously tracks the user's position in real time and updates the AR environment accordingly.
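
One very small way to picture the dynamic correlation of Step I, offered purely as a sketch and not as the disclosed algorithm, is to compare the latest observations per primary cell with what is already stored and re-map AR content only in the cells that changed. All names and the per-cell comparison are assumptions.

```python
# Illustrative change detection driving incremental re-mapping (sketch only).
from typing import Callable, Dict, Set, Tuple

Cell = Tuple[int, int, int]


def changed_cells(previous: Dict[Cell, bytes], latest: Dict[Cell, bytes]) -> Set[Cell]:
    """Cells whose newest visual data is missing from or differs from the stored data."""
    return {cell for cell, data in latest.items() if previous.get(cell) != data}


def correlate(previous: Dict[Cell, bytes], latest: Dict[Cell, bytes],
              remap_cell: Callable[[Cell], None]) -> None:
    """Re-map AR content only where the surrounding environment actually changed."""
    for cell in changed_cells(previous, latest):
        remap_cell(cell)               # assumed server-side helper
        previous[cell] = latest[cell]  # remember the new observation
```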

Referring to FIG. 4 and FIG. 12 through FIG. 14, it is an aim of the present invention to provide a content-rich AR interface to the user. To that end, the AR environment includes a plurality of static environmental features, and a plurality of dynamic environmental features.

Each of the plurality of static environmental features represents an AR asset that cannot move or provide opportunities for user interaction. For example, columns, plants, or any type of structure that is intended to remain fixed are seen as static environmental features. Each of the plurality of dynamic environmental features represents an AR asset that can be moved and provides opportunities for user interaction. For example, the dynamic environmental feature may refer to objects including, but not limited to, doors, products in a store, and digital assistants. Additionally, each of the plurality of static environmental features is associated to at least one static multidimensional coordinate and each of the plurality of dynamic environmental features is associated to at least one dynamic multidimensional coordinate. Thus, the plurality of static environmental features and the plurality of dynamic environmental features can be accurately placed within the AR environment, tracked, and updated when the user moves through the external environment.

Referring to FIG. 5 and FIG. 13, it is an aim of the present invention to provide an AR interface which facilitates browsing and purchasing items. To achieve this functionality, the system used to execute the method of the present invention provides a plurality of vendor profiles managed by the remote server. Each vendor profile provides a means by which a vendor can employ the method of the present invention to provide advertising and inventory content to the user. To facilitate this, each vendor profile includes at least one geofenced area, a store layout, and a plurality of inventory items. The geofenced area represents the physical space the vendor uses for retail, and the store layout contains the secondary coordinate system that dictates the location of in-store items that include, but are not limited to, displays, shelves, point-of-sale terminals, and virtual billboards. Each of the plurality of inventory items is a dynamic environmental feature with which the user can interact. The present invention includes a subprocess that enables the vendor account to create retail AR environments within physical retail spaces that enable the user to browse inventory and receive additional informational content, including, but not limited to, nutritional facts, price comparisons, and item availability. This subprocess runs in the background and is triggered when the user enters an area dedicated to a relevant vendor, as sketched after the following paragraph.

The subprocess begins by identifying at least one relevant vendor profile with the remote server by comparing a current position of the user to the geofenced area of each of the plurality of vendor profiles with the remote server. The current position is determined by the positioning system within the user computing device. The relevant vendor profile is selected from the plurality of vendor profiles and contains the data required to generate the AR environment within the vendor's retail space. In some embodiments, the present invention relies on additional datapoints such as interest points and fiducial markers to determine the user's location. The subprocess continues by structuring the secondary coordinate space around the store layout with the remote server. Thus, the method of the present invention is able to dynamically generate additional vendor-specific subsections for a larger AR environment. The subprocess continues by assigning each of the plurality of inventory items to at least one corresponding dynamic coordinate with the remote server. Thus, the vendor's current stock, or a desired subsection thereof, is distributed through the retail AR environment. This functionality enables the vendor to change the location of inventory items depending on factors including, but not limited to, price, season, and item popularity. The fully stocked and rendered retail AR environment is mapped onto the primary coordinate system and output to the user through the user computing device.

The subprocess continues by scanning the external environment with the user computing device to determine if a real-world product is located in a relevant multidimensional coordinate from the plurality of primary multidimensional coordinates. Consequently, the subprocess enables the method of the present invention to determine if the retail space has real-world inventory available on the shelves. The subprocess continues by integrating each of the plurality of inventory items into the AR coordinate space at a corresponding dynamic AR coordinate if a real-world product is not located in the relevant multidimensional coordinate. The subprocess concludes by updating the AR environment according to the AR coordinate space with the remote server. Thus, the method of the present invention is able to virtually stock the shelves within the retail space. This enables the user to quickly find and purchase products without needing to call for help. In some embodiments, the user is able to add a plurality of desired inventory items to their shopping cart and the method of the present invention will instruct the vendor to have an associate retrieve the physical product for the user while the user shops. In supplemental embodiments, the user simply scans a barcode or interest point associated to the inventory item and the subprocess adds the item to a virtual shopping cart that is added to the user's physical shopping cart during checkout. In further embodiments, the vendor is able to reach out to the user directly through a vendor computing device.
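
The sketch below illustrates the two key decisions of this subprocess under stated assumptions: selecting the relevant vendor profile by comparing the current position to each geofenced area (here simplified to a circular geofence), and virtually stocking any dynamic coordinate where the camera scan found no real-world product. All class, field, and function names are hypothetical.

```python
# Hypothetical geofence check and virtual stocking (sketch, not the claimed method).
import math
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Set, Tuple

Coord = Tuple[float, float]


@dataclass
class InventoryItem:
    name: str
    dynamic_coordinate: Coord  # shelf location in the store layout (a dynamic environmental feature)


@dataclass
class VendorProfile:
    name: str
    geofence_center: Coord
    geofence_radius_m: float
    inventory: List[InventoryItem] = field(default_factory=list)

    def contains(self, position: Coord) -> bool:
        """True when the user's current position falls inside the geofenced area."""
        return math.dist(position, self.geofence_center) <= self.geofence_radius_m


def find_relevant_vendor(position: Coord, vendors: List[VendorProfile]) -> Optional[VendorProfile]:
    """Compare the current position to each vendor's geofenced area."""
    return next((v for v in vendors if v.contains(position)), None)


def virtually_stock(vendor: VendorProfile, real_products_at: Set[Coord]) -> Dict[Coord, InventoryItem]:
    """Return the AR inventory items to render wherever no real-world product was detected."""
    return {item.dynamic_coordinate: item
            for item in vendor.inventory
            if item.dynamic_coordinate not in real_products_at}
```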

Referring to FIG. 6, FIG. 13, and FIG. 14, it is an aim of the present invention to provide interactive signage and informational displays. To facilitate this functionality, the system used to execute the method of the present invention provides at least one virtual billboard managed by the remote server. The virtual billboard is a dynamic environmental feature that is rendered as a placard in the AR environment. Additionally, the virtual billboard includes a backboard, at least one information display portion, and at least one user input portion. The backboard forms a static plane onto which content is superimposed. Additionally, the backboard is subdivided into a plurality of content-specific zones such that the user is able to access different forms of AR content by viewing or interacting with the virtual billboard. The information display portion is a section of the virtual billboard that is dedicated to outputting AR content without responding to AR execution processes. The user input portion is a section of the virtual billboard that is dedicated to receiving user input. Supplemental embodiments include virtual billboards where each of the plurality of content-specific zones is dedicated to a different AR execution process or AR service process. A subprocess for generating the virtual billboard begins by superimposing the information display portion onto the backboard with the remote server. The subprocess continues by superimposing the user input portion onto the backboard at a position offset from the information display portion, with the remote server. Thus, the subprocess formats the virtual billboard to both provide information to the user and receive input from the user. Preferably, each of the plurality of vendor profiles includes informational content and interactive content. The subprocess continues by integrating informational content into the information display portion and integrating interactive content into the user input portion with the remote server. In some embodiments, the subprocess references behavioral data and user preferences found in the user account to determine which informational content and which interactive content should be provided to the user by the virtual billboard. The subprocess concludes by integrating the virtual billboard into the secondary coordinate space as a dynamic environmental feature with the remote server. The virtual billboard can then be rendered when the AR environment is generated.
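
As one illustrative data model (an assumption, not the disclosed implementation), a virtual billboard can be represented as a backboard plane subdivided into zones, with information display portions carrying content and user input portions carrying the interaction to trigger.

```python
# Hypothetical virtual billboard structure (sketch only).
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Rect = Tuple[float, float, float, float]  # x, y, width, height on the backboard plane


@dataclass
class DisplayPortion:
    region: Rect
    informational_content: str            # e.g. advertisement text supplied by a vendor profile


@dataclass
class InputPortion:
    region: Rect
    on_interact: Callable[[], None]       # AR execution process triggered by the user


@dataclass
class VirtualBillboard:
    backboard_size: Tuple[float, float]   # static plane onto which portions are superimposed
    display_portions: List[DisplayPortion] = field(default_factory=list)
    input_portions: List[InputPortion] = field(default_factory=list)  # positioned offset from displays
```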

In some embodiments, the virtual billboard is used as a menu that is projected into the AR environment in response to the user selecting the desired AR execution process. In further embodiments, the user is able to open interactive applications within the AR environment. For example, the desired AR execution process may direct the method of the present invention to generate a virtual billboard that is a movie screen which is projected into the AR environment and follows the user around as they navigate the real world. Similarly, a plurality of virtual billboards can be used to enable the user to access multiple different applications, simultaneously. The desired AR execution process enables the user to select which of the plurality of virtual billboards should be opened or remain hidden.

Referring to FIG. 7, as described above, the method of the present invention is designed to dynamically update the AR environment based on user position and camera orientation. A subprocess to facilitate this functionality begins by continuously capturing new environmental data with the user computing device. Preferably, this new environmental data comprises real-time video data and location data. The subprocess then proceeds to incorporate the new environmental data into the primary coordinate space, update the AR environment by mapping the secondary coordinate space onto the primary coordinate space with the remote server, and output the AR environment with the user computing device. Consequently, the AR environment is constantly refreshed to accommodate changes in user position and camera orientation.
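
A minimal sketch of this refresh loop is shown below; the device and server helpers and the refresh period are assumptions introduced only to make the sequence of steps concrete.

```python
# Hypothetical continuous refresh loop: capture, incorporate, re-map, output (sketch only).
import time


def refresh_loop(device, server, period_s: float = 0.1) -> None:
    while device.session_active():                       # assumed helper
        frame = device.capture_frame()                    # new video data
        position = device.current_position()              # new location data
        server.update_primary_space(frame, position)      # incorporate into primary coordinate space
        ar_environment = server.map_secondary_onto_primary()
        device.render(ar_environment)                     # output the AR environment
        time.sleep(period_s)
```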

Referring to FIG. 8, the method of the present invention is designed with a plurality of subprocesses that assist the user when performing various tasks. The method of the present invention executes one such subprocess when the desired AR service process is a navigation process. This subprocess begins by prompting the user to enter at least one desired destination. The subprocess continues by generating a route from the user's current location to the desired destination with the remote server. By default, the present invention is designed to identify the most efficient route from the user's current location to the desired destination. However, some embodiments of the present invention determine the best route to the desired destination by evaluating secondary conditions including, but not limited to, crowd size, time of day, and the operational capacity of systems such as escalators and elevators. The subprocess continues by integrating an AR overlay of the route into the AR environment with the remote server. Thus, the user is presented with a visual cue to direct them toward the desired destination. The subprocess concludes by navigating to the desired destination with the user computing device. Preferably, the user is provided with turn-by-turn directions.
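
For illustration only, route generation under such secondary conditions can be sketched as a lowest-cost search over a walkway graph in which edge costs already reflect crowding or closures; the graph model, weights, and use of Dijkstra's algorithm are assumptions, not the patent's stated implementation.

```python
# Hypothetical route generation over a weighted indoor walkway graph (sketch only).
import heapq
from typing import Dict, List, Tuple

Graph = Dict[str, List[Tuple[str, float]]]  # node -> [(neighbour, cost), ...]


def best_route(graph: Graph, start: str, destination: str) -> List[str]:
    """Lowest-cost path; edge costs can encode crowd size or a closed escalator."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, edge_cost in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbour, path + [neighbour]))
    return []  # no route found


# Example: the closed escalator is modelled by inflating its edge cost.
mall = {"entrance": [("atrium", 1.0)],
        "atrium": [("escalator", 5.0), ("lift", 2.0)],
        "escalator": [("store", 1.0)],
        "lift": [("store", 1.5)],
        "store": []}
print(best_route(mall, "entrance", "store"))  # ['entrance', 'atrium', 'lift', 'store']
```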

Referring to FIG. 9, another user assistance subprocess is executed when the desired AR service process is a digital assistant process. To facilitate this, the system for executing the method of the present invention provides at least one digital assistant managed by the remote server. Additionally, the digital assistant is associated to a plurality of assistant commands. The digital assistant is an artificial intelligence (AI) based program that is designed to respond to the user's queries and provide relevant information when navigating to the desired destination. The subprocess begins by integrating the digital assistant into the AR environment with the remote server, and continues by prompting the user to enter at least one desired assistant command from the plurality of assistant commands with the user computing device. The desired assistant command enables the user to direct the assistant to perform a desired action. For example, the user may enter the desired assistant command to direct the digital assistant to set an appointment in their calendar, to search for a specific item's availability, or to tell them the weather.
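
One simple way to picture routing a desired assistant command to its handler is a lookup over the plurality of assistant commands, as in the sketch below; the command names and handler functions are hypothetical examples, not the disclosed assistant.

```python
# Hypothetical assistant command dispatch (sketch only).
from typing import Callable, Dict


def set_appointment(args: str) -> str:
    return f"Appointment noted: {args}"


def check_availability(args: str) -> str:
    return f"Checking availability of {args}..."


ASSISTANT_COMMANDS: Dict[str, Callable[[str], str]] = {
    "set appointment": set_appointment,
    "find item": check_availability,
}


def handle_command(command: str, args: str) -> str:
    """Route the desired assistant command to the matching handler, if any."""
    handler = ASSISTANT_COMMANDS.get(command)
    return handler(args) if handler else "Sorry, I don't know that command."


print(handle_command("find item", "size 42 running shoes"))
```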

The method of the present invention provides an interface for third-party integration. This enables third-party vendors and systems to create content that can be presented to the user through the method of the present invention. For example, when the user creates their user account, it is possible to link the user account to a third-party social media account so that the user can live stream their AR experiences to friends. Another important feature is the present invention's ability to overlay any desired AR coordinate space onto the primary coordinate space. This functionality enables the method of the present invention to be employed by video game developers, travel agencies, and educators to provide immersive AR environments to the user. In some embodiments, a plurality of users are able to experience a shared AR environment, each through a personal user computing device.

With reference to FIG. 10, a system consistent with an embodiment of the disclosure may include a computing device or cloud service, such as computing device 1100. In a basic configuration, computing device 1100 may include at least one processing unit 1102 and a system memory 1104. Depending on the configuration and type of computing device, system memory 1104 may comprise, but is not limited to, volatile (e.g. random-access memory (RAM)), non-volatile (e.g. read-only memory (ROM)), flash memory, or any combination. System memory 1104 may include operating system 1105, one or more programming modules 1106, and may include program data 1107. Operating system 1105, for example, may be suitable for controlling computing device 1100's operation. In one embodiment, programming modules 1106 may include an image-processing module and a machine-learning module. Furthermore, embodiments of the disclosure may be practiced in conjunction with a graphics library, other operating systems, or any other application program and are not limited to any particular application or system. This basic configuration is illustrated in FIG. 10 by those components within a dashed line 1108.

Computing device 1100 may have additional features or functionality. For example, computing device 1100 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in FIG. 10 by a removable storage 1109 and a non-removable storage 1110. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer-readable instructions, data structures, program modules, or other data. System memory 1104, removable storage 1109, and non-removable storage 1110 are all computer storage media examples (i.e., memory storage.) Computer storage media may include, but is not limited to, RAM, ROM, electrically erasable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store information and which can be accessed by computing device 1100. Any such computer storage media may be part of device 1100. Computing device 1100 may also have input device(s) 1112 such as a keyboard, a mouse, a pen, a sound input device, a touch input device, a location sensor, a camera, a biometric sensor, etc. Output device(s) 1114 such as a display, speakers, a printer, etc. may also be included. The aforementioned devices are examples and others may be used.

Computing device 1100 may also contain a communication connection 1116 that may allow device 1100 to communicate with other computing devices 1118, such as over a network in a distributed computing environment, for example, an intranet or the Internet. Communication connection 1116 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” may describe a signal that has one or more characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media may include wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, radio frequency (RF), infrared, and other wireless media. The term computer readable media as used herein may include both storage media and communication media.

As stated above, a number of program modules and data files may be stored in system memory 1104, including operating system 1105. While executing on processing unit 1102, programming modules 1106 (e.g., application 1120) may perform processes including, for example, one or more stages of methods, algorithms, systems, applications, servers, databases as described above. The aforementioned process is an example, and processing unit 1102 may perform other processes. Other programming modules that may be used in accordance with embodiments of the present disclosure may include machine learning applications.

Generally, consistent with embodiments of the disclosure, program modules may include routines, programs, components, data structures, and other types of structures that may perform particular tasks or that may implement particular abstract data types. Moreover, embodiments of the disclosure may be practiced with other computer system configurations, including hand-held devices, general purpose graphics processor-based systems, multiprocessor systems, microprocessor-based or programmable consumer electronics, application specific integrated circuit-based electronics, minicomputers, mainframe computers, and the like. Embodiments of the disclosure may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote memory storage devices.

Furthermore, embodiments of the disclosure may be practiced in an electrical circuit comprising discrete electronic elements, packaged or integrated electronic chips containing logic gates, a circuit utilizing a microprocessor, or on a single chip containing electronic elements or microprocessors. Embodiments of the disclosure may also be practiced using other technologies capable of performing logical operations such as, for example, AND, OR, and NOT, including but not limited to mechanical, optical, fluidic, and quantum technologies. In addition, embodiments of the disclosure may be practiced within a general-purpose computer or in any other circuits or systems.

Embodiments of the disclosure, for example, may be implemented as a computer process (method), a computing system, or as an article of manufacture, such as a computer program product or computer readable media. The computer program product may be a computer storage media readable by a computer system and encoding a computer program of instructions for executing a computer process. The computer program product may also be a propagated signal on a carrier readable by a computing system and encoding a computer program of instructions for executing a computer process. Accordingly, the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.). In other words, embodiments of the present disclosure may take the form of a computer program product on a computer-usable or computer-readable storage medium having computer-usable or computer-readable program code embodied in the medium for use by or in connection with an instruction execution system. A computer-usable or computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.

The computer-usable or computer-readable medium may be, for example but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. As more specific examples (a non-exhaustive list), the computer-readable medium may include the following: an electrical connection having one or more wires, a portable computer diskette, a random-access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, and a portable compact disc read-only memory (CD-ROM). Note that the computer-usable or computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.

Embodiments of the present disclosure, for example, are described above with reference to block diagrams and/or operational illustrations of methods, systems, and computer program products according to embodiments of the disclosure. The functions/acts noted in the blocks may occur out of the order as shown in any flowchart. For example, two blocks shown in succession may in fact be executed substantially concurrently or the blocks may sometimes be executed in the reverse order, depending upon the functionality/acts involved.

While certain embodiments of the disclosure have been described, other embodiments may exist. Furthermore, although embodiments of the present disclosure have been described as being associated with data stored in memory and other storage mediums, data can also be stored on or read from other types of computer-readable media, such as secondary storage devices, like hard disks, solid state storage (e.g., USB drive), or a CD-ROM, a carrier wave from the Internet, or other forms of RAM or ROM. Further, the disclosed methods’ stages may be modified in any manner, including by reordering stages and/or inserting or deleting stages, without departing from the disclosure.

Although the invention has been explained in relation to its preferred embodiment, it is to be understood that many other possible modifications and variations can be made without departing from the spirit and scope of the invention as hereinafter claimed.