

Title:
3D VIDEO GENERATION FOR SHOWING SHORTEST PATH TO DESTINATION
Document Type and Number:
WIPO Patent Application WO/2021/090219
Kind Code:
A1
Abstract:
A computer-implemented method for indoor navigation. A request is received from a user device for an indoor direction from a first location to a second location. The request includes location information of the user device, and the user device is free of navigation information from the first location to the second location. One or more structures covered by the first location and the second location are determined based on the location information. Based on that determination, a map of the one or more structures is retrieved from a data storage. A 3D indoor navigation video is generated showing a direction from the first location to the second location, and the 3D indoor navigation video is transmitted to the user device.

Inventors:
CHAN WISDOM (CN)
LUK CHUEN KIT (CN)
Application Number:
PCT/IB2020/060398
Publication Date:
May 14, 2021
Filing Date:
November 05, 2020
Assignee:
CHAIN TECH DEVELOPMENT CO LTD (CN)
International Classes:
G06F16/29; G01C21/00; G06F16/487
Foreign References:
CN107631726A (2018-01-26)
CN108363086A (2018-08-03)
CN108573293A (2018-09-25)
US20150185022A1 (2015-07-02)
CN101750072A (2010-06-23)
CN1924524A (2007-03-07)
US20020167408A1 (2002-11-14)
Claims:
CLAIMS

What is claimed is:

1. A computer-implemented method for indoor navigation comprising: receiving a request from a user device for an indoor direction from a first location to a second location, the request comprising location information of the user device, said user device being free from navigation information from the first location to the second location on the user device; determining one or more structures covered by the first location and the second location based on the location information; based on the determining, retrieving a map of the one or more structures from a data storage; generating a 3D indoor navigation video showing the indoor direction from the first location to the second location; and transmitting the 3D indoor navigation video to the user device.

2. The computer-implemented method of claim 1, wherein the 3D indoor navigation video comprises an animated 3D indoor navigation video.

3. The computer-implemented method of claim 1, wherein the location information comprises one or more of the following: global positioning system (GPS) data, data from BLUETOOTH radio transmitters, and data from WI-FI modules.

4. The computer-implemented method of claim 1, wherein the one or more structures comprise a floor plan of an airport.

5. The computer-implemented method of claim 1, wherein the one or more structures comprise a floor plan of a shopping mall.

6. The computer-implemented method of claim 1, wherein the user device comprises a smartphone.

7. The computer-implemented method of claim 1, wherein receiving the request from the user device comprises receiving the request via a WI-FI connection or a BLUETOOTH connection.

8. A computer-implemented system for indoor navigation comprising: a distributed data storage for storing one or more maps of indoor structures; a communication network coupled to the distributed data storage and a cloud server; wherein the cloud server is configured to access the one or more maps stored in the distributed data storage and is configured to process computer-executable instructions, said computer-executable instructions comprising: receiving a request from a user device for an indoor direction from a first location to a second location, the request comprising location information of the user device, said user device being free from navigation information from the first location to the second location on the user device; determining one or more structures covered by the first location and the second location based on the location information; based on the determining, retrieving a map of the one or more structures from the distributed data storage; generating a 3D indoor navigation video showing the indoor direction from the first location to the second location; and transmitting the 3D indoor navigation video to the user device.

9. The computer-implemented system of claim 8, wherein the 3D indoor navigation video comprises an animated 3D indoor navigation video.

10. The computer-implemented system of claim 8, wherein the location information comprises one or more of the following: global positioning system (GPS) data, data from BLUETOOTH radio transmitters, and data from WI-FI modules.

11. The computer-implemented system of claim 8, wherein the one or more structures comprise structures of an airport.

12. The computer-implemented system of claim 8, wherein the one or more structures comprise structures of a shopping mall.

13. The computer-implemented system of claim 8, wherein receiving the request from the user device comprises receiving the request via a WI-FI connection or a BLUETOOTH connection.

14. A tangible non-transitory computer-readable medium having stored thereon computer-executable instructions for indoor navigation, said computer-executable instructions comprising: receiving a request from a user device for an indoor direction from a first location to a second location, the request comprising location information of the user device, said user device being free from navigation information from the first location to the second location on the user device; determining one or more structures covered by the first location and the second location based on the location information; based on the determining, retrieving a map of the one or more structures from a data storage; generating a 3D indoor navigation video showing the indoor direction from the first location to the second location; and transmitting the 3D indoor navigation video to the user device.

15. The tangible non-transitory computer-readable medium of claim 14, wherein the 3D indoor navigation video comprises an animated 3D indoor navigation video.

16. The tangible non-transitory computer-readable medium of claim 14, wherein the location information comprises one or more of the following: global positioning system (GPS) data, data from BLUETOOTH radio transmitters, and data from WI-FI modules.

17. The tangible non-transitory computer-readable medium of claim 14, wherein the one or more structures comprise structures of an airport.

18. The tangible non-transitory computer-readable medium of claim 14, wherein the one or more structures comprise structures of a shopping mall.

19. The tangible non-transitory computer-readable medium of claim 14, wherein the user device comprises a smartphone.

20. The tangible non-transitory computer-readable medium of claim 14, wherein receiving the request from the user device comprises receiving the request via a WI-FI connection or a BLUETOOTH connection.

Description:
3D VIDEO GENERATION FOR SHOWING SHORTEST PATH TO DESTINATION

Technical Field

[0001] Embodiments discussed herein generally relate to providing a navigation video.

Background

[0002] Individuals with smartphones are accustomed to having cellular data from their cellular network provide position information. For example, a smartphone is capable of using one or more of its hardware elements, coupled with the cellular network, to determine the position of the handset. These hardware elements include, but are not limited to, a WI-FI® module, a BLUETOOTH® module, cellular network transceivers, a Near Field Communication (NFC) module, and a global positioning system (GPS) module. The smartphone may provide map software (or the user may download some) to complete the position provisioning.

[0003] However, when individuals are within an indoor space trying to go from point A to point B, especially when they are new to the indoor space, many of the positioning capabilities described above are constrained by the physical barriers of building structures and/or radio wave interference. As such, many users experience absent or delayed responses from the map software, which may further lead to frustration.

[0004] To alleviate these poor or delayed responses, many existing approaches require users to pre-download the relevant maps before arriving at the indoor space. For example, suppose a user is about to go shopping at a shopping mall; the user may download the floor plan of the mall in advance. In another example, suppose a user is flying into an airport; the user may likewise download the floor plan of the airport before arriving.

[0005] This approach, however, has one main drawback: users typically forget to download the map or floor plan in advance. Sometimes the floor plan is not available or not up to date. In addition, the airport the user is visiting may be new, or may have added a new wing or terminal that is not yet available for download. As such, the approach is neither reliable nor convenient.

[0006] In addition, even if the floor plan or map is downloaded, the user may still experience slow responses or poor reception of signals from the cellular network. While more indoor spaces are making WI-FI available to users, the speed is typically slow. Moreover, even with a WI-FI connection, existing path determination only supports 2D maps. Some newer implementations may enhance the experience by having users wear augmented reality (AR) glasses or goggles to experience the directions. However, such AR navigation still provides a poor user experience when viewing the path through the camera.

[0007] Therefore, embodiments attempt to create a technical solution that addresses the challenges above.

Summary

[0008] Embodiments create a technical solution to the above challenges by building a comprehensive 3D navigation video for indoor navigation when the GPS signal is poor or unavailable. Moreover, aspects of the invention alleviate issues when the mobile device the user is carrying does not have a preloaded map of the indoor space he or she is traveling through.

Brief description of the drawings

[0009] Persons of ordinary skill in the art may appreciate that elements in the figures are illustrated for simplicity and clarity, so not all connections and options have been shown. For example, common but well-understood elements that are useful or necessary in a commercially feasible embodiment may often not be depicted in order to facilitate a less obstructed view of these various embodiments of the present disclosure. It may be further appreciated that certain actions and/or steps may be described or depicted in a particular order of occurrence while those skilled in the art may understand that such specificity with respect to sequence is not actually required. It may also be understood that the terms and expressions used herein may be defined with respect to their corresponding respective areas of inquiry and study except where specific meanings have otherwise been set forth herein.

[0010] FIG. 1 is a diagram illustrating a system according to one embodiment.

[0011] FIGS. 2A to 2D are graphical user interfaces (GUIs) for an application installed on a user device according to one embodiment.

[0012] FIG. 3 is a flow diagram illustrating a computer-implemented method for generating a 3D video for an indoor navigation according to one embodiment.

[0013] FIG. 4 is a diagram illustrating a tangible non-transitory computer- readable medium according to one embodiment.

[0014] FIG. 5 is a diagram illustrating a portable computing device according to one embodiment.

[0015] FIG. 6 is a diagram illustrating a computing device according to one embodiment.

Detailed Description

[0016] Embodiments may now be described more fully with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments which may be practiced. These illustrations and exemplary embodiments may be presented with the understanding that the present disclosure is an exemplification of the principles of one or more embodiments and is not intended to limit any one of the embodiments illustrated. Embodiments may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure may be thorough and complete, and may fully convey the scope of embodiments to those skilled in the art. Among other things, the present invention may be embodied as methods, systems, computer readable media, apparatuses, or devices. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.

[0017] Embodiments may create a system that generates a 3D indoor navigation video for users who are in an indoor space (e.g., airports, shopping malls, museums, etc.) where some of the existing network and navigation approaches fail to provide the needed navigation. Moreover, some existing approaches require users to pre-download the map before arriving at or visiting the location.

[0018] Referring now to FIG. 1, a system 100 may include a distributed or cloud server 102 for generating a 3D indoor navigation video according to one embodiment. In one embodiment, the server 102 may be a cluster of server computing devices (e.g., device 841 in FIG. 6) that provides services to a user device 104 of a user 106. In one embodiment, the user device 104 may be a smartphone, a smart watch, a pair of smart glasses, or another device that has at least a portion of the components depicted in FIG. 5 below. In particular, the user device 104 may include a wireless transceiver 108 for transmitting wireless signals to other devices. For example, the wireless transceiver 108 may include a WI-FI module, a BLUETOOTH module, an NFC module, or the like.

[0019] According to another embodiment, the server 102 may further be coupled to a database or data storage 110, and the data storage 110 may also be deployed in a distributed manner. In one embodiment, the server 102 and the user device 104 may be connected via a network 112 either with a wired connection or a wireless connection. In one embodiment, the database 110 may include location fingerprints.

[0020] In one aspect, the user 106 may visit an indoor space 114 and may wish to go from a point A 116 to a point B 118 inside the indoor space 114. In another aspect, the user 106 may not have travelled to the indoor space 114 before and therefore may be unfamiliar with the layout, configuration, etc., of the indoor space 114. For example, the indoor space 114 may be an international airport with one or more terminals. The user 106 may be visiting the airport for the first time and, due to a flight schedule, may be staying at the airport for a few hours. Instead of asking for directions or searching a floor plan at one of the kiosks scattered around the airport, the user 106 may wish to use the user device 104 to navigate from point A 116 to point B 118. On the other hand, the airport staff may be limited in their ability to assist the user 106 due to the time of day. At the same time, the user device 104 may lack a preloaded or pre-downloaded app or map/floor plan of the indoor space 114. As such, the user 106 may be searching for a floor plan or a map of the indoor space 114.

[0021] In a further aspect, the user 106 may also use the free WI-FI connection provided by the indoor space 114. In such an instance, instead of downloading an app for the airport, which requires additional time and storage on the user device 104, aspects of the invention provide a more convenient and ad-hoc provisioning of the needed information, such as a 3D indoor navigation video.

[0022] According to one embodiment, the server 102 may provide a portal (e.g., the portal shown in FIG. 2A) to receive a request from the user 106. For example, referring now to FIG. 2A, a portal 200 enables the user 106 to navigate within the indoor space 114. The portal 200 may, for example, include graphical user interface elements that enable the user 106 to navigate the portal 200. For instance, the portal 200 may include a welcome message 202 and a button 204 enabling the user 106 to search for a point of interest or stores in the indoor space 114. The portal 200 may also include a button 206 for providing a navigation video, such as the 3D indoor navigation video. The portal 200, in another example, may include a button 208 for other information about the indoor space 114. The portal 200 may further include additional features or buttons, such as a button 212 for the user 106 to connect the user device 104 to the server 102 via WI-FI; a button 214 for the user 106 to connect the user device 104 to the server 102 via BLUETOOTH; and a button 216 for the user 106 to turn on BLUETOOTH or WI-FI. These buttons may not be selectable if the user device 104 is already connected to the WI-FI connection provided by the indoor space 114 or already has the BLUETOOTH module turned on. As the user 106 wishes to navigate from point A 116 to point B 118, the user 106 may select the button 206 as a first step of requesting the 3D indoor navigation video.

[0023] Referring now to FIG. 2B, in response to the selection of the button 206, the portal 200 may provide another GUI 210 before the 3D indoor navigation video is generated. In one embodiment, the GUI 210 may provide a box 220 for the user 106 to input a source location (e.g., a first location) to initiate the navigation. In one aspect, the user 106 may select an option 224 “DETERMINE BY BEACON” or an option 226 “USE PHOTO”. In one example, the option 224 enables the user 106 to use the various wireless signals generated by the user device 104 to communicate with the server 102 to indicate the source location. For example, referring back to FIG. 1, the indoor space 114 may include one or more beacons 120 scattered around the indoor space 114 for sensing and communicating with user devices, such as the user device 104. The beacons 120 may be wireless communication devices capable of using the BLUETOOTH specification to determine proximity between the user device 104 and the beacon 120 (e.g., based on BLUETOOTH signal strength). Once the proximity information is determined, the beacon 120 may then trigger its WI-FI module to transmit data to the user device 104 due to the higher bandwidth allowed under the WI-FI specification.

[0024] As such, if the user 106 selects the option 224, the user device 104 may communicate with nearby beacons 120 to generate source location information for the user device 104. For example, the user device 104 may record the signal strength from each of the beacons around the user 106 and provide the beacon information to the system 100 so that the system 100 may estimate a position of the user 106. In one example, the system 100 may employ a received signal strength indication (RSSI) method, which measures the power of the radio signal received from each beacon and estimates the position by combining the collected information using a triangulation model. This approach may require the whole map of the indoor space and usually gives a poor estimate due to interference.
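By way of illustration only, the following minimal Python sketch shows one way such an RSSI estimate could be computed: received powers are converted to distances with a log-distance path-loss model, and the position is solved by linear least squares. The beacon coordinates, RSSI values, and path-loss constants are hypothetical and are not taken from the application.

```python
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_n=2.5):
    """Log-distance path-loss model: estimated distance in metres.

    tx_power_dbm is the calibrated RSSI at 1 m and path_loss_n models the
    environment (2.0 in free space, higher indoors); both are assumptions.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_n))

def trilaterate(beacons, rssi_readings):
    """Least-squares 2D position estimate from three or more beacon readings.

    beacons: dict beacon_id -> (x, y) in metres, taken from the indoor map.
    rssi_readings: dict beacon_id -> RSSI in dBm measured by the device.
    """
    ids = [b for b in rssi_readings if b in beacons]
    pts = np.array([beacons[b] for b in ids], dtype=float)
    dists = np.array([rssi_to_distance(rssi_readings[b]) for b in ids])
    # Linearize ||p - p_i||^2 = d_i^2 by subtracting the last beacon's equation.
    ref, d_ref = pts[-1], dists[-1]
    A = 2.0 * (pts[:-1] - ref)
    b = (d_ref ** 2 - dists[:-1] ** 2
         + np.sum(pts[:-1] ** 2, axis=1) - np.sum(ref ** 2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return tuple(est)

beacons = {"b1": (0.0, 0.0), "b2": (20.0, 0.0), "b3": (0.0, 15.0)}
readings = {"b1": -62.0, "b2": -71.0, "b3": -68.0}
print(trilaterate(beacons, readings))  # rough (x, y) estimate in metres
```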

[0025] In another embodiment, the system 100 may estimate the position by location fingerprinting, which records the signal information at different positions of the indoor space and stores the information in a database as location fingerprints. Then, whenever a user's position is to be estimated, the system may search the database for the closest match to the received signal. Depending on the method employed by the indoor space 114 (e.g., RSSI or location fingerprinting), the system 100 may determine the position under this option 224.
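Again purely as an illustration, a fingerprint lookup of the kind described above might take the following shape; the survey data and the Euclidean nearest-match rule are assumptions, not details from the application.

```python
import math

# Hypothetical survey data: surveyed position -> {beacon_id: mean RSSI in dBm}.
FINGERPRINTS = {
    ("hall", 2.0, 3.5): {"b1": -60, "b2": -75, "b3": -70},
    ("hall", 8.0, 3.5): {"b1": -72, "b2": -63, "b3": -74},
    ("gate_a", 15.0, 9.0): {"b1": -80, "b2": -66, "b3": -59},
}

def match_fingerprint(observed):
    """Return the surveyed position whose stored RSSI vector is closest to the
    observed one (Euclidean distance over beacons present in both vectors)."""
    best_pos, best_score = None, math.inf
    for pos, stored in FINGERPRINTS.items():
        shared = observed.keys() & stored.keys()
        if not shared:
            continue
        score = math.sqrt(sum((observed[b] - stored[b]) ** 2 for b in shared))
        if score < best_score:
            best_pos, best_score = pos, score
    return best_pos

print(match_fingerprint({"b1": -61, "b2": -74, "b3": -71}))  # ("hall", 2.0, 3.5)
```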

[0026] If the user 106 selects the option 226, the user 106 may provide one or more pictures of the user's surroundings (e.g., via a camera of the user device 104) to the server 102 such that the server 102 may determine the source location information of the user device 104. For example, the server 102 may analyze the photo(s) from the user device 104 by scanning for any store names or other identifiable information in the photo(s).

[0027] In a further embodiment, once the user 106 completes inputting the source location information, the user 106 may provide destination information (e.g., second location information) via a box 222. For example, the user 106 may, via the user device 104, enter the destination information as a photo (via an option 228), an audio description (via an option 230), or a written description in text (via an option 232). In one example, the user 106 may provide a picture of a store in the indoor space 114 as the destination information. In another example, the user 106 may provide a picture of a post office that the user 106 wishes to visit in the indoor space 114. In yet another example, the user 106 may speak into a microphone of the user device 104 to indicate the destination information after selecting the option 230.

Lastly, the user 106 may type in the name of the destination after selecting the option 232. Once the source information and the destination information have been determined, the user 106 may select a button 234 “SEND REQUEST” to request the server 102 to generate the 3D indoor navigation video or a button 236 “CANCEL” to cancel the request and exit the GUI 210.

[0028] In one embodiment, the user device 104 may display or provide one or more notifications upon receiving the source information and the destination information from the user 106. For example, the user device 104 may provide a dialog box (not shown) to confirm the received voice input after the option 230 has been selected. In another embodiment, the received voice input may be converted to text in the dialog box.

[0029] In response to the button 234 being selected, a request 122 from the user device 104 for a direction from point A 116 to point B 118 is sent to the server 102. Returning now to FIG. 1, once the server 102 receives the request 122 via the network 112, the server 102 may call the data storage 110 to retrieve a digital map or floorplan information of the space 114. Once retrieved, the server 102 may first review the source information and the destination information from the request 122. For example, the source information and the destination information are compared against the map of the space 114 so that the server 102 may orient or position the source and the destination on the map. In another example, suppose the source information is received from beacons after the user 106 chose the option 224. With such source information from the beacons, the source information may include at least one or more of the following: location(s) of the beacon(s) 120, the signal strength received at each of the beacons, distance(s) of the user device 104 relative to the location(s) of the beacon(s) calculated from the signal strength, or the like.

In a further example, suppose the source information is received as pictures from the user device 104. The source information may then include at least one or more of the following: photos or pictures of the surroundings and metadata of those photos or pictures, which may include general location information, the time the photo or picture was taken, or the like.
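For illustration, a request 122 carrying either kind of source information might be serialized roughly as follows; all field names here are hypothetical, as the application does not define a wire format.

```python
import json
import time

# Hypothetical wire format for the request 122; every field name is illustrative.
request_via_beacons = {
    "type": "3d_nav_request",
    "source": {
        "mode": "beacon",
        "readings": [
            {"beacon_id": "b1", "rssi_dbm": -62},
            {"beacon_id": "b2", "rssi_dbm": -71},
        ],
    },
    "destination": {"mode": "text", "query": "post office"},
    "timestamp": time.time(),
}

request_via_photo = {
    "type": "3d_nav_request",
    "source": {
        "mode": "photo",
        "images": ["surroundings_01.jpg"],  # image payload uploaded separately
        "metadata": {"taken_at": "2020-11-05T10:12:00Z"},
    },
    "destination": {"mode": "photo", "images": ["storefront.jpg"]},
}

print(json.dumps(request_via_beacons, indent=2))
```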

[0030] In one aspect, the server 102 may execute an optical character recognition (OCR) function or routine to identify characters on storefronts, signage, exit signs, etc., in the pictures or photos. The characters may then be compared with a list of tenants and the corresponding tenant spaces at the airport. Alternatively, the server 102 may scan the photo for specific symbols, such as exit signs, bathroom graphics or icons, and other symbols that the server 102 may use to identify the location. For example, based on the combination of identified symbols and characters, the server 102 may narrow down with a high degree of certainty where the source location is. Of course, if the user is in a stairwell and the photos given to the server 102 do not identify the source location with a high degree of certainty, the server 102 may respond to the request by asking the user 106, with guidance, to take additional pictures that further describe the surroundings.
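As a sketch of this OCR-and-match step, the following assumes an off-the-shelf OCR library (pytesseract is used here only as an example) and a hypothetical tenant directory; neither is specified by the application.

```python
# Sketch of the OCR-and-match step. pytesseract stands in for whatever OCR
# routine the server runs; the tenant directory and file path are hypothetical.
import pytesseract
from PIL import Image

TENANT_DIRECTORY = {
    "STARBOOKS COFFEE": ("terminal_1", "unit_T1-204"),
    "DUTY FREE": ("terminal_1", "unit_T1-118"),
    "POST OFFICE": ("terminal_2", "unit_T2-031"),
}

def locate_from_photo(photo_path):
    """OCR the photo and return the tenant space whose name appears in it."""
    text = pytesseract.image_to_string(Image.open(photo_path)).upper()
    matches = [space for name, space in TENANT_DIRECTORY.items() if name in text]
    if len(matches) == 1:
        return matches[0]
    return None  # ambiguous or no match: ask the user for more photos

print(locate_from_photo("surroundings_01.jpg"))
```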

[0031] Moreover, the user 106 may annotate the photo (after the photo is taken, for example) with additional directional information, such as north, south, east, and west.

[0032] As such, the server 102 may analyze or evaluate the source information to determine a location or position on the map or floorplan of the space 114.

[0033] Similarly, based on the destination information received, the server 102 may also determine a location or position on the map or floorplan of the space 114. For example, in response to the selection of the button 228, the server 102 may analyze the photo by identifying objects in the photo to compare to the map or floorplan of the space 114. For example, the server 102 may execute an optical character recognition (OCR) function or routine to identify characters on storefronts, signage, exit signs, etc. The characters may then be compared with a list of tenants and the corresponding tenant spaces at the airport. Alternatively, the server 102 may scan the photo for specific symbols, such as exit signs, bathroom graphics or icons, and other symbols that the server 102 may use to identify the location. For example, based on the combination of identified symbols and characters, the server 102 may narrow down with a high degree of certainty where the location of the destination is.

[0034] Once the source and the destination have been determined, the server 102 may generate a 3D video. For example, the server 102 may call the data storage 110 to retrieve a 1-to-1 scale digital 3D model of the space 114. Once retrieved, one or more cameras already positioned at the source position are mirrored in the digital 3D model, and the destination position is also projected or mirrored in the digital 3D model. The server 102 may further calculate a path of navigation based on a path selection algorithm, such as Dijkstra's algorithm or a breadth-first search, to determine, for example, the shortest path from the source position to the destination position.
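A standard Dijkstra implementation over a graph of walkable floorplan waypoints illustrates the path calculation; the waypoint names and edge distances below are invented for the example.

```python
import heapq

def dijkstra(graph, source, destination):
    """Shortest path by Dijkstra's algorithm.

    graph: dict node -> list of (neighbour, distance_m) edges, built from the
    floorplan's walkable waypoints.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == destination:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Reconstruct the waypoint sequence the video flies through.
    path, node = [], destination
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path))

floorplan = {
    "gate_a": [("corridor_1", 40.0)],
    "corridor_1": [("gate_a", 40.0), ("atrium", 25.0), ("stairs", 10.0)],
    "stairs": [("corridor_1", 10.0), ("atrium", 45.0)],
    "atrium": [("corridor_1", 25.0), ("stairs", 45.0), ("post_office", 30.0)],
    "post_office": [("atrium", 30.0)],
}
print(dijkstra(floorplan, "gate_a", "post_office"))
# -> ['gate_a', 'corridor_1', 'atrium', 'post_office']
```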

[0035] In one embodiment, the server 102 may further determine the path based on additional factors or preferences, such as routing away from sections of the space 114 that may be under construction or repair, routing away from sections of the space 114 that may be confusing even though the path may be the shortest route, routing away from areas of the space 114 that may be too crowded, routing away from areas where signage may be under repair, or other factors reflected by current conditions of the space 114.

[0036] In another embodiment, the server 102 may also determine the path based on factors from selective store data. For example, the server 102 may route the path in response to commercial sponsors of the system 100. In another example, the server 102 may route the path in response to government regulations, etc.

[0037] As such, it is to be understood that other factors may be incorporated into the path calculations or determinations without departing from the spirit and scope of the embodiments. In addition, the server 102 may receive manual updates (e.g., from administrators or management at the space 114) or automated updates (e.g., flight cancellations) when generating the 3D video.

[0038] Once the server 102 has determined the path, the server 102 may generate the navigation through the model from the source position to the destination position to create a 3D video. In one example, the server 102 may create an animated figure or representation as part of the navigation. In another example, the server 102 may simplify the 3D video by showing just an arrow, such as an arrow 268 in FIG. 2D, so as to reduce the file size.

[0039] Once the 3D video is generated, the server 102 may be ready to transmit the 3D video through the network 112 for download to the user device 104.

[0040] In one example, referring now to FIG. 2C, another screenshot 240 may illustrate an initial screen before the video is downloaded to the user device 104. An indicia 242 may be provided to inform the user 106 that the video is being downloaded.

[0041] Referring now to FIG. 2D, an exemplary screenshot 244 shows a 3D video 246 displayed inside a frame 248. In one example, the frame 248 may be defined by the display of the user device 104. The user device 104 may also include one or more video controls 250, such as a time or progress bar 252, a play button 254, a pause button 256, a replay button 258, and a progress indicator 260. The controls 250 may also include a download progress indicator 262 showing how much of the video 246 has been downloaded, a beginning time indicator 264, and an end time indicator 266. In one embodiment, the end time value (e.g., 6:30) may represent an estimated amount of time for the user 106 to travel from point A to point B.

[0042] In a further embodiment, the progress indicator 260 may be dynamically adjusted or moved based on one or more sensors available on the user device 104. For example, suppose the user device 104 includes a gyroscope sensor, an accelerometer, the WI-FI transceiver, etc. As soon as the 3D video 246 has been downloaded to the user device 104, the playback of the video 246 may automatically start and coincide with the movement of the user 106 from point A toward point B. In another embodiment, the user 106 may override this feature by selecting or pressing the play button 254 to view the entire video, with the opportunity to replay or pause it by selecting the appropriate buttons in FIG. 2D.
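One plausible realization of this sensor-driven playback, sketched below, maps the fraction of the path already walked (estimated from a step count) onto the video timeline; the step-length constant and the function interface are assumptions, not details from the application.

```python
# Sketch of sensor-driven playback: distance walked, estimated from the
# accelerometer's step count, is mapped onto the video timeline.
STEP_LENGTH_M = 0.7  # assumed average stride length in metres

def playback_position(steps_taken, path_length_m, video_duration_s):
    """Video timestamp (seconds) matching the user's progress along the path."""
    walked = steps_taken * STEP_LENGTH_M
    fraction = min(walked / path_length_m, 1.0)  # clamp once the user arrives
    return fraction * video_duration_s

# 120 steps into a 300 m path with a 6:30 (390 s) video:
print(round(playback_position(120, 300.0, 390.0), 1))  # -> 109.2
```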

[0043] Referring now to FIG. 3, a flow chart illustrates a computer-implemented method according to one embodiment. At 302, a server may receive a request from a user device for an indoor direction from a first location to a second location. In one example, the request includes location information of the user device. In another example, the user device is free of navigation information (e.g., a map or a floor plan) from the first location to the second location. In one example, the server may receive pictures or beacon information as part of the request.

[0044] At 304, the server may determine one or more structures covered by the first location and the second location based on the location information. In one example, the server may determine the structures based on optical character recognition or beacon station locations. At 306, based on the determining, the server may retrieve a map of the one or more structures from a data storage. For example, the map may be a floor plan. At 308, the server may generate a 3D indoor navigation video showing the indoor direction from the first location to the second location. The server may further transmit the 3D indoor navigation video to the user device.

[0045] Referring now to FIG. 4, a diagram illustrates a tangible non-transitory computer-readable medium 400 according to one embodiment. In one embodiment, the medium 400 may include a request handling module 402 for storing computer-executable instructions for handling requests from the user 106, such as a request from a user device for an indoor direction from a first location to a second location. It is understood that the user device does not include the direction, map, or floor plan for guiding the user from the first location to the second location.

[0046] The medium 400 may further include a location information module 404 where one or more structures covered by the first location and the second location are determined based on the location information. A map retrieval module 406 may retrieve the map (or the floor plan) from a data storage. In one example, the data storage is a distributed storage such that it may be easily updated or transmitted to locations around the world.

[0047] In a further embodiment, a video generation module 408 may be included in the medium 400 for generating the 3D video guiding the user from the first location to the second location. The medium 400 may also include a data transmission module 410 for transmitting the 3D video to the user device.

[0048] FIG. 5 may be a high-level illustration of a portable computing device 801 communicating with a remote computing device 841 in FIG. 6, but the application may be stored and accessed in a variety of ways. In addition, the application may be obtained in a variety of ways, such as from an app store, from a web site, from a store Wi-Fi system, etc. There may be various versions of the application to take advantage of the benefits of different computing devices, different languages, and different API platforms.

[0049] In one embodiment, a portable computing device 801 may be a mobile device that operates using a portable power source 855 such as a battery. The portable computing device 801 may also have a display 802, which may or may not be a touch-sensitive display. More specifically, the display 802 may have a capacitance sensor, for example, that may be used to provide input data to the portable computing device 801. In other embodiments, an input pad 804 such as arrows, scroll wheels, keyboards, etc., may be used to provide inputs to the portable computing device 801. In addition, the portable computing device 801 may have a microphone 806 which may accept and store verbal data, a camera 808 to accept images, and a speaker 810 to communicate sounds.

[0050] The portable computing device 801 may be able to communicate with a computing device 841 or a plurality of computing devices 841 that make up a cloud of computing devices 841. The portable computing device 801 may be able to communicate in a variety of ways. In some embodiments, the communication may be wired, such as through an Ethernet cable, a USB cable, or an RJ6 cable. In other embodiments, the communication may be wireless, such as through Wi-Fi® (802.11 standard), BLUETOOTH, cellular communication, or near field communication devices. The communication may be direct to the computing device 841 or may be through a communication network 102 such as cellular service, through the Internet, through a private network, through BLUETOOTH, etc. FIG. 5 may be a simplified illustration of the physical elements that make up a portable computing device 801, and FIG. 6 may be a simplified illustration of the physical elements that make up a server type computing device 841.

[0051] FIG. 5 may be a sample portable computing device 801 that is physically configured to be part of the system. The portable computing device 801 may have a processor 850 that is physically configured according to computer-executable instructions. It may have a portable power supply 855, such as a battery, which may be rechargeable. It may also have a sound and video module 860 that assists in displaying video and sound and may turn off when not in use to conserve power and battery life. The portable computing device 801 may also have non-volatile memory 870 and volatile memory 865. It may have GPS capabilities 880 that may be a separate circuit or may be part of the processor 850. There also may be an input/output bus 875 that shuttles data to and from the various user input devices, such as the microphone 806, the camera 808, and other inputs such as the input pad 804, the display 802, and the speakers 810. The bus may also control communication with the networks, either through wireless or wired devices. Of course, this is just one embodiment of the portable computing device 801, and the number and types of portable computing devices 801 are limited only by the imagination.

[0052] The physical elements that make up the remote computing device 841 may be further illustrated in FIG. 6. At a high level, the computing device 841 may include digital storage such as a magnetic disk, an optical disk, flash storage, non-volatile storage, etc. Structured data may be stored in the digital storage, such as in a database. The server 841 may have a processor 1000 that is physically configured according to computer-executable instructions. It may also have a sound and video module 1005 that assists in displaying video and sound and may turn off when not in use to conserve power and battery life. The server 841 may also have volatile memory 1010 and non-volatile memory 1015.

[0053] The database 1025 may be stored in the memory 1010 or 1015 or may be separate. The database 1025 may also be part of a cloud of computing devices 841 and may be stored in a distributed manner across a plurality of computing devices 841. There also may be an input/output bus 1020 that shuttles data to and from the various user input devices, such as the microphone 806, the camera 808, the input pad 804, the display 802, and the speakers 810. The input/output bus 1020 may also control communication with the networks, either through wireless or wired devices. In some embodiments, the application may be on the local computing device 801, and in other embodiments, the application may be on the remote computing device 841. Of course, this is just one embodiment of the server 841, and the number and types of computing devices 841 are limited only by the imagination.

[0054] The user devices, computers and servers described herein may be computers that may have, among other elements, a microprocessor (such as from the Intel® Corporation, AMD®, ARM®, Qualcomm®, or MediaTek®); volatile and non-volatile memory; one or more mass storage devices (e.g., a hard drive); various user input devices, such as a mouse, a keyboard, or a microphone; and a video display system. The user devices, computers and servers described herein may be running on any one of many operating systems including, but not limited to, WINDOWS®, UNIX®, LINUX®, MAC® OS®, iOS®, or Android®. It is contemplated, however, that any suitable operating system may be used for the present invention. The servers may be a cluster of web servers, which may each be LINUX® based and supported by a load balancer that decides which of the cluster of web servers should process a request based upon the current request-load of the available server(s).

[0055] The user devices, computers and servers described herein may communicate via networks, including the Internet, wide area networks (WAN), local area networks (LAN), Wi-Fi®, other computer networks (now known or invented in the future), and/or any combination of the foregoing. It should be understood by those of ordinary skill in the art having the present specification, drawings, and claims before them that networks may connect the various components over any combination of wired and wireless conduits, including copper, fiber optic, microwaves, and other forms of radio frequency, electrical and/or optical communication techniques. It should also be understood that any network may be connected to any other network in a different manner. The interconnections between computers and servers in the system are examples. Any device described herein may communicate with any other device via one or more networks.

[0056] The example embodiments may include additional devices and networks beyond those shown. Further, the functionality described as being performed by one device may be distributed and performed by two or more devices. Multiple devices may also be combined into a single device, which may perform the functionality of the combined devices.

[0057] The various participants and elements described herein may operate one or more computer apparatuses to facilitate the functions described herein. Any of the elements in the above-described Figures, including any servers, user devices, or databases, may use any suitable number of subsystems to facilitate the functions described herein.

[0058] Any of the software components or functions described in this application may be implemented as software code or computer readable instructions that may be executed by at least one processor using any suitable computer language such as, for example, Java, C++, or Perl, using, for example, conventional or object-oriented techniques.

[0059] The software code may be stored as a series of instructions or commands on a non-transitory computer readable medium, such as a random access memory (RAM), a read only memory (ROM), a magnetic medium such as a hard-drive or a floppy disk, or an optical medium such as a CD-ROM.

Any such computer readable medium may reside on or within a single computational apparatus and may be present on or within different computational apparatuses within a system or network.

[0060] It may be understood that the present invention as described above may be implemented in the form of control logic using computer software in a modular or integrated manner. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art may know and appreciate other ways and/or methods to implement the present invention using hardware, software, or a combination of hardware and software.

[0061] The above description is illustrative and is not restrictive. Many variations of embodiments may become apparent to those skilled in the art upon review of the disclosure. The scope of the embodiments should, therefore, be determined not with reference to the above description, but instead with reference to the pending claims along with their full scope of equivalents.

[0062] One or more features from any embodiment may be combined with one or more features of any other embodiment without departing from the scope of the embodiments. A recitation of "a", "an" or "the" is intended to mean "one or more" unless specifically indicated to the contrary. Recitation of "and/or" is intended to represent the most inclusive sense of the term unless specifically indicated to the contrary.

[0063] One or more of the elements of the present system may be claimed as means for accomplishing a particular function. Where such means-plus-function elements are used to describe certain elements of a claimed system, it may be understood by those of ordinary skill in the art having the present specification, figures and claims before them that the corresponding structure includes a computer, processor, or microprocessor (as the case may be) programmed to perform the particularly recited function using functionality found in a computer after special programming and/or by implementing one or more algorithms to achieve the recited functionality as recited in the claims or steps described above. As would be understood by those of ordinary skill in the art, an algorithm may be expressed within this disclosure as a mathematical formula, a flow chart, a narrative, and/or in any other manner that provides sufficient structure for those of ordinary skill in the art to implement the recited process and its equivalents.

[0064] While the present disclosure may be embodied in many different forms, the drawings and discussion are presented with the understanding that the present disclosure is an exemplification of the principles of one or more inventions and is not intended to limit any one embodiment to the embodiments illustrated.

[0065] The present disclosure provides a solution to the long-felt need described above. In particular, the systems and methods overcome challenges of indoor navigation, where the desire for rapid response and accuracy is constrained by indoor structures and radio interference.

[0066] Further advantages and modifications of the above-described system and method may readily occur to those skilled in the art.

[0067] The disclosure, in its broader aspects, is therefore not limited to the specific details, representative system and methods, and illustrative examples shown and described above. Various modifications and variations may be made to the above specification without departing from the scope or spirit of the present disclosure, and it is intended that the present disclosure covers all such modifications and variations provided they come within the scope of the following claims and their equivalents.