Title:
REMOTE PLACEMENT OF DIGITAL CONTENT TO FACILITATE AUGMENTED REALITY SYSTEM
Document Type and Number:
WIPO Patent Application WO/2018/094289
Kind Code:
A1
Abstract:
A method is provided for facilitating the placement of digital content in an augmented reality (AR) system. The method includes providing mapping software which, in response to user input including a specified location, displays at least one of a series of frames related to that location, wherein each frame contains a unique view of the location; receiving, from a computational device equipped with a display, input which specifies a location; using the mapping software to generate a set of frames associated with the input location; ghosting digital content over at least one of the frames in the set, thereby generating at least one ghosted frame; and displaying the at least one ghosted frame on the display of the computational device.

Inventors:
GAUGLITZ WOLFRAM (US)
FORTKORT JOHN (US)
Application Number:
PCT/US2017/062426
Publication Date:
May 24, 2018
Filing Date:
November 17, 2017
Assignee:
PICPOCKET INC (US)
International Classes:
G09G5/00; G06T19/00
Foreign References:
US20140063061A12014-03-06
US20160292850A12016-10-06
US20160217624A12016-07-28
Attorney, Agent or Firm:
FORTKORT, John (US)
Claims:
WHAT IS CLAIMED IS:

1. A method for viewing digital content on a computational device equipped with a display and utilizing an augmented reality (AR) system, the method comprising:

obtaining, from mapping software, an image of a particular view associated with a geographic location;

obtaining, from an augmented reality system, digital content corresponding to the particular view of the geographic location; and

displaying, on the display of the computational device, the image with the digital content superimposed over it.

2. The method of claim 1, wherein the mapping software displays a street view of the location.

3. The method of claim 1, wherein the AR system is a marker-based system.

4. The method of claim 1, wherein the digital content is an image.

5. The method of claim 1, wherein the digital content is a video.

6. The method of claim 1, wherein the digital content is an audio file.

7. The method of claim 1, wherein the digital content is an animated character.

8. The method of claim 1, wherein the digital content is a licensed digital property.

9. The method of claim 1, wherein the digital content is an item selected from the group consisting of products, brands, logos, promotions, advertisements and coupons.

10. The method of claim 1, further comprising:

receiving, from a computational device equipped with a display, viewpoint data which specifies a perspective view from a geographic location, wherein said view has augmented reality content associated with it.

11. A method for placing digital content in an augmented reality (AR) system, comprising:

obtaining, from mapping software, an image of a particular view associated with a geographic location;

receiving digital content;

receiving placement data associated with the digital content, wherein said placement data indicates the prescribed location and orientation of the digital content at the perspective view; and

displaying, on the display of the computational device, the image with the digital content superimposed over it at the prescribed location and orientation specified in the placement data.

12. The method of claim 11, wherein the placement data further indicates the size of said digital content relative to the perspective view over which the digital content is superimposed.

13. The method of claim 11, wherein said placement data is input by a user.

14. The method of claim 11, wherein said placement data is generated by a rules engine.

15. The method of claim 11, wherein said placement data is generated by a rules engine and/or logic.

16. The method of claim 15, wherein said rules engine and/or logic is scene and/or object dependent.

17. The method of claim 11, wherein the placement data is received from a user via a graphical user interface, and wherein, in response to user input, said graphical user interface (a) changes the location and orientation of the augmented reality content, and (b) updates the placement data associated with the augmented reality content.

18. The method of claim 11, wherein the digital content is an image.

19. The method of claim 11, wherein the digital content is a video.

20. The method of claim 11, wherein the digital content is an animated character.

21. The method of claim 11, wherein the digital content is a licensed digital property.

22. The method of claim 11, wherein the digital content is an item selected from the group consisting of products, brands, logos, promotions, advertisements and coupons.

23. The method of claim 11, further comprising:

receiving, from a computational device equipped with a display, viewpoint data which specifies a perspective view from a geographic location, wherein said view has augmented reality content associated with it.

24. The method of claim 11, wherein said placement data also indicates the prescribed size of the digital content at the perspective view.

25. The method of claim 24, wherein displaying, on the display of the computational device, the image with the digital content superimposed over it includes displaying the image at the prescribed size.

26. A system for generating content for an augmented reality system, comprising:

a mapping solution selected from the group consisting of mapping software, mapping applications and mapping services;

a browser which accesses said mapping solution and which renders perspective views of a location therefrom;

a content placement module which (a) superimposes digital content on a perspective view rendered by the browser from said mapping solution, thereby yielding a composite image, and (b) captures information from the composite image, wherein the captured information specifies the location of the superimposed digital content relative to the perspective view; and

an augmented reality system which receives composite images generated by the content placement module and which ghosts the corresponding digital content over a real-world view corresponding to the perspective view.

27. A system for generating content for an augmented reality system, comprising:

a mapping solution selected from the group consisting of mapping software, mapping applications and mapping services;

a browser which accesses said mapping solution and which renders perspective views of a location therefrom;

a content placement module which (a) superimposes digital content on a perspective view rendered by the browser from said mapping solution, thereby yielding a composite image, and (b) captures information from the composite image which specifies (i) the location and orientation of the perspective view, and (ii) the orientation of the digital content with respect to the perspective view; and

an augmented reality system which receives the captured information and which ghosts the digital content over a real-world view corresponding to the perspective view.

28. The system of claim 27, wherein the content placement module is a browser extension or plug-in for said browser.

29. The system of claim 27, wherein the perspective view is a street view.

30. The system of claim 27, wherein the information extracted from the composite image by said content placement module includes information selected from the group consisting of the size, orientation and placement of the digital content with respect to the perspective view rendered by the browser.

31. The system of claim 27, wherein the information extracted from the composite image by said content placement module includes markers associated with the perspective view rendered by the browser.

32. The system of claim 27, wherein the information extracted from the composite image by said content placement module includes information selected from the group consisting of the orientation, yaw, pitch, roll and altitude of the perspective view rendered by the browser.

33. The system of claim 27, wherein the information extracted from the composite image by said content placement module includes GPS coordinates associated with the perspective view rendered by the browser.

34. A system for generating content for an augmented reality system, comprising:

a mapping solution selected from the group consisting of mapping software, mapping applications and mapping services;

an extraction module which accesses said mapping solution and which extracts perspective views of locations therefrom;

a content placement module which (a) receives extracted perspective views from said extraction module, (b) superimposes digital content on said extracted perspective views, thereby yielding composite images, and (c) extracts information from the composite images, wherein the information extracted from each composite image specifies the location of the corresponding superimposed digital content relative to the corresponding perspective view; and

an augmented reality system which receives composite images generated by the content placement module and which, for each received composite image, ghosts the digital content of the composite image over the real-world view corresponding to the perspective view of the composite image.

35. The system of claim 34, wherein the information extracted from the composite image by said content placement module includes information selected from the group consisting of the size, orientation and placement of the digital content with respect to the perspective view of the composite image.

36. The system of claim 34, wherein the information extracted from the composite image by said content placement module includes markers associated with the perspective view of the composite image.

37. The system of claim 34, wherein the information extracted from the composite image by said content placement module includes information selected from the group consisting of the orientation, yaw, pitch, roll and altitude of the perspective view of the composite image.

38. The system of claim 34, wherein the information extracted from the composite image by said content placement module includes GPS coordinates associated with the perspective view of the composite image.

39. An augmented reality system which serves up digital content to a viewer installed on a host device, the system comprising:

a host device equipped with location awareness and orientation awareness functionalities;

a database of composite images, wherein each composite image in the database includes a perspective view of a real-world location and digital content to be ghosted over the perspective view, and wherein each composite image has embedded therein parameters which describe the real-world location corresponding to the perspective view and the orientation required for a device to assume relative to the real-world location in order to recall the digital content; and

an augmented reality viewer which is installed on said host device and which is in communication with said database, wherein said augmented reality viewer monitors the location and orientation of the host device and, upon determining that the host device is at a location and orientation corresponding to a perspective view of an image in said database of composite images, serves up the corresponding digital content.

40. The augmented reality system of claim 39, wherein said augmented reality viewer serves up the corresponding digital content by ghosting the digital content over the corresponding view of the real-world location.

41. The augmented reality system of claim 39, wherein the parameters are specified in metadata embedded in the composite image, and wherein said augmented reality viewer serves up said corresponding digital content by interpreting the metadata.

42. The augmented reality system of claim 39, wherein said augmented reality viewer interprets the metadata without reference to any database or repository linking the embedded parameters to the digital content.

43. A tangible, non-transient, computer readable medium having programming instructions recorded therein which, when executed by at least one computer processor, implement the methods or systems of any of the previous claims.

Description:
REMOTE PLACEMENT OF DIGITAL CONTENT TO FACILITATE AUGMENTED REALITY SYSTEM

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of priority from U.S. patent application number 62/423,720, filed November 17, 2016, having the same title, and the same inventors, and which is incorporated herein by reference in its entirety.

FIELD OF THE DISCLOSURE

[0002] The present disclosure relates generally to augmented reality systems, and more specifically to systems, methods and applications which facilitate the remote association, storage and retrieval of digital media at a specific geographic location within an augmented reality system.

BACKGROUND OF THE DISCLOSURE

[0003] Various augmented reality systems and methodologies are known to the art. For example, U.S. 8,963,957 (Skarulis), entitled "SYSTEMS AND METHODS FOR AN AUGMENTED REALITY PLATFORM", discloses a two-step process by which augmented reality (AR) content is created and displayed to a user using AR software disposed on a client device. The client device is configured to receive information from sensors embedded therein, including a GPS receiver, an accelerometer, and a compass.

[0004] In the first step of the Skarulis process, AR content is created by aligning an instance of media with a view of reality through the use of the client device. A marker, which represents at least a portion of the view of reality which is related to the medium, is then generated. Metadata is generated for the medium and marker using data from the sensors of the client device. This metadata includes the geographical location, yaw, pitch, and roll of the client device at the time of marker creation, and also includes the relationship between the marker and the medium. The medium, marker and metadata are then sent to a data repository.

[0005] The second step of the Skarulis process occurs when a user of a client device (which may be the same as, or different from, the device used to create the AR content) uses the device to view a view of reality which has AR content associated with it. When this occurs, the AR software matches a marker associated with the AR content to the view of reality, as by matching one or more patterns in the marker to one or more patterns in the view of reality. The AR software then overlays the associated medium over the view of reality based on the relationship between the medium and the marker, and displays the resulting augmented view of reality. In order to assist a user of the client device in finding nearby AR content, the AR software may use arrows or other features to instruct the user on how to position the device so that the AR content may be observed.

[0006] More recently, various augmented reality mobile programs and games have been introduced. Examples include the augmented reality mobile games Ingress and Pokemon Go, which are produced by Niantic, Inc.

[0007] Various mapping and navigational software programs and services are also known to the art. For example, Google Maps is a mapping service developed by Google which offers street maps, 360° panoramic street level views (available through its "Street View" feature), real-time traffic conditions and route planning capabilities. Apple Maps, Mapquest, and various other products and services offer similar features and capabilities.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] FIG. 1 is an example of a Google Street View URL which captures the address, GPS coordinates and other relevant parameters which may be used to describe a perspective view.

[0009] FIG. 2 is an example of a Google Street View image described by said URL.

SUMMARY OF THE DISCLOSURE

[0010] In one aspect, a method is provided for viewing digital content in an augmented reality (AR) system. The method comprises (a) receiving, from a computational device equipped with a display, viewpoint data which specifies a perspective view from a geographic location, wherein said view has augmented reality content associated with it; (b) obtaining, from mapping software, an image of a particular view associated with the geographic location; (c) obtaining, from an augmented reality system, digital content corresponding to the particular view of the geographic location; and (d) displaying, on the display of the computational device, the image with the digital content superimposed over it.

[0011] In another aspect, a method is provided for placing digital content in an augmented reality (AR) system. The method comprises (a) receiving, from a computational device equipped with a display, viewpoint data which specifies a perspective view from a geographic location, wherein said view has augmented reality content associated with it; (b) obtaining, from mapping software, an image of a particular view associated with the geographic location; (c) receiving digital content and associated placement data, wherein said placement data indicates the prescribed location and orientation of the digital content at the perspective view; and (d) displaying, on the display of the computational device, the image with the digital content superimposed over it at the prescribed location and orientation specified in the placement data.

[0012] In a further aspect, a method is provided for determining the placement of digital content in an augmented reality (AR) system. The method comprises (a) providing mapping software which displays at least one of a series of frames related to a real-world location, wherein each frame contains a unique view of the location; (b) providing ghosting software which ghosts digital content onto a view of a real world location which is observed by a user; (c) receiving, from a computational device equipped with a display, input which specifies the location and orientation of the computational device; (d) in response to the received location and orientation of the computational device, using the ghosting software to superimpose, over a real-world view on the display of the computational device which corresponds to the received location and orientation, a ghosted version of digital content corresponding to the received location and orientation; (e) identifying a frame in the mapping software which corresponds to the real-world view; (f) using the identified frame to determine parameters used to superimpose the ghosted content on the real-world view; and (g) displaying the determined parameters.

[0013] In still another aspect, a system is provided for augmenting a view of reality. The system comprises (a) a client perspective module stored on a client device, the client device comprising at least one processor, the client perspective module configured to, when executed by the at least one processor, superimpose a first medium over a first view of reality, receive one or more of a change in transparency of the superimposed first medium, a change in size of the superimposed first medium, and a change in position of the superimposed first medium, generate a first marker, the first marker comprising at least a portion of the first view of reality, generate first metadata related to at least one of the first medium and the first marker, and send the first medium, the first marker, and the first metadata to a depository; (b) a client viewer module stored on the client device, the client viewer module configured to, when executed by the at least one processor, receive a second medium, a second marker, and second metadata from the depository, match the second marker to at least a portion of a second view of reality, and superimpose the second medium over the at least a portion of the second view of reality to generate an augmented view of reality; (c) a mapping module which, in response to user input on the client device which includes a specified location, displays at least one of a series of frames related to that location, wherein each frame contains a unique view of the location; (d) a matching module which matches the augmented view of reality generated by the client viewer module to a corresponding frame in the mapping module; and (e) a display module which displays, on a display associated with the client device, a frame from the mapping module with the second medium superimposed thereon.

[0014] In yet another aspect, a system is provided for generating content for an augmented reality system. The system comprises (a) a mapping solution selected from the group consisting of mapping software, mapping applications and mapping services; (b) a browser which accesses said mapping solution and which renders perspective views of a location therefrom; (c) a content placement module which (i) superimposes digital content on a perspective view rendered by the browser from said mapping solution, thereby yielding a composite image, and (ii) extracts information from the composite image, wherein the extracted information specifies the location of the superimposed digital content relative to the perspective view; and (d) an augmented reality system which receives composite images generated by the content placement module and which ghosts the corresponding digital content over a real-world view corresponding to the perspective view.

[0015] In another aspect, a system is provided for generating content for an augmented reality system. The system comprises (a) a mapping solution selected from the group consisting of mapping software, mapping applications and mapping services; (b) an extraction module which accesses said mapping solution and which extracts perspective views of locations therefrom; (c) a content placement module which (i) receives extracted perspective views from said extraction module, (ii) superimposes digital content on said extracted perspective views, thereby yielding composite images, and (iii) extracts information from the composite images, wherein the information extracted from each composite image specifies the location of the corresponding superimposed digital content relative to the corresponding perspective view; and (d) an augmented reality system which receives composite images generated by the content placement module and which, for each received composite image, ghosts the digital content of the composite image over the real-world view corresponding to the perspective view of the composite image.

[0016] In another aspect, an augmented reality system is provided which serves up digital content to a viewer installed on a host device. The system comprises (a) a host device equipped with location awareness and orientation awareness functionalities; (b) a database of composite images, wherein each composite image in the database includes a perspective view of a real-world location and digital content to be ghosted over the perspective view, and wherein each composite image has embedded therein parameters which describe the real-world location corresponding to the perspective view and the orientation required for a device to assume relative to the real-world location in order to recall the digital content; and (c) an augmented reality viewer which is installed on said host device and which is in communication with said database, wherein said augmented reality viewer monitors the location and orientation of the host device and, upon determining that the host device is at a location and orientation corresponding to a perspective view of an image in said database of composite images, serves up the corresponding digital content.
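The viewer-side matching step described above can be sketched as follows. This is an illustrative implementation only: the haversine distance, the 15-meter and 20-degree tolerances, and the entry field names are assumptions, not parameters prescribed by the disclosure.

```python
import math

def haversine_m(lat1, lng1, lat2, lng2):
    """Great-circle distance between two lat/lng points, in meters."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def matching_content(device_pose, entries, max_dist_m=15.0, max_heading_deg=20.0):
    """Return content IDs whose stored perspective view matches the device pose.

    device_pose is (lat, lng, heading_deg); each entry carries the stored
    parameters of one composite image.
    """
    lat, lng, heading = device_pose
    hits = []
    for e in entries:
        close = haversine_m(lat, lng, e["lat"], e["lng"]) <= max_dist_m
        # heading difference with wrap-around (e.g. 350 vs 10 -> 20 degrees)
        dh = abs((heading - e["heading"] + 180) % 360 - 180)
        if close and dh <= max_heading_deg:
            hits.append(e["content_id"])
    return hits
```

A production viewer would poll the device's location and orientation sensors and run a check of this kind against the composite-image database, serving up (ghosting) the matched content.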

[0017] In still another aspect, a tangible, non-transient, computer readable medium is provided having programming instructions recorded therein which, when executed by at least one computer processor, implement any of the foregoing systems or methodologies.

DETAILED DESCRIPTION

[0018] While existing augmented reality (AR) product offerings may have certain desirable features, further improvements are needed in the art. For example, in some of these offerings, ambiguities may exist with respect to content placement. This may result in suboptimal superimposition of AR content on a real-world background. For example, it may be desired to ghost an image of Tiger Woods swinging a golf club in proximity to a water tower designed to resemble a golf ball on a tee. However, the effect is ruined if the ghosted image depicts Tiger Woods at an unnatural orientation relative to the water tower.

[0019] Other AR offerings lack a convenient means for correction of the location and orientation of AR content. Due to placement errors, the movement of real world objects, or natural phenomena, the placement of AR content may require correction from time to time. Frequently, such corrections may have to be implemented by parties other than the original content creator or provider. However, such corrections are often complicated by the fact that it may not be apparent to the party making the corrections what the original or intended placement and orientation of the AR content was.

[0020] Still other AR offerings lack the means by which an author of AR content, or an authorized third party, may conveniently place that content from a remote location. This shortcoming impedes the implementation of AR technologies by adding a hurdle to such systems, namely, the need to physically visit a location in order to associate AR content with it. Moreover, this shortcoming creates a situation in which less visited venues, such as those that are more remote from population centers, tend to be underserved by AR content creators and providers. Similarly, existing AR offerings may provide insufficient incentive to others to visit such remote locations (as may be desirable, for example, to drive foot traffic to local businesses), provide content for these locations, or consume content that has already been provided.

[0021] Some or all of these and other infirmities may be addressed by the systems and methodologies described herein.

[0022] In one aspect, a method is provided which utilizes mapping software (such as, for example, Google Street View), images and/or stock photos to facilitate the remote placement of digital content in a (preferably marker-based) augmented reality (AR) system. For example, a particular instance of digital content may be associated with a particular view in mapping software such as, for example, a frame from the 'street view' mode of Google Maps. When a user views that particular frame on an associated device (such as, for example, a mobile handheld device such as a mobile phone, or on a desktop, laptop or tablet computer), the content may be overlaid or "ghosted" onto that same background image on the user's device for content placement purposes, thus imitating the view that is visible when the user is actually at that very location.

[0023] Any relevant information which is required to explicitly designate the location of the "ghosted" content in its real-world surroundings may be captured and associated with the content. Such information may include, for example, one or more of the GPS coordinates of the location, the device's orientation, handset yaw, pitch and/or roll, altitude, or other such parameters. In some cases, this information may be sourced from mapping software or a related service (such as, for example, Google Street View) and/or may be estimated given the first-person perspective of the viewer (as, for example, in the case of the device's expected yaw, pitch and roll). This approach allows a user at the intended real-world location or address to recall the image using an identifying frame (which is representative of how and where the image was to be ghosted), along with the relevant metadata which was used to remotely ghost the image. Such metadata may include, for example, the object's designated GPS coordinates and the handset's intended orientation, yaw, pitch, roll, elevation and altitude, or any other such parameters as may be useful or necessary to implement the systems and methodologies described herein.
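One possible shape for such a metadata record, covering the parameters enumerated above, is sketched below. The field names, units, and JSON serialization are illustrative assumptions rather than a format prescribed by the disclosure.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PlacementMetadata:
    """Parameters designating where and how content is ghosted (illustrative)."""
    lat: float          # designated GPS latitude, degrees
    lng: float          # designated GPS longitude, degrees
    altitude_m: float   # altitude, meters
    yaw_deg: float      # intended handset yaw when viewing the content
    pitch_deg: float    # intended handset pitch
    roll_deg: float     # intended handset roll

    def to_json(self) -> str:
        """Serialize the record so it can travel with the identifying frame."""
        return json.dumps(asdict(self))
```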

[0024] In some embodiments, the views created by the underlying mapping software may be used as a reference from which object recognition software may identify precisely how AR content was (or is) "ghosted" at that particular location. For example, these views may be utilized to discern the size and orientation of the ghosted content relative to the background image. The AR content may be viewed through, for example, camera software or a view finder installed on the user's device, and may appear only when a user running the software/application/viewer meets some or all of the criteria (although preferably not any co-location criteria) that determine whether or not an image should be visible to the user (see, e.g., U.S. 8,963,957 (Skarulis), entitled "SYSTEMS AND METHODS FOR AN AUGMENTED REALITY PLATFORM", which is incorporated herein by reference in its entirety). The view created by the underlying mapping software serves as a substitute for actually being on location to place ("ghost") the content by using the very same information that is generally available to users of the viewer when on location. Mapping software (such as, for example, Google's Street View software) will have the GPS coordinates and other parameters associated with a particular view (such as, for example, the yaw, pitch, roll, altitude and compass orientation of the view) already documented or evident.
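As an illustration of how such view parameters might already be "documented or evident," the sketch below parses the pose segment of a Street View-style URL (compare FIG. 1). The "@lat,lng,3a,FOVy,HEADINGh,PITCHt" segment format is an assumption based on commonly observed Google Maps URLs; it is not a documented or stable interface, and real systems should prefer an official mapping API.

```python
import re

# Pose segment assumed to look like: @30.2672,-97.7431,3a,75y,210.07h,85.42t
URL_POSE = re.compile(
    r"@(?P<lat>-?\d+\.\d+),(?P<lng>-?\d+\.\d+),3a,"
    r"(?P<fov>\d+(?:\.\d+)?)y,(?P<heading>\d+(?:\.\d+)?)h,(?P<pitch>\d+(?:\.\d+)?)t"
)

def parse_street_view_url(url):
    """Extract GPS coordinates, field of view, heading and pitch, if present."""
    m = URL_POSE.search(url)
    if m is None:
        return None
    return {k: float(v) for k, v in m.groupdict().items()}
```

For example, `parse_street_view_url("https://www.google.com/maps/@30.2672,-97.7431,3a,75y,210.07h,85.42t/data=...")` would yield latitude, longitude, field of view, heading and pitch values that can seed the placement metadata for a remotely ghosted image.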

[0025] The systems and methodologies disclosed herein may be extended to other image sources through suitable modifications. Thus, for example, these systems and methodologies may utilize third-party images (such as, for example, stock photos) as references from which object recognition software may identify precisely how AR content was (or is) "ghosted" at a particular location. In some cases, the images provided may not have all of the necessary information (such as, for example, the yaw, pitch, roll and compass orientation of the view) captured and associated within their metadata as may be required to discern the size and orientation of the ghosted content relative to the background image. However, in such cases, this information may be added for POIs (points of interest), landmarks or general locations where the supporting information is already known or may be readily divined.

[0026] In some embodiments, object recognition software may be utilized to establish, at least to some degree of approximation, the approximate GPS point of origin for an image or other image parameters. Acceptable metadata may then be written directly to the image itself (for example, by the addition to, or the modification of, existing metadata), or the image may be manipulated (preferably in a visually unperceivable manner) through the use of steganography to associate all of the necessary metadata with the image.
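The steganographic association described in [0026] can be illustrated with a minimal least-significant-bit (LSB) sketch that hides a length-prefixed metadata payload in raw pixel bytes. A real implementation would operate on decoded image color planes and add framing and error correction; this only demonstrates the principle that metadata can be embedded in a visually imperceptible manner.

```python
def embed(pixels: bytearray, payload: bytes) -> bytearray:
    """Hide a length-prefixed payload in the lowest bit of each cover byte."""
    data = len(payload).to_bytes(4, "big") + payload  # 4-byte length prefix
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("payload too large for cover image")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the least significant bit
    return out

def extract(pixels: bytearray) -> bytes:
    """Recover the payload hidden by embed()."""
    def read_bytes(start_pixel, n):
        value = bytearray()
        for b in range(n):
            byte = 0
            for i in range(8):
                byte = (byte << 1) | (pixels[start_pixel + b * 8 + i] & 1)
            value.append(byte)
        return bytes(value)
    length = int.from_bytes(read_bytes(0, 4), "big")
    return read_bytes(32, length)  # payload begins after the 32 length bits
```

Because only the lowest bit of each byte changes, the alteration is below the threshold of visual perception for typical photographic content, which is what allows the metadata to ride along with the image itself rather than in a separate repository.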

[0027] In another aspect, a method is provided to facilitate content consumption in a (preferably marker-based) AR system. The method incentivizes users to capture or consume digital content (which may be, for example, photos, images, audio or video files, digital collectibles, documents, records, constructs, characters, trading cards, or other types of media, and which may include instances of licensed content) through notification schemes and constrained availability. For example, the method may be utilized to drive users to a particular geographic location by offering a limited set or quantity (N) of digital media which may be consumed or captured at that location. By way of example, a promoter of a business, event or location may offer five (5) Pokemon-style characters or ten (10) virtual trading cards or licensed digital properties (which may include, for example, Disney characters or images from National Geographic) which may be captured or collected there. As each instance of digital content is collected, it is depleted from the available pool, so that no more instances of that particular content exist after the first quantity N have been captured or consumed. This method may be used, for example, to drive customers to nearby business establishments, or to shape the time and flow of consumer foot traffic to the same.
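The constrained-availability scheme above reduces to a depleting pool, sketched below. The class and method names are illustrative; a deployed system would persist the count server-side so concurrent captures by different devices deplete a single shared pool.

```python
import threading

class ContentPool:
    """A limited quantity N of one digital item offered at a location."""

    def __init__(self, item: str, quantity: int):
        self.item = item
        self.remaining = quantity
        self._lock = threading.Lock()  # make concurrent captures safe

    def capture(self, user: str):
        """Give this user one instance, or None once the pool is exhausted."""
        with self._lock:
            if self.remaining == 0:
                return None
            self.remaining -= 1
            return self.item
```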

[0028] In a further aspect, a method is provided to facilitate crowdsourced content placement in a (preferably marker-based) AR system. The method incentivizes users to place AR content or media (which may be any of the types of digital content or media disclosed herein) in a particular location or geographic region through notification schemes and the constrained availability of the content or media. For example, the method may be utilized to drive users to place AR content of a specified type, and at a specified location or within a specified geographic region or geofence, by offering digital content or media to only the first N users who place AR content or media at the target location. This method may be used, for example, to incentivize users to create AR content in locations or geographic regions which might otherwise be underserved as a result of, for example, lower population densities.
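A hedged sketch of the first-N placement incentive follows, using a simple latitude/longitude bounding box as the target geofence; the class name, reward-per-user policy and bounding-box model are illustrative assumptions (production geofences may be arbitrary polygons or volumes).

```python
class PlacementIncentive:
    """Reward only the first N users who place AR content inside a
    target geofence, modelled here as a lat/lon bounding box."""

    def __init__(self, bbox, max_rewards: int):
        self.bbox = bbox            # (min_lat, min_lon, max_lat, max_lon)
        self.max_rewards = max_rewards
        self.rewarded = []          # user ids, in order of qualification

    def _inside(self, lat: float, lon: float) -> bool:
        min_lat, min_lon, max_lat, max_lon = self.bbox
        return min_lat <= lat <= max_lat and min_lon <= lon <= max_lon

    def register_placement(self, user_id: str, lat: float, lon: float) -> bool:
        """Return True if this placement earns the incentive content."""
        if not self._inside(lat, lon):
            return False
        if len(self.rewarded) >= self.max_rewards or user_id in self.rewarded:
            return False
        self.rewarded.append(user_id)
        return True
```

A notification scheme built on this would announce the region and the remaining reward count, nudging users to create AR content in otherwise underserved areas.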

[0029] It will be appreciated that the various systems and methodologies disclosed herein may be implemented through the use of one or more software applications. Such software applications may be embodied in the form of tangible, non-transient, computer readable media having programming instructions recorded therein which, when executed by at least one computer processor, implement any of the foregoing systems or methodologies.

Moreover, such software may be server based, client based, or various combinations of the two. As a specific, non-limiting example, the software may be implemented as a front end consisting of a distributed application or client, and a back end implemented as a server-based application.

[0030] In some embodiments of the systems and methodologies disclosed herein, a plug-in to the mapping software may be utilized to permit the visual overlay of a (preferably isometric) shape. This shape may be further fashioned in the x, y and z directions to encompass a slice of any size or shape that coincides with an area (or areas) of interest. The slice may be a semi-transparent visual overlay which shows how the slice intercepts the target. Such a visual overlay may be utilized to place an image within a volumetric geofence (herein referred to as a "geospace") for the purposes of implementing a ghosting or AR algorithm in the systems and methodologies disclosed herein.
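In its simplest axis-aligned form, such a geospace can be modelled as a latitude/longitude footprint extruded along an altitude range. The sketch below (names, units and the axis-aligned simplification are assumptions) tests whether a given point falls inside the geospace:

```python
from dataclasses import dataclass

@dataclass
class Geospace:
    """A volumetric geofence: a lat/lon footprint extruded along an
    altitude range, i.e. a slice fashioned in the x, y and z directions."""
    min_lat: float; max_lat: float
    min_lon: float; max_lon: float
    min_alt: float; max_alt: float   # metres above sea level

    def contains(self, lat: float, lon: float, alt: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat and
                self.min_lon <= lon <= self.max_lon and
                self.min_alt <= alt <= self.max_alt)
```

A ghosting algorithm would then anchor (or reveal) an image only while the viewing device's tracked position satisfies `contains`.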

[0031] In some embodiments of the systems and methodologies disclosed herein, mapping software may be provided in which each view of reality (for example, each view in the mapping software) is equipped with (or associated with) at least one marker, and in which at least one set of metadata is associated with the at least one marker. This mapping software may then be utilized to implement a variety of AR or ghosting schemes, including those disclosed herein.

[0032] For example, in some embodiments, the mapping software may run in the background, and the resources of the host device may be utilized to continually track the location and orientation of the host device. Such resources may include, for example, location awareness functionalities (such as, for example, the ability to determine the location of the device by ascertaining its GPS coordinates, through triangulation of cell towers or markers, or by reference to RF or magnetic maps), orientation determining means, accelerometers, compasses, and the like. Viewer software on the host device may then ascertain when the location and orientation of the host device coincides with a frame in the mapping software, and may serve up AR content associated with that frame (or associated with the metadata that is associated with that frame).
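A minimal sketch of the frame-matching step described above: the viewer compares the host device's tracked pose against each frame's stored position and compass heading, within tolerances, and serves the associated AR content on a match. The dictionary schema, tolerance values and the equirectangular distance approximation are illustrative assumptions.

```python
import math

def matches_frame(device_pose: dict, frame: dict,
                  pos_tol_m: float = 10.0,
                  heading_tol_deg: float = 15.0) -> bool:
    """Decide whether the device's pose coincides with a mapping frame."""
    # Equirectangular approximation is adequate at geofence scales.
    lat1, lon1 = math.radians(device_pose["lat"]), math.radians(device_pose["lon"])
    lat2, lon2 = math.radians(frame["lat"]), math.radians(frame["lon"])
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    y = lat2 - lat1
    dist_m = 6371000 * math.hypot(x, y)
    # Compass headings compared on the circle, so 359 deg is near 1 deg.
    dh = abs(device_pose["heading"] - frame["heading"]) % 360
    dh = min(dh, 360 - dh)
    return dist_m <= pos_tol_m and dh <= heading_tol_deg

def content_for_pose(device_pose: dict, frames: list):
    """Serve the AR content of the first frame the pose coincides with."""
    for frame in frames:
        if matches_frame(device_pose, frame):
            return frame.get("ar_content")
    return None
```

In a deployed viewer this check would run continuously against the device's location-awareness, compass and accelerometer outputs while the mapping software runs in the background.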

[0033] Moreover, in some embodiments, object recognition may be utilized to compare a marker in such mapping software to a depiction of the marker in an instance of digital media (such as, for example, a stock photo). This may allow appropriate metadata for the digital media to be ascertained.

[0034] In some embodiments of the systems and methodologies disclosed herein, a browser plug-in or extension is provided that may be used with mapping software, applications or services (hereinafter referred to collectively as "mapping solutions") of the type disclosed herein, such that digital content may be directly superimposed on a perspective view in a tab in which the mapping solution is running. The plug-in or extension may extract any and all available, relevant information from the mapping solution (such as, for example, the size, orientation and placement of the digital content relative to the street view and the markers associated with the same, as well as the GPS coordinates of the street view and the orientation, yaw, pitch, roll and altitude/elevation of the same) for purposes of remotely "ghosting" (placing) the digital content.

[0035] In some embodiments of the systems and methodologies disclosed herein, a standalone software application is provided which extracts a relevant perspective view from a mapping solution, along with the relevant, available parameters which describe the view's GPS coordinates, orientation, yaw, pitch, roll and altitude/elevation, and then imports them into the tool in order to mimic what a mobile device would capture if the "ghosting" (or placement) of digital content were being carried out, in person, at the actual location itself. The placement, size and orientation of any superimposed digital content (whether selected by an individual user or determined through the application of a rules engine and/or logic) relative to the perspective view would also be captured and described. All of the aforementioned parameters may be used within the standalone tool for purposes of communicating and sharing the same with the "ghosting" service with regard to how said digital content is to be stored and/or recalled.
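The parameters enumerated above might be captured and shared with the ghosting service as a simple serialised record; the type and field names below are illustrative assumptions rather than a defined interchange format.

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class PerspectiveView:
    """Parameters imported from the mapping solution for one view."""
    lat: float; lon: float; altitude: float
    yaw: float; pitch: float; roll: float

@dataclass
class GhostPlacement:
    """Placement of digital content relative to the imported view."""
    content_uri: str
    x: float; y: float       # normalised position within the view
    scale: float
    rotation_deg: float

def placement_record(view: PerspectiveView, ghost: GhostPlacement) -> str:
    """Serialise everything the ghosting service would need to store
    and later recall the content exactly as placed in the tool."""
    return json.dumps({"view": asdict(view), "ghost": asdict(ghost)})
```

The same record could equally be produced in person at the location; the standalone tool merely substitutes the mapping solution's view for the device camera.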

[0036] In some embodiments of the systems and methodologies disclosed herein, all of the relevant information used to describe how digital content is superimposed within a specific perspective view, as well as any and all relevant parameters needed to describe both the location and the orientation of the user and/or the handset or mobile device relative to the perspective view, and needed to recall the digital content, may be embedded within the metadata of the digital content itself. An augmented reality viewer may recall (view) the digital content simply by interpreting its metadata, thus allowing the digital content (object) to exist independently, without a database or repository to link the information with the digital content.

[0037] The above description of the present invention is illustrative, and is not intended to be limiting. It will thus be appreciated that various additions, substitutions and modifications may be made to the above described embodiments without departing from the scope of the present invention. Accordingly, the scope of the present invention should be construed in reference to the appended claims.