

Title:
SYSTEM AND METHODS FOR PROVIDING TIME AND SPACE ALTERED VIEW IN AUGMENTED REALITY AND VIRTUAL REALITY ENVIRONMENTS
Document Type and Number:
WIPO Patent Application WO/2020/076826
Kind Code:
A1
Abstract:
The invention disclosed herein provides systems and methods for facilitating virtual reality (VR), augmented reality (AR), or virtual augmented reality (VAR) based communication and collaboration through a streamlined user interface that enables both synchronous and asynchronous interactions in immersive environments.

Inventors:
CLOUTIER PRIYA (US)
DEELSTRA URSULA (US)
HOUSE SEAN (US)
MCMULLEN MICHELE (US)
SZOFRAN ADAM (US)
Application Number:
PCT/US2019/055192
Publication Date:
April 16, 2020
Filing Date:
October 08, 2019
Assignee:
30 60 90 CORP (US)
International Classes:
G06V20/20
Domestic Patent References:
WO2018152742A1 (2018-08-30)
Foreign References:
US20180137685A1 (2018-05-17)
US20170249745A1 (2017-08-31)
US20120249586A1 (2012-10-04)
US20120113274A1 (2012-05-10)
Attorney, Agent or Firm:
CLOUTIER, Priya (US)
Claims:
CLAIMS

1. An interface that allows a first user to communicate variance of a three-dimensional data set or multi-dimensional environment over a period of time to a second user comprising:

(a) the first user records, captures, models or a combination thereof the three-dimensional data set or multi-dimensional environment, from a point of view, at a first time;

(b) the first user records, captures, models or a combination thereof the three-dimensional data set or multi-dimensional environment, from a point of view, at a second time;

(c) the first user aligns the record, capture, model, or a combination thereof of the three-dimensional data set or multi-dimensional environment at the first time with the record, capture, model, or a combination thereof of the three-dimensional data set or multi-dimensional environment at the second time;

(d) registering to a unique identifier: the three-dimensional data set or multi-dimensional environment, from a point of view, at the first time; the three-dimensional data set or multi-dimensional environment, from a point of view, at the second time; and the aligned three-dimensional data set or multi-dimensional environment;

(e) operably attaching a fiducial marker in the approximate physical location that is represented by the three-dimensional data set or multi-dimensional environment;

(f) the second user, interacting in an immersive or non-immersive environment using a mobile phone or other tethered or untethered augmented reality or virtual reality hardware, discovers the fiducial marker; the mobile phone or other tethered or untethered augmented reality or virtual reality hardware recognizes a unique identifier noted on the fiducial marker; and

(g) the second user is provided with an indication that allows the second user to align herself with the physical space in such a way as to interact with the aligned three-dimensional data set or multi-dimensional environment.

2. The interface of claim 1 whereby the second user interacts with the aligned three-dimensional data set or multi-dimensional environment, in an immersive or non-immersive environment.

3. The interface of claim 2 whereby the second user may annotate the aligned three-dimensional data set or multi-dimensional environment, in an immersive or non-immersive environment.

4. The interface of claim 1 whereby the fiducial marker does not have the same point of view as the aligned images.

5. The interface of claim 1 whereby the three-dimensional data set or multi-dimensional environment, from a point of view, at a first time and the three-dimensional data set or multi-dimensional environment, from a point of view, at a second time are optically aligned using geometric elements of the three-dimensional data set or multi-dimensional environment, using a compass heading, along an axis, or a combination thereof.

6. The interface of claim 1 whereby the indication is approximately orthogonal to the fiducial marker.

Description:
Systems and Methods for Providing Time and Space Altered View in Augmented Reality and Virtual Reality Environments

CROSS-REFERENCES TO RELATED APPLICATIONS

This application claims priority to US Patent Applications 15/669,711, filed on August 4, 2017, and 15/216,981, filed on July 22, 2016; and to US Provisional Application 62/742,926, filed October 8, 2018.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

Not Applicable

INCORPORATION-BY-REFERENCE OF MATERIAL SUBMITTED ON A COMPACT DISC

Not Applicable

BACKGROUND

[0001] Provided herein is the disclosure of an interface that allows users to visualize or otherwise communicate the manner in which three-dimensional data sets or multi-dimensional environments vary or change over more than one period.

BRIEF DESCRIPTION OF INVENTION

[0002] An objective of this invention is to enable users to communicate how three-dimensional data sets or multi-dimensional environments vary or change.

DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS

[0003] Other features and advantages of the present invention will become apparent in the following detailed description of the preferred embodiment with reference to the accompanying drawings, of which:

[0004] Fig. 1A is an exemplary stereoscopic view of three-dimensional data sets or multi-dimensional environments;

[0005] Fig. 1B is an exemplary stereoscopic view of three-dimensional data sets or multi-dimensional environments;

[0006] Fig. 2A shows an exemplary fiduciary marker having an indicator;

[0007] Fig. 2B shows an exemplary fiduciary marker having an indicator being placed in the physical world;

[0008] Fig. 3 shows a user interacting with three-dimensional data sets or multi-dimensional environments;

[0009] Fig. 4 shows a user interacting with three-dimensional data sets or multi-dimensional environments;

[00010] Fig. 5 shows an exemplary fiduciary marker.

DETAILED DESCRIPTION

[00011] In the following detailed description, reference is made to the accompanying drawings, which form a part hereof. In the drawings, the use of similar or the same symbols in different drawings typically indicates similar or identical items, unless context dictates otherwise.

[00012] The illustrative embodiments described in the detailed description, drawings, and claims are not meant to be limiting. Other embodiments may be utilized, and other changes may be made, without departing from the spirit or scope of the subject matter presented here.

[00013] One skilled in the art will recognize that the herein described components (e.g., operations), devices, objects, and the discussion accompanying them are used as examples for the sake of conceptual clarity and that various configuration modifications are contemplated. Consequently, as used herein, the specific exemplars set forth and the accompanying discussion are intended to be representative of the more general classes. In general, use of any specific exemplar is intended to be representative of its class, and the non-inclusion of specific components (e.g., operations), devices, and objects should not be taken as limiting.

[00014] The present application uses formal outline headings for clarity of presentation. However, it is to be understood that the outline headings are for presentation purposes, and that different types of subject matter may be discussed throughout the application (e.g., device(s)/structure(s) may be described under process(es)/operations heading(s) and/or process(es)/operations may be discussed under structure(s)/process(es) headings; and/or descriptions of single topics may span two or more topic headings). Hence, the use of the formal outline headings is not intended to be in any way limiting.

[00015] To reduce potential confusion, the following glossary provides general definitions of several frequently used terms within these specifications and claims with a view toward aiding in the comprehension of such terms. The definitions that follow should be regarded as providing accurate, but not exhaustive, meanings of the terms. Italicized words represent terms that are defined elsewhere in the glossary.

[00016] Image is a three-dimensional environment or multi-dimensional data set that may be represented as a photosphere or image captured by light field technology. Image may represent a captured, recorded, or modeled real-world object or scene.

[00017] Point of View is a locus, or vantage point, that represents a location in space which is visible to a user.

[00018] Hotspot is a point within an image with which a user may interact. A hotspot may allow a user to view multiple aspects of a scene and/or respond to a survey.

[00019] Teleporter is a point within a scene that allows a user to navigate to another scene or another location within the same scene.

[00020] Description may be text, sound, image, or other descriptive information.

[00021] Meeting is defined as more than one user interacting with a three-dimensional environment or multi-dimensional data set on an immersive platform. In some cases, the three-dimensional environment or multi-dimensional data set may be annotated.

[00022] Ink means to draw a visual path or region.

[00023] Given by way of overview, illustrative embodiments of an interface that allows users in augmented reality and virtual reality systems to communicate and collaborate both synchronously and asynchronously are provided. More specifically, the interface allows users to visualize or otherwise communicate the manner in which three-dimensional data sets or multi-dimensional environments vary or change over more than one period.

[00024] A three-dimensional data set or multi-dimensional environment (or "Image") (10) may be recorded or captured using computational photography techniques. For example, an Image (10) may be captured or recorded as a photosphere using a computational photographic system such as the Google® Cardboard Camera application on a mobile phone, or by using a purpose-built device, such as stereoscopic or spherical capture hardware, with one or more conventional or fisheye lenses. Another computational method to record or capture an Image includes the use of light field camera technologies. Alternatively, Images (10) can be modeled or generated on a computing system. For example, an Image (10) may be generated using programs, code, or applications employing a virtual geometry or digital world model.
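By way of illustration only (not part of the disclosure), a minimal sketch of loading a captured photosphere is shown below; the function name and the 2:1 equirectangular aspect-ratio check are assumptions about one common photosphere format, not a requirement of the invention.

```python
# Hypothetical sketch: load a captured photosphere and sanity-check that it
# looks like an equirectangular image (width ~ 2x height), one common output
# of the capture methods named above. Names and checks are assumptions.
from PIL import Image


def load_photosphere(path: str) -> Image.Image:
    """Open an image file and verify it plausibly is an equirectangular photosphere."""
    img = Image.open(path)
    width, height = img.size
    if abs(width / height - 2.0) > 0.01:
        raise ValueError(f"{path} does not look like an equirectangular photosphere")
    return img

# Usage (assuming a capture exists on disk):
# sphere = load_photosphere("room_capture.jpg")
```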

[00025] Once an Image (10) is recorded, captured, or modeled, it may be annotated as described in US Patent Applications 15/669,711, filed on August 4, 2017, and 15/216,981, filed on July 22, 2016, commonly owned by the current Applicant and incorporated by reference herein in their entirety. As described in more detail in those applications, annotation can include voice-over annotation, providing a description, inking, or a combination thereof.

[00026] Referring to Figs. 1A and 1B, a user may capture, record, or model an Image (10) in one location (11) during more than one period. For example, one period can be a time when a building or room has been built only to its beams and supports. A second period may be when the same building or room has been finished. Period, as used herein, can be of any length of time. For example, a period can be seconds, hours, days, months, or years.

[00027] Each Image (10) recorded in a location (11) over more than one period is aligned with other Images (10) taken at a different period from the same or approximately the same point of view. In one embodiment, Images (10) are aligned manually. In another embodiment, Images (10) are optically aligned using geometric elements or other elements of the Images (10). In one embodiment, Images (10) are aligned using compass headings from the moment of capture. In another embodiment, Images (10) are aligned along an x-, y-, or z-axis.

[00028] Each aligned Image (10) may be annotated at the time it was captured, recorded, or modeled. Referring to Fig. 2A, each aligned Image (10), together with its annotations, is registered in a database and associated with an alpha-numeric or other descriptor (21) that allows for easy recall of the aligned Images (10).
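A minimal sketch of such a registry follows, under assumptions not stated in the disclosure: the class and field names are hypothetical, and an in-memory dictionary stands in for whatever database the system actually uses.

```python
# Hypothetical sketch: register aligned Images and their annotations under a
# unique alpha-numeric descriptor so they can be recalled later when the
# descriptor is read from a fiducial marker.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class AlignedCapture:
    period: str                                  # e.g. "beams-only" or "finished"
    image_path: str                              # where the photosphere or model is stored
    annotations: List[str] = field(default_factory=list)


class ImageRegistry:
    def __init__(self) -> None:
        self._by_descriptor: Dict[str, List[AlignedCapture]] = {}

    def register(self, descriptor: str, capture: AlignedCapture) -> None:
        self._by_descriptor.setdefault(descriptor, []).append(capture)

    def recall(self, descriptor: str) -> List[AlignedCapture]:
        return self._by_descriptor.get(descriptor, [])


registry = ImageRegistry()
registry.register("A1B2C3", AlignedCapture("unfinished", "room_t1.jpg", ["add lighting here"]))
registry.register("A1B2C3", AlignedCapture("finished", "room_t2.jpg"))
print([c.period for c in registry.recall("A1B2C3")])
```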

[00029] Referring to Fig. 2B, a fiducial marker (20) is provided in the location (11) where the aligned Images (10) were recorded, captured, or modeled. Each fiducial marker (20) provides a unique descriptor (21) that allows a user to interact, in an immersive or non-immersive environment, with aligned Images (10) for that location (11). In an embodiment, the fiducial marker (20) does not have the same point of view as the aligned Images (10). In an embodiment, the fiducial marker (20) does have the same point of view as the aligned Images (10). The fiducial marker (20) is in the same location (11) where the aligned Images (10) are captured, recorded, or modeled.

[00030] For example, referring to Figs. 3 and 4, a user, interacting in an immersive or non-immersive environment using a mobile phone, or other tethered or untethered augmented reality or virtual reality hardware, recognizes a descriptor (21) at a location (11). Preferably, the tethered or untethered augmented reality or virtual reality hardware deploys optical character recognition and image matching software to read the descriptor (21). Alternatively, the user may input the descriptor (21) via any usual manual input device or software aligned with the user's tethered or untethered augmented reality or virtual reality hardware.
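A hedged sketch of one possible reading path is shown below. The disclosure does not name an OCR library; pytesseract (a wrapper around the Tesseract engine, which must be installed separately) is used purely as an example, with manual entry as the fallback described above.

```python
# Hypothetical sketch: read the descriptor printed on a marker with OCR, or
# fall back to manual input if nothing legible is recognized.
from PIL import Image
import pytesseract


def read_descriptor(marker_image_path: str) -> str:
    """Try OCR on the marker photo first; fall back to typed input."""
    text = pytesseract.image_to_string(Image.open(marker_image_path))
    descriptor = "".join(ch for ch in text if ch.isalnum())
    if descriptor:
        return descriptor
    return input("Enter the descriptor printed on the marker: ").strip()

# Usage (assuming a photo of the marker exists):
# code = read_descriptor("marker_photo.jpg")
```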

[00031] Referring to Fig. 5, in one particular embodiment, the fiducial marker (20) comprises at least two matrices (20a, 20b), where each matrix contains at least a portion of a descriptor (21). In one embodiment, a matrix (20a, 20b) is an n x m matrix. In one embodiment, a matrix (20a, 20b) is an n x n matrix. In one embodiment, the border of each matrix (20a, 20b) is black.
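The following is an illustrative sketch of one possible reading of that two-matrix layout: each half of the descriptor is packed bit by bit into the interior cells of a black-bordered grid. The bit layout, grid size, and encoding are assumptions for illustration, not the disclosed marker format.

```python
# Hypothetical sketch: encode the two halves of a descriptor into two small
# binary cell grids (0 = black, 1 = white), each with a black border.
import numpy as np


def descriptor_half_to_matrix(half: str, n: int = 10) -> np.ndarray:
    """Pack the ASCII bits of one descriptor half into the interior of an n x n grid."""
    bits = "".join(f"{ord(ch):08b}" for ch in half)
    cells = np.ones((n, n), dtype=np.uint8)
    cells[0, :] = cells[-1, :] = cells[:, 0] = cells[:, -1] = 0   # black border
    interior = [(r, c) for r in range(1, n - 1) for c in range(1, n - 1)]
    for (r, c), bit in zip(interior, bits):
        cells[r, c] = 0 if bit == "1" else 1
    return cells


descriptor = "A1B2C3"
matrix_a = descriptor_half_to_matrix(descriptor[: len(descriptor) // 2])
matrix_b = descriptor_half_to_matrix(descriptor[len(descriptor) // 2:])
print(matrix_a)
```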

[00032] Referring to Figs. 3 and 4, according to an embodiment, once the descriptor (21) is recognized, the user is provided with aligned Images (10) with which the user may interact in an immersive or non-immersive environment. Preferably, once a descriptor (21) is recognized, an indication (30) is provided to the user as to a point of view from which the aligned Images (10) align with the real world, allowing the user to align herself for interaction with the Image (10). According to an embodiment, the indication (30) is approximately orthogonal to the fiducial marker (20). According to an embodiment, the indication (30) can detect and avoid non-orthogonal planes.

[00033] Referring to Fig. 1A, when a user is interacting with aligned Images (10), in an immersive or non-immersive environment, the user may see an Image (10) embedded with a teleporter or hotspot (40). When the user hovers over the teleporter or hotspot (40), the user is taken to the same location (11) in another period, as shown in Fig. 1B.
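A minimal sketch of the hover-to-switch behaviour is given below. The class and field names are hypothetical and the captures are represented by file paths; the only point illustrated is that hovering over a hotspot swaps the viewer to the same location at the hotspot's target period.

```python
# Hypothetical sketch: a hotspot embedded in one period's Image switches the
# viewer to the capture of the same location at another period on hover.
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass
class Hotspot:
    position: Tuple[float, float]   # (yaw, pitch) in degrees within the photosphere
    target_period: str              # period to jump to when the user hovers


class PeriodViewer:
    def __init__(self, captures_by_period: Dict[str, str], current_period: str) -> None:
        self.captures_by_period = captures_by_period
        self.current_period = current_period

    def on_hover(self, hotspot: Hotspot) -> str:
        """Switch to the same location at the hotspot's target period, if registered."""
        if hotspot.target_period in self.captures_by_period:
            self.current_period = hotspot.target_period
        return self.captures_by_period[self.current_period]


viewer = PeriodViewer({"unfinished": "room_t1.jpg", "finished": "room_t2.jpg"}, "unfinished")
print(viewer.on_hover(Hotspot(position=(120.0, -5.0), target_period="finished")))
```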

[00034] In embodiments, each Image (10) that the user may interact with, in an immersive or non-immersive environment, may include earlier recorded annotations. According to an embodiment, the user may add additional annotation to the Image(s) (10), in an immersive or non-immersive environment. According to an embodiment, more than one user may annotate the Image(s) in an immersive or non-immersive environment, synchronously or asynchronously.
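A sketch of one way to store such annotations is shown below, under assumptions: the record fields (author, kind, payload, timestamp) are hypothetical, and the disclosure only requires that earlier annotations be kept with the Image and that more than one user may add to them, synchronously or asynchronously.

```python
# Hypothetical sketch: an append-only annotation log that several users can add
# to and replay against an aligned Image.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import List


@dataclass
class Annotation:
    author: str
    kind: str          # "voice", "description", or "ink"
    payload: str       # text, audio file path, or serialized ink path
    created_at: datetime


def add_annotation(annotations: List[Annotation], author: str, kind: str, payload: str) -> None:
    annotations.append(Annotation(author, kind, payload, datetime.now(timezone.utc)))


notes: List[Annotation] = []
add_annotation(notes, "architect", "ink", "lighting-fixture-path.json")
add_annotation(notes, "stakeholder-2", "voice", "lighting-question.wav")
print([(a.author, a.kind) for a in notes])
```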

[00035] For exemplary purposes, the following application of the described platform is provided to help the reader more fully understand the platform. The platform should not be considered limited to the following applications.

[00036] One application of the above-described platform is in the field of construction communication. Referring to Figs. 1A and 1B, an architect on a construction project may want to show stakeholders how a room may look when it is finished. The stakeholders may be in different locations around the world or in the same location. To facilitate this, the architect captures or records an Image (10), by one of the methods described above, of the unfinished room shown in Fig. 1B, and renders a model or creates an Image (10) of how the room may look in the future, e.g., a finished room, shown in Fig. 1A. The architect may annotate Fig. 1B to show where additional lighting will be added, for example. The architect will save the Images (10) in Figs. 1A and 1B with his annotation. A descriptor (21) will be used to associate the Images (10) to a point of view. Referring to Figs. 2A and 2B, a fiducial marker (20) having the appropriate descriptor (21) is placed in the location (11) where the aligned Images were recorded, captured, or modeled.

[00037] Referring to Fig. 4, a stakeholder, in an immersive or non-immersive environment, using tethered or non-tethered augmented reality or virtual reality software, enters a location having a fiducial marker (20). The descriptor (21) is recognized by one of the methods described above; consequently, Fig. 1B is shown on the stakeholder's device. The stakeholder views and/or hears the annotation the architect has left; when the stakeholder hovers over the teleporter or hotspot (40), he is transported to the room shown in Fig. 1A. The architect may have previously annotated one or more of the Images (10); the stakeholder will experience those annotations as he interacts with an Image (10). In some cases, where the stakeholder is in a space that closely matches the current location (11), it may not be necessary to show an Image (10) that represents the current period.

[00038] There may be a second stakeholder who interacts with the Images (10) synchronously with the first stakeholder. Here, the second stakeholder may Ink to draw another annotation regarding a lighting issue, for example; this interaction is recorded for later playback or asynchronous interaction. The architect, for example, may asynchronously interact with the second stakeholder's annotation of the Image(s) (10) later.

[00039] Now, as a second example, assume that at a later time, perhaps for maintenance many years later, the same process can be used by a maintenance person, electrician, or other tradesman to determine what is behind the finished product, preventing inadvertent cutting of wires or plumbing, for example.

[00040] As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.), or an embodiment combining software and hardware aspects. Further aspects of this invention may take the form of a computer program embodied in one or more computer-readable media having computer-readable program code/instructions thereon. Program code embodied on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing. The computer code may be executed entirely on a user's computer; partly on the user's computer as a standalone software package or a cloud service; partly on the user's computer and partly on a remote computer; or entirely on a remote computer or a remote or cloud-based server.
