Title:
AUGMENTED REALITY SYSTEM
Document Type and Number:
WIPO Patent Application WO/2012/007764
Kind Code:
A1
Abstract:
An augmented reality system in which an augmented reality server (12) communicates with a plurality of client devices (14) such as smart phones with cameras. The client device detects information concerning the context in which it is located, transmits that information to the server (12), and receives augmented reality data with which it renders an augmented reality scene to a user (18). The user can interact with the augmented reality scene and interaction data is transmitted to the server. When that user or another user is subsequently in the same or a related context and the augmented reality server transmits augmented reality data, that data depends on the earlier interaction of the first user.

Inventors:
MOORE GEORGE GREER (GB)
JOHNSTON MATTHEW IAN (GB)
BRUNDLE TIMOTHY JOHN (GB)
Application Number:
PCT/GB2011/051330
Publication Date:
January 19, 2012
Filing Date:
July 15, 2011
Assignee:
UNIV ULSTER (GB)
MOORE GEORGE GREER (GB)
JOHNSTON MATTHEW IAN (GB)
BRUNDLE TIMOTHY JOHN (GB)
International Classes:
H04L29/06; H04L29/08
Foreign References:
EP1887526A1 (2008-02-13)
US20090049004A1 (2009-02-19)
EP1748370A1 (2007-01-31)
Other References:
TSAI CHO-NAN MICHAEL ET AL: "Paraworld: A GPS-Enabled Augmented Reality Gaming System", Internet citation, 2008, pages 1-10, XP002621482 [retrieved on 2011-02-09]
YU LI ET AL: "Fiducial Marker Based on Projective Invariant for Augmented Reality", Journal of Computer Science and Technology, Kluwer Academic Publishers, vol. 22, no. 6, 17 November 2007, pages 890-897, XP019558333, ISSN: 1860-4749, DOI: 10.1007/S11390-007-9058-Y
Attorney, Agent or Firm:
DEHNS (10 Salisbury Square, London EC4Y 8JD, GB)
Claims:
CLAIMS

1. A method of delivering augmented reality data to a plurality of users by means of an augmented reality data processing server in communication with a plurality of user client data processing devices across a communications network, said server including means for storing augmented reality data and means for delivering said augmented reality data to said client devices, each client device including means for receiving augmented reality data from said server and means for rendering said received data to the respective user as at least part of an augmented reality scene, the content of said augmented reality scene being determined by an attribute associated with the respective user, wherein each client device includes means enabling the respective client to interact with the augmented reality scene and means for communicating information representing the user interaction to said augmented reality server, and wherein said server is configured to respond to said information representing the user interaction so as to determine which augmented reality data is to be delivered to a client device in respect of a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes, and/or to determine how to respond to subsequent user interaction by a user with a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes.

2. A method as claimed in claim 1, wherein subsequent rendering of said augmented reality scene and/or said one or more other subsequently rendered augmented reality scenes may be made to any one or more of said plurality of users.

3. A method as claimed in claim 1 or 2, wherein the attribute associated with the user includes data relating to the context of the client device of the user and/or a user profile associated with the user.

4. A method as claimed in claim 3, wherein the data relating to the context of the client device of the user relates to one or more of time, date, orientation, location, motion, speed, ambient illumination and/or ambient temperature, and/or status with respect to another client device.

5. A method as claimed in any preceding claim, wherein the augmented reality scene includes a live image that has an overlay of information that augments the meaning of the image.

6. A method as claimed in claim 5, wherein the client device is a mobile communications device that has an imaging system for processing a live image of a real world scene in the vicinity of the user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the server; the server receives from the client device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the client device; and the client device receives the augmented reality data from the server, and uses the augmented reality data to render to the user an augmented reality scene derived from the real world scene.

7. A method as claimed in any preceding claim, wherein augmented reality data is delivered to the user device from a data repository stored locally on the client device, in addition to augmented reality data from the server via the communication network.

8. A method as claimed in any preceding claim, wherein delivery of the augmented reality data is triggered by fiduciary markers and sensor data gathered by the client device.

9. A method as claimed in claim 1, wherein the client device is a mobile communications device that has an imaging system for processing a live image of a real world scene in the vicinity of the user.

10. A method as claimed in claim 9, wherein the mobile communications device has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene; wherein the user uses the communications device to interact with the augmented reality scene; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene, the remote data processing facility modifies the stored augmented reality data associated with the real world scene, in accordance with the data which represents the interaction with the augmented reality scene; and when subsequently a second user is in the vicinity of the same real world scene and has a second mobile communications device which has an imaging system for processing a live image of the real world scene and a module which detects at least one attribute which identifies the real world scene and communicates data representing the or each attribute to the remote data processing facility, the remote data processing facility communicates the modified augmented reality data to the second communications device, and the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene.

11. A method as claimed in claim 10, wherein the second user and any subsequent users may also interact with the augmented reality scene so as to further modify the augmented reality scene, and in each case the remote data processing facility stores modified augmented reality data to present to subsequent users in the vicinity of the same real world scene.

12. A method as claimed in claim 1, wherein the client device is a mobile communications device that has an imaging system for processing a live image of a real world scene in the vicinity of the respective user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene; wherein a first user uses a first communications device to interact with the augmented reality scene so rendered; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene; wherein a second user in the vicinity of the same real world scene has a second mobile communications device which has an imaging system for processing a live image of the real world scene and a module which detects at least one attribute which identifies the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility communicates to the second communications device, augmented reality data that has been modified in accordance with the data which represents the interaction of the first user with the augmented reality scene; the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene; and the second user uses the second communications device to interact with the augmented reality scene in a manner that is dependent on the interaction of the first user with the augmented reality scene.

13. An augmented reality system comprising an augmented reality server in communication with a plurality of clients across a communications network, said server including means for storing augmented reality data and means for delivering said augmented reality data to said clients, each client including means for receiving augmented reality data from said server and means for rendering said received data to a user as at least part of an augmented reality scene, the content of said augmented reality scene being determined by one or more attributes of the client, typically including the client's location, wherein each client includes means for determining the user's interaction with the augmented reality scene and means for communicating information representing said user interaction to said augmented reality server, and wherein said server is arranged to determine which augmented reality data is to be delivered to a client in respect of a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes depending on said user interaction information, and/or to determine how to respond to subsequent user interaction by a client with a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes depending on said user interaction information.

14. A method of delivering augmented reality data to a user having a mobile communications device, from a remote data processing facility over a communications network, in which the mobile communications device has an imaging system for processing a live image of a real world scene in the vicinity of the user, has a module which detects in the image a fiduciary marker identifying the real world scene, and has a sensor module which detects an attribute associated with the real world scene and communicates data representing the fiduciary marker and the attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the fiduciary marker and the attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene.

15. A method of delivering augmented reality data to a user having a mobile communications device, from a remote data processing facility over a communications network, in which the mobile communications device has an imaging system for processing a live image of a real world scene in the vicinity of the user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene; wherein the user uses the communications device to interact with the augmented reality scene; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene, the remote data processing facility modifies the stored augmented reality data associated with the real world scene, in accordance with the data which represents the interaction with the augmented reality scene; and when subsequently a second user is in the vicinity of the same real world scene and has a second mobile communications device which has an imaging system for processing a live image of the real world scene and a module which detects at least one attribute which identifies the real world scene and communicates data representing the or each attribute to the remote data processing facility, the remote data processing facility communicates the modified augmented reality data to the second communications device, and the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene.

16. A data processing system configured to carry out a method as claimed in any preceding claim.

17. Computer software containing instructions which, when carried out on elements of a data processing system, will configure the elements to carry out a method as claimed in any of claims 1 to 15.

Description:
AUGMENTED REALITY SYSTEM

The present invention relates to augmented reality data processing systems.

Augmented Reality (AR) combines real world and computer-generated data.

Typically, an AR system superimposes computer generated data over an image of a real world environment. There are known systems in which an image of a real world environment is a direct view of that environment and the superimposed data is, for example, speed data of the sort displayed on a "head-up" display for the pilot of an aircraft or the driver of an automobile. However, the present invention is particularly, but not exclusively, concerned with systems in which the image of the real world environment is obtained by an image capture device, such as a camera, and displayed on an electronic image display device. In addition, the computer generated data that is superimposed over the real world image on the electronic image display device is preferably related to content in the real world view.

There are increasingly powerful mobile computing devices that can be used in AR applications, including laptop and smaller computers, tablet computing devices such as the iPad™ and smart mobile phones such as the iPhone™. These can provide the user with access to mobile computing, location detecting systems using global positioning satellite signals and/or triangulation from mobile phone transmitters, telephony and numerous software applications.

In this type of AR environment, current AR development includes two areas, namely sensor analysis and video stream analysis (computer sight). Sensor analysis is where data is taken from sensors incorporated into the user's device (e.g. accelerometer, GPS, digital compass, proximity meters, microphones, Bluetooth, light sensors) and the data fed into the processor which displays information on the screen based on the sensor input (location, heading, proximity, ambient sound and light). Video stream analysis is where the video stream itself is interpreted and specific fiduciary markers are identified and recognised by a software engine running on the device and an appropriate response appears on screen. A fiduciary marker, or fiducial, is a marker applied to an object in a scene so that the object can be recognized in images of the scene. Typically, these markers appear as two-dimensional bar-codes but can be any pre-determined pattern, or even basic shape silhouettes. When such a marker is identified on static media such as a billboard or building which is viewed through a suitable device, there is displayed additional data which can be any digital content including advertisements, public information and entertainment.
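
By way of a non-limiting sketch only (in Python; every name, reading and content string below is a hypothetical illustration, not taken from this application), the two approaches can be pictured as two look-ups feeding a single on-screen overlay:

    # Sensor analysis: map device readings to on-screen information.
    SENSOR_OVERLAYS = {
        "gps": lambda v: "Location: %.4f, %.4f" % v,
        "compass": lambda v: "Heading: %.0f degrees" % v,
        "light": lambda v: "Ambient light: %d lux" % v,
    }

    # Video stream analysis: map recognised fiduciary markers to content.
    MARKER_CONTENT = {
        "billboard-001": "Advertisement: spring sale",
        "station-entrance": "Next train: 5 minutes",
    }

    def build_overlay(sensor_readings, detected_markers):
        """Return the lines of information to superimpose on the live view."""
        lines = [SENSOR_OVERLAYS[name](value)
                 for name, value in sensor_readings.items()
                 if name in SENSOR_OVERLAYS]
        lines += [MARKER_CONTENT[m] for m in detected_markers
                  if m in MARKER_CONTENT]
        return lines

    print(build_overlay({"gps": (54.5973, -5.9301), "compass": 270.0},
                        ["billboard-001"]))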

The market for AR applications has tended to take one of two approaches.

The first is advertising and design studios using software such as FLARToolkit™, a Flash™-based form of ARToolkit™, which is being used to build technology demonstrations for marketing purposes. These simply analyse the video stream, identify fiduciary markers and display a 3D image in place. Whilst these have drawn interest to AR, their application is primarily limited to such technology demonstrations.

The second approach, using sensor based technology, is currently providing advertising and public service information to end users using GPS and compass data. This might be to provide directions and an overlay on a view from a built-in camera of a device, regarding for example local bus and train stops, homes for sale or rent, or geo-coded data from services like Twitter™ or Brightkite™. Market uptake of GPS-equipped smart phones and tablet devices has created a demand for location aware services.

While these have proven to be initially popular approaches to Augmented Reality, they are largely inert, lacking social context.

A first aspect of the invention provides a method of delivering augmented reality data to a plurality of users by means of an augmented reality data processing server in communication with a plurality of user client data processing devices across a communications network, said server including means for storing augmented reality data and means for delivering said augmented reality data to said client devices, each client device including means for receiving augmented reality data from said server and means for rendering said received data to the respective user as at least part of an augmented reality scene, the content of said augmented reality scene being determined by an attribute associated with the respective user, wherein each client device includes means enabling the respective client to interact with the augmented reality scene and means for communicating information representing the user interaction to said augmented reality server, and wherein said server is configured to respond to said information representing the user interaction so as to determine which augmented reality data is to be delivered to a client device in respect of a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes, and/or to determine how to respond to subsequent user interaction by a user with a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes.

In some embodiments, said subsequent rendering of said augmented reality scene and/or said one or more other subsequently rendered augmented reality scenes may be made to any one or more of said plurality of users.

The user interaction may be explicit, e.g. a user response to a rendered augmented reality scene, and/or implicit, e.g. by virtue of contextual data associated with the client device and/or user profile data.

Said attribute associated with the user may include data relating to the context of the client device of the user and/or a user profile associated with the user.

Contextual data may for example relate to one or more of time, date, orientation (e.g. orientation of the respective client device), location, motion, speed, ambient illumination and/or ambient temperature, and/or status with respect to another client device.

In some embodiments, the system is arranged to change in response to user interactions, particularly such that one or more subsequently rendered augmented reality scenes are different from one or more corresponding previously rendered scenes. The respective scenes may relate to the same real world location(s) and/or to associated other real world location(s). The respective scenes may relate to the same client(s)/user device(s) and/or to other client(s)/user device(s). The changes to the system may include changes to the data rendered to the client device(s) as part of the subsequently rendered augmented reality scene(s) and/or changes in the manner in which the system, and in particular the server, responds to user interactions with said subsequently rendered augmented reality scene(s).

The user interaction may create new augmented reality data for rendering as part of one or more subsequently rendered augmented reality scene(s) - for example new media asset data or other data created by or stored on the client and communicated to the server. Alternatively or in addition, said user interaction may cause existing augmented reality data to be modified and/or cause the server to select alternative augmented reality data from an existing repository for rendering as part of one or more of the subsequently rendered augmented reality scene(s).

Preferably, an augmented reality scene includes a live image that has an overlay of information that augments the meaning of the image. Thus, in some embodiments the client device is a mobile communications device that has an imaging system for processing a live image of a real world scene in the vicinity of the user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the server, which is a remote data processing facility; the server receives from the client device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the client device; and the client device receives the augmented reality data from the server, and uses the augmented reality data to render to the user an augmented reality scene derived from the real world scene.
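
A minimal sketch of that attribute-to-data round trip, with an in-memory dictionary standing in for the store of augmented reality data (the attribute strings, keys and overlay text are invented for illustration and are not part of this application):

    # Server side: scene-identifying attributes -> augmented reality data.
    AR_STORE = {
        ("marker:shopfront-17",): {"overlay": "Opening hours: 9am-5pm"},
        ("gps:54.60,-5.93",): {"overlay": "City Hall, built 1906"},
    }

    def server_lookup(attributes):
        """Remote data processing facility: retrieve AR data for the scene."""
        return AR_STORE.get(tuple(sorted(attributes)), {"overlay": None})

    def client_render(live_image, attributes):
        """Client: send detected attributes, overlay the returned data."""
        ar_data = server_lookup(attributes)  # a network call in practice
        if ar_data["overlay"]:
            return live_image + " + [" + ar_data["overlay"] + "]"
        return live_image

    print(client_render("live video frame", ["marker:shopfront-17"]))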

Thus, preferred embodiments of the invention can exploit developments at the interface of smart phone platform technology (which can capture live information) and virtual world engines (which provide a web environment in which geographic information can be entered).

Preferred embodiments of the invention are applicable to several fields including: enabling the use of decision support for complex tasks, architecture and surveying, tourism and entertainment, virtual devices, games and advertising.

In some embodiments of the invention, content data is delivered to a user device from a database or other data repository stored locally on the client device, in addition to data from the server via a communication network. A preferred system is able to deliver to the clients data stored both remotely on the content server and locally on the client device. The data may be pre-determined (in which case it may be static or variable) and/or may be created by users of the client devices and/or from users' interactions with the system.

Viewed from an alternative aspect of the invention, there is provided a method of delivering augmented reality data from a repository of augmented reality data held on a user client data processing device, wherein the client device includes means for receiving augmented reality data from said repository and means for rendering said received data to the user as at least part of an augmented reality scene, the content of said augmented reality scene being determined by an attribute associated with the user, wherein the client device includes means enabling the client to interact with the augmented reality scene and means for communicating information representing the user interaction to a data processing module associated with the repository, and wherein said data processing module is configured to respond to said information representing the user interaction so as to determine which augmented reality data is to be delivered to the client device in respect of a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes, and/or to determine how to respond to subsequent user interaction by the user with a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes.

In embodiments of the aspects of the invention, delivery of the augmented reality data is triggered by fiduciary markers and/or data gathered by the client device, e.g. sensor data. Advantageously, the system supports a transaction system for altering the behaviour of triggers, and/or the behaviour of the system (especially in response to said triggers), depending on interactions from multiple users.

For example, in a multi-player game, which involves the discovery and capture of several geographically and/or temporally located control points, the behaviour of the players in this game is determined by the interactions of other players within the game. The interactions will control the movement of players throughout the game as they attempt to capture the control points (which may change location as the game proceeds). Hence, players are caused to change their behaviour and physical location during the playing of a game due to the interactions of other players in the game, some of which may be human and others may be virtual.

Some preferred embodiments of the invention enable the interactions of users in a system to influence the outcomes and actions of other users of the system in a real world environment using both sensor and fiduciary marker-based augmented reality.

In some arrangements in accordance with the invention, the system includes means for marking the target environment, e.g. fiduciary markers, geographical and/or temporal data, as well as means for sensing the presence of markers within that environment, e.g. camera, clock, GPS and/or compass. Conveniently, this can be implemented using suitably equipped smart phones as client devices.

The system may also include, or be co-operable with (e.g. via an API), a social engine to provide a social context for the target environment and the user interactions that will take place within it. In this context a social engine comprises a system that supports online social networking.

In typical embodiments, the system supports an augmented environment with which users of client devices can interact. The augmented environment is defined through the combination of existing data relating to the environment together with additional markers, geographical and/or temporal data provided for the purpose of marking-up the environment, and media assets (which may for example comprise text, audio, video and/or images being renderable to the user via his client device) developed to augment the environment. Said existing data may comprise preexisting data indicating the nature of the environment, e.g. indicating if a given location is indoors or outdoors, a shop, a public space, and so on. Said existing data may also comprise data indicating the state of user interactions with the environment, e.g. who has interacted with which aspects of the environment and what is the current state of the environment as a result of said interactions. Users are able to interact with the system in order to discover the data and assets that have been embedded within the augmented environment and to alter the existing data or media asset based on the nature of their interactions with the system. In this context, the term "asset" embraces various forms of relatively complex user renderable data, e.g. text, audio, video and/or images, whereas "data" is intended to embrace other data such as the name of a location, or a fact relating to a location, or an indication of the system state with respect to a location, and so on. It will be understood that all of these forms of data and assets may be referred to more generally as "data".

By way of example, should a user take possession of a media asset that represents an artefact within the augmented environment, then that asset will no longer be available to other users of the system. Similarly, should a user create or modify an asset or data item within the environment other users will be able to discover and interact with that modified version. More subtly, using the example of a virtual, i.e. computer generated, character that has been embedded within the environment, each user may experience different reactions or engage in a different version of predetermined dialogue with the character dependent on previous interactions between the character and other users.
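
The environment model and the possession example might be sketched as follows (a hedged illustration only; the class layout, field names, "gallery" and "statue" are assumptions, not this application's data structures):

    from dataclasses import dataclass, field

    @dataclass
    class Location:
        nature: str                                   # e.g. "shop", "public space"
        markers: list = field(default_factory=list)   # fiduciary/geo/temporal triggers
        assets: dict = field(default_factory=dict)    # asset id -> media asset
        state: dict = field(default_factory=dict)     # record of user interactions

    def take_possession(environment, location_id, asset_id, user):
        """A user takes an artefact; it is then unavailable to other users."""
        location = environment[location_id]
        asset = location.assets.pop(asset_id, None)
        if asset is not None:
            location.state[asset_id] = "taken by " + user
        return asset

    environment = {"gallery": Location(nature="public space",
                                       markers=["marker-042"],
                                       assets={"statue": "<3D model>"})}
    take_possession(environment, "gallery", "statue", "alice")
    print(environment["gallery"].assets)   # {} - gone for subsequent users
    print(environment["gallery"].state)    # {'statue': 'taken by alice'}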

From the above it can be seen that the novelty of the approach in accordance with some embodiments of the invention is twofold: firstly, in the combination of mark-up techniques not currently combined; and secondly, in the use of ongoing interactions with the system to allow user generated data to combine with and alter the state of the system.

The rendering of an augmented reality scene may comprise overlaying augmented reality data on to an image of a real world scene. However, the rendering could include only adding audio or other non-visual sensory characteristics to the real world scene.

In embodiments of the invention, user interactions change what data is served to the user and earlier interactions have the ability to change the underlying data. As such, the system subsequently serves a revised version of the scene element that, for example, actively interacts with the user in a different way. An example is a virtual waiter in a coffee shop. Interaction might start when the presence of a customer in the coffee shop has been detected, and possibly not until the system has also sensed that the customer is seated. Once an initial customer has interacted with the virtual waiter, then subsequent interactions with the virtual waiter for that and other customers are different, changed by that initial interaction.
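
A toy sketch of the virtual waiter, assuming a single shared state flag (the dialogue strings are invented):

    # Stored scene data for the virtual waiter; the first interaction
    # modifies it, so later renderings differ for every customer.
    waiter_state = {"greeted_anyone": False}

    def waiter_dialogue(customer):
        if not waiter_state["greeted_anyone"]:
            waiter_state["greeted_anyone"] = True
            return "Welcome, %s! You are our first customer today." % customer
        return "Hello, %s. You are not the first customer today." % customer

    print(waiter_dialogue("Alice"))  # initial interaction changes the stored data
    print(waiter_dialogue("Bob"))    # subsequent interaction is different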

Embodiments of the present invention can actively sense a user's presence and actions and, to a degree, respond to them in an appropriate manner, record the interactions and change the nature of what the system does next and in the future, in a truly dynamic manner. As a result of interactions, active and passive, the scene elements are modified. In subsequent interactions some data might never be presented; some user interactions in the physical space or with the scene element might result in a different, or no, interaction being initiated; and so on.

Viewed from a further aspect, the present invention provides a method of delivering augmented reality data to a user having a mobile communications device, from a remote data processing facility over a communications network, in which the mobile communications device has an imaging system for processing a live image of a real world scene in the vicinity of the user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene; wherein the user uses the communications device to interact with the augmented reality scene; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene, the remote data processing facility modifies the stored augmented reality data associated with the real world scene, in accordance with the data which represents the interaction with the augmented reality scene; and when subsequently a second user is in the vicinity of the same real world scene and has a second mobile communications device which has an imaging system for processing a live image of the real world scene and a module which detects at least one attribute which identifies the real world scene and communicates data representing the or each attribute to the remote data processing facility, the remote data processing facility communicates the modified augmented reality data to the second communications device, and the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene.

In preferred embodiments of this aspect of the invention, the second user and any subsequent users may also interact with the augmented reality scene so as to further modify the augmented reality scene, and in each case the remote data processing facility will store modified augmented reality data to present to subsequent users in the vicinity of the same real world scene.

This aspect of the invention thus provides interaction between a number of users, via the augmented reality system. A user can modify the augmented reality scene on that user's communications device, for example by adding an object to the scene or removing an object from the scene, and the modifications will be stored by the remote data processing facility and will be used when the same user or a different user is presented with an augmented reality scene derived from the same real world scene.

Viewed from a further aspect, the present invention provides a method of delivering augmented reality data to a plurality of users each having a respective mobile communications device, from a remote data processing facility over a communications network, in which each mobile communications device has an imaging system for processing a live image of a real world scene in the vicinity of the respective user, and has a module which detects at least one attribute identifying the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the or each attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene; wherein a first user uses a first communications device to interact with the augmented reality scene so rendered; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene; wherein a second user in the vicinity of the same real world scene has a second mobile communications device which has an imaging system for processing a live image of the real world scene and a module which detects at least one attribute which identifies the real world scene and communicates data representing the or each attribute to the remote data processing facility; the remote data processing facility communicates to the second communications device, augmented reality data that has been modified in accordance with the data which represents the interaction of the first user with the augmented reality scene; the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene; and the second user uses the second communications device to interact with the augmented reality scene in a manner that is dependent on the interaction of the first user with the augmented reality scene.

Viewed from another aspect, the invention provides an augmented reality system comprising an augmented reality server in communication with a plurality of clients across a communications network, said server including means for storing augmented reality data and means for delivering said augmented reality data to said clients, each client including means for receiving augmented reality data from said server and means for rendering said received data to a user as at least part of an augmented reality scene, the content of said augmented reality scene being determined by one or more attributes of the client, typically including the client's location, wherein each client includes means for determining the user's interaction with the augmented reality scene and means for communicating information representing said user interaction to said augmented reality server, and wherein said server is arranged to determine which augmented reality data is to be delivered to a client in respect of a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes depending on said user interaction information, and/or to determine how to respond to subsequent user interaction by a client with a subsequent rendering of said augmented reality scene and/or one or more other subsequently rendered augmented reality scenes depending on said user interaction information.

Viewed from a further aspect, the present invention provides a method of delivering augmented reality data to a user having a mobile communications device, from a remote data processing facility over a communications network, in which the mobile communications device has an imaging system for processing a live image of a real world scene in the vicinity of the user, has a module which detects in the image a fiduciary marker identifying the real world scene, and has a sensor module which detects an attribute associated with the real world scene and communicates data representing the fiduciary marker and the attribute to the remote data processing facility; the remote data processing facility receives from the communications device the data representing the fiduciary marker and the attribute, retrieves, from a store of augmented reality data, augmented reality data associated with the real world scene, and communicates that augmented reality data to the communications device; and the communications device receives the augmented reality data from the remote data processing facility, and uses the augmented reality data to render an augmented reality scene derived from the real world scene.

In a preferred embodiment of this aspect of the invention, the user uses the communications device to interact with the augmented reality scene; the communications device communicates to the remote data processing facility, data which represents the interaction with the augmented reality scene, the remote data processing facility modifies the stored augmented reality data associated with the real world scene, in accordance with the data which represents the interaction with the augmented reality scene; and when subsequently a second user is in the vicinity of the same real world scene and has a second mobile communications device which has an imaging system for processing a live image of the real world scene and has a module which detects in the image a fiduciary marker identifying the real world scene, and has a sensor module which detects an attribute associated with the real world scene and communicates data representing the fiduciary marker and the attribute to the remote data processing facility, the remote data processing facility communicates the modified augmented reality data to the second communications device, and the second communications device receives the modified augmented reality data from the remote data processing facility and uses that modified augmented reality data to render an augmented reality scene derived from the real world scene.

In respect of the various aspects of the invention, the invention may be expressed as a method; as a data processing system comprising a server and one or more user devices communicating over a communications network; or as computer software for programming a server and/or a user device so as to be configured as part of the data processing system or so as to be configured to carry out the method. Computer software may be provided in tangible form, such as recorded on a DVD or another memory device, or may be provided by data communication from a remote location, for example as a download over the Internet. Such software will contain instructions which when carried out on processors of elements of a data processing system will cause those elements to be configured to carry out a method in accordance with the invention.

Some embodiments of the invention will now be described by way of example, and with reference to the accompanying drawings, in which:

Figure 1 shows a block diagram of an augmented reality system embodying the invention; and

Figure 2 shows a flowchart illustrating the preferred operation of the system of Figure 1.

Referring now to figure 1 of the drawings there is shown, generally indicated as 10, an augmented reality system embodying the invention. The system 10 comprises an augmented reality server 12 that is capable of communicating with a plurality of clients 14 (only one shown) across a telecommunications network (indicated generally as 16). The network 16 may take any suitable form, for example a telephone network, especially a mobile (cellular) telephone network, or a computer network, e.g. the internet, or any suitable combination of telecommunications networks.

In use, the system supports an augmented environment with which users 18 can interact via the clients 14. The augmented environment is defined through the combination of existing data relating to the environment together with additional markers, or triggers, such as fiduciary markers (which are typically physical markers provided in the real world and being detectable by the clients 14, typically via a digital camera), geographical triggers and/or temporal triggers (which are activated when the client 14 is at a corresponding geographical location or a corresponding time/date respectively). In response to the detection or activation of a marker/trigger, the system 10 is arranged to cause one or more media assets to be rendered to the user via the client 14. The media assets typically comprise text, audio, video and/or images. As is described in more detail hereinafter, users are able to interact with the system in order to discover the data and assets that have been embedded within the augmented environment and to alter the existing data or asset based on the nature of their interactions with the system.

In the preferred embodiment, the augmented reality server 12 comprises a decision control server 26 and a content server 28 arranged for communication with one another and to support a content release/acceptance protocol.

The client 14 comprises a software client 14A supported by a computing device 14B. In use, the software client 14A communicates with the augmented reality server 12 while the computing device 14B provides an interface between the software client 14A and a user 18, as well as facilitating communication with other entities as is described in more detail hereinafter. Typically, the computing device 14B includes means for communicating wirelessly with the network 16. Most embodiments of the device 14B include a display screen, e.g. an LCD display or other VDU, means for connecting to one or more communication networks (including network 16) and an ability to process and display data to the user. The device 14B typically also includes one or more user input devices, e.g. a key pad, touch screen, microphone and/or mouse. Ideally the device 14B is configured to receive data, including updates, from one or more external and/or internal devices. For example, the device 14B may include one or more of a clock, camera, temperature sensor, accelerometer, digital compass, proximity meter, microphone, Bluetooth connectivity, light sensor or other sensor (not shown), especially sensors for enabling the device 14B to interact with the external environment 24, and/or the device 14B may be capable of communication with one or more remote devices such as one or more out of band servers 20 (only one shown), or a satellite/GPS transmitter 22, or other remote computing device via any suitable communications network, e.g. a telephone network and/or the internet. Conveniently, the device 14B takes the form of a portable device such as a mobile telephone, especially a smart phone, PDA (personal digital assistant), a connected tablet or netbook, a portable laptop or other personal computing device. The device 14B may also take the form of a desktop computer.

During use, the sensors, communication links and other tools/devices associated with the client device 14B enable the device 14B to gather contextual data concerning its current circumstances. The contextual data may for example relate to one or more of time, date, orientation (e.g. orientation of the respective client device), location, motion, speed, ambient illumination and/or ambient temperature, and/or status with respect to another client. Location data may be obtained from a GPS receiver and/or by detecting a fiduciary marker (typically by means of a digital camera). This data is collected by the client device 14B, processed by the software client 14A and, as appropriate, communicated to the decision control server 26 by the client 14A. The decision control server 26 responds to the contextual data sent to it by the client 14 as is described in more detail hereinafter.
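
A sketch of this gathering step (the sensor-access interface below is a hypothetical stand-in; a real software client 14A would read the device's own sensor APIs):

    import datetime

    def gather_context(sensor_readings):
        """Collect whatever contextual data the device can currently provide;
        absent sensors simply contribute nothing."""
        context = {"time": datetime.datetime.now().isoformat()}
        for key in ("location", "orientation", "speed", "ambient_light"):
            if sensor_readings.get(key) is not None:
                context[key] = sensor_readings[key]
        return context

    # Example: GPS and compass available, other sensors absent.
    readings = {"location": (54.5973, -5.9301), "orientation": 182.0}
    print(gather_context(readings))  # passed on to the decision control server 26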

The, or each, out of band server 20 provides ancillary services to the system 10, i.e. services that go outside of the client/server paradigm. Such services may include Push updates, information received by SMS text message, email, Wave and/or other services. By way of example, an update or other information sent by the server 20 may comprise a message concerning the state of the system 10 and in particular concerning changes in the state of the system 10, for example changes in the supported virtual environment as a result of user interactions, including messages concerning the status of the user and/or users of other clients in the system 10.

The software client 14A runs on the client device 14B and renders data to the user 18. The data includes content received from the augmented reality server 12 via the network 16 and may also include data received from any sensor associated with the device 14B and/or any other server/device 20, 22 with which the device 14B is in communication. The data may additionally or alternatively comprise data stored locally in the client device 14B (in any suitable storage device) or in a storage device to which the client device 14B is connected during use. The client 14A may render the data to the user by means of one or more visual, or graphical, overlays on a live video display produced by the camera of the client device 14B (this is commonly known as an Augmented Reality 'Magic Lens'). Alternatively, or in addition, data may be rendered to the user 18 as a media asset (e.g. text, audio, video, images). Typically, the data is rendered to the user 18 via the display device of the client 14B, although at least some of the data may be rendered to the user by means of an audio rendering device, e.g. a speaker. Data that is deliverable to a user may comprise a media asset and/or other forms of data.

In use, the software client 14A processes information received by the client device 14B. The received data may be received in any convenient form and by any convenient means, e.g. Push data or SMS from an out of band server 20, GPS location data from a satellite or other GPS device, sensor data from one or more sensors provided on or connected to the device 14B, or interactions from the end user 18 via one or more user interfaces (which may for example include one or more of a key pad, touch screen, microphone, mouse or other input device). Should the client device 14B not be able to receive some data, the software client 14A communicates this to the decision control server 26. Advantageously, the software client 14A is arranged to cache relevant data in a local memory device (not shown) on the client device 14B to mitigate against network timeouts.

The content server 28 comprises a repository of data that is deliverable to the client 14, at least some of which is renderable to the user 18. The data typically includes text (including scripts), audio data, images (e.g. bitmaps and/or vectors) and video data. The data is delivered to the client 14 if appropriate permission is received from the decision control server 26 via the content release/acceptance protocol. The client device 14B includes a storage device, typically a cache memory, for storing content data received from the content server 28, the cached data being accessible by the software client 14A.
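
A minimal sketch of delivery gated on permission from the decision control server (identifiers and payloads invented for illustration):

    # Permissions granted by the decision control server 26 via the
    # content release/acceptance protocol.
    PERMITTED = {("client-14", "script-intro"), ("client-14", "image-map")}

    CONTENT = {"script-intro": "welcome script", "image-map": "<bitmap>"}

    def deliver(client_id, content_id):
        """Content server 28: release data only with permission."""
        if (client_id, content_id) not in PERMITTED:
            raise PermissionError(content_id + " not released to " + client_id)
        return CONTENT[content_id]

    print(deliver("client-14", "script-intro"))   # delivered and cached locally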

The data stored by the content server 28 typically includes pre-generated, and typically also static, data. It is preferred, however, that the content server 28 can receive content data from the client 14 for delivery to any client 14 in communication with the server 12. Such client-created content may for example be created by the user 18 via the user interface of the device 14B and/or by the client device 14B itself based on, for example, data it receives from a sensor or other device/server such as the out of band server 20 or satellite 22.

The decision control server 26 controls the operation of the augmented reality server 12 and its interactions with the clients 14 and other external entities such as out of band servers 20. The decision control server 26 supports one or more algorithms that can use scripts stored in the content server 28 and execute them. The decision control server 26 is also able to communicate with the client 14 in order to receive data from the client 14. The data received from the client 14 typically comprises contextual data, such as data concerning the current status of the client, e.g. location, ambient temperature, speed, position relative to another client, or other data obtained or derived from the client's interaction with the external environment (e.g. via a sensor) or with an external entity such as an out of band server 20. The data received from the client 14 may also comprise data generated by the user 18, for example a user request, query or response. The decision control server 26 is arranged to respond to data received from the client 14. The response may involve executing one or more algorithms and/or scripts and causing content data to be delivered to the client 14. Contextual data may be communicated in both directions between the client 14 and server 26, e.g. going to the client 14 to update the context within which the content is delivered. It is noted that not all contextual data is generated by the user of a given client device, e.g. another user might have interacted with the environment in a way that changes the context.
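
A sketch of that dispatch, with a small script table standing in for the scripts stored in the content server 28 (the message shapes and script contents are assumptions, not this application's protocol):

    # Scripts stored in the content server 28, keyed here by location.
    SCRIPTS = {"city-hall": lambda ctx: "deliver tour overlay for " + ctx["location"]}
    CONTENT = {"help": "help text"}

    def handle_client_message(message):
        """Decision control server 26: respond to data from a client 14."""
        if message["type"] == "context":            # e.g. location, speed
            script = SCRIPTS.get(message["data"].get("location"))
            return script(message["data"]) if script else None
        if message["type"] == "user":               # request, query or response
            return CONTENT.get(message["data"])
        return None

    print(handle_client_message(
        {"type": "context", "data": {"location": "city-hall", "speed": 0.0}}))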

The decision control server 26 may also be arranged to determine whether received content data at the content server 28 is to be made freely available to any client 14, or whether restrictions will be applied, e.g. as a result of licensing terms.

Due to the range of sensor data available, the decision control server 26 is advantageously able to handle missing or incomplete, non-key, datasets by providing one or more default values (which cannot sway the decision process). Key data, i.e. data that is essential for the current process, cannot be defaulted in this way. Hence, the server 26 is able to handle missing data when establishing context. For example, it might not be possible to get a GPS position for a user, but if the user's location is known to a sufficient degree and GPS precision is not key to the ongoing interaction, then it need not halt the process. "Key data" is context sensitive and so any item of data could be key given an appropriate context.
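
The defaulting rule can be sketched directly (field names and default values are illustrative; which fields count as key is decided per interaction):

    DEFAULTS = {"gps": None, "speed": 0.0, "ambient_light": None}

    def establish_context(received, key_fields):
        """Fill non-key gaps with neutral defaults; halt only on missing key data."""
        context = dict(DEFAULTS)
        context.update(received)
        missing = [f for f in key_fields if context.get(f) is None]
        if missing:
            raise ValueError("cannot proceed, missing key data: %s" % missing)
        return context

    # GPS unavailable, but not key to this interaction, so it need not halt:
    print(establish_context({"speed": 1.2}, key_fields=["speed"]))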

The preferred content release/acceptance protocol supports documentation of the content size, applicability, availability, permissions and licensing of data objects received and released by the content server 28.

In the preferred embodiment the decision control server 26 and the software client 14A support a contextual data/interactions protocol governing the communication, between the decision control server 26 and the software client 14A, of user interactions and contextualised sensor data, or other data gathered by the client device 14B. The protocol preferably also defines the permissions for release of cached content data on the client device 14B.

During use, the decision control server 26 may send status updates to the out of band server 20 for communication to the client 14. Such updates may comprise a message for one or more users concerning the state of the system 10 and in particular concerning changes in the state of the system 10, for example changes in the supported virtual environment as a result of user interactions, including messages concerning the status of the user and/or of users of other clients in the system 10.

Figure 2 illustrates the preferred operation of the system 10 by means of a flow chart. At 201 the software client 14A is launched on the client device 14B, typically by user activation. At 203, a check is made to establish if the user 18 is logged in. If not, a login sequence is initiated (205), which results in a user profile being loaded (207). The profile is conveniently stored and loaded from the remote server 12 and cached on the client device 14B during use. The profile may include data indicating who the user is, past achievements, data relating to ongoing games, their last known context, and so on. The profile helps to identify the user and what activity they are engaged in, and informs their context. Normally, the profile is used by both the client 14A and the server 12. Once the user is logged in and a user profile has been loaded, the client 14A determines the location of the client device 14B (209). This can be achieved by any convenient means, e.g. from a GPS receiver provided in the client device 14B. The client 14A retrieves data that is associated with the determined location of the client device 14B (211). The data may be retrieved from the augmented reality server 12 and/or from local storage on the client device 14B as applicable, e.g. if the relevant data is not stored on the client device, it may be retrieved from the server 12. To retrieve data from the augmented reality server 12, the client 14A sends the location data to the decision control server 26, in response to which the decision control server 26 determines which data should be sent to the client 14A and causes this to be delivered from the content server 28. Optionally, the retrieved data may be determined not only by the determined location but also by one or more aspects of the user profile. The retrieved data may comprise one or more media assets (e.g. text, audio file, video file, image) and/or other data, e.g. data indicating the nature of the environment, for example indicating if a given location is indoors or outdoors, a shop, a public space, and so on. The data may also comprise data indicating the state of user interactions with the environment, e.g. who has interacted with which aspects of the environment and what is the current state of the environment as a result of said interactions.
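
Steps 201 to 211 can be sketched as follows (the client and server structures below are hypothetical stand-ins for the software client 14A, device 14B and server 12):

    def startup(client, server):
        if not client["logged_in"]:                               # step 203
            client["profile"] = server["profiles"][client["user"]]  # 205, 207
            client["logged_in"] = True
        location = client["read_gps"]()                           # step 209
        cached = client["cache"].get(location)                    # step 211: local first,
        return cached or server["content"].get(location)          # else from server 12

    server = {"profiles": {"alice": {"achievements": []}},
              "content": {"54.60,-5.93": ["city-hall-overlay"]}}
    client = {"user": "alice", "logged_in": False, "cache": {},
              "read_gps": lambda: "54.60,-5.93"}
    print(startup(client, server))   # ['city-hall-overlay']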

The client 14A then constructs an augmented scene for rendering to the user 18 (213). Typically, the augmented scene is constructed using not only data retrieved from the augmented reality server 12, but also data retrieved from local memory and/or data obtained from one or more sensors or other devices included in the client device 14B or with which the client device 14B is in communication (e.g. a clock, camera, temperature sensor, accelerometer, digital compass, proximity meter, microphone, Bluetooth port, light sensor).
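
By way of illustration only, the combination of server-supplied assets with locally sensed context at step 213 might be sketched as follows; the data shapes and names are assumptions:

```python
def build_scene(server_data, sensor_context):
    """Combine server-supplied assets with locally sensed context (213)."""
    scene = {"overlays": [], "audio": []}
    for asset in server_data.get("assets", []):
        # Anchor each overlay relative to the device's heading, for example.
        scene["overlays"].append(
            {"asset": asset, "anchor_deg": sensor_context.get("compass_deg")})
    # Local context (clock, light sensor, etc.) can modulate presentation,
    # e.g. switching to a high-contrast rendering in low light.
    if (sensor_context.get("light_level") or 1.0) < 0.2:
        scene["rendering"] = "high_contrast"
    return scene

scene = build_scene({"assets": ["asset-flag-001"]},
                    {"compass_deg": 231, "light_level": 0.7})
```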

At 215, the client 14A causes the augmented scene to be rendered to the user 18. Typically, the scene is rendered via the client device's display and/or audio output device, but may also, or alternatively, be rendered by any other means available to the device 14B, e.g. a vibrator. In preferred embodiments where the client device 14B has a digital camera with a display screen that is capable of rendering live video to the user, at least part and preferably all (or at least all that is capable of being rendered visually) of the augmented scene is rendered to the user as a visual overlay on the live video from the camera.

At 217, the client 14A allows the user's interaction with the augmented scene (and/or any other aspect of the augmented reality environment that is associated with the location of the client device 14B and/or the user's profile) to be determined such that it can be used to change the system 10 if appropriate. The user's interaction may be explicit (e.g. the user 18 may respond to the rendered augmented scene using an input device provided on the client device 14B), and/or may be implicit (e.g. by means of one or more aspects of the contextual data determined by the client device 14B and/or one or more aspects of the user's profile and/or otherwise automatically implemented as a result of the user's interaction with the system, e.g. the fact that a particular media asset or other data is rendered to a user may subsequently change the state of the system).
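
By way of illustration only, the distinction between explicit and implicit interactions at step 217 might be sketched as follows; the event shapes are assumptions:

```python
def collect_interactions(input_events, rendered_assets):
    """Gather explicit and implicit interactions for reporting (217)."""
    interactions = []
    # Explicit: the user responded via an input device on the client 14B.
    for event in input_events:
        interactions.append({"type": "explicit", **event})
    # Implicit: the mere fact that an asset was rendered to the user may
    # subsequently change the state of the system.
    for asset in rendered_assets:
        interactions.append(
            {"type": "implicit", "action": "rendered", "target": asset})
    return interactions
```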

At 219, the user's interaction is used to modify the system 10. In the preferred embodiment, the client 14A sends data relating to the determined user interaction to the decision control server 26, in response to which the decision control server 26 determines what changes should be made to the content data stored in the content server 28. As a result of changes made to the system 10, the augmented scene that would be rendered to a subsequent client, in respect of which the relevant aspect(s) of contextual data (e.g. location, temperature and so on) and/or user profile are the same as for the previous client that caused the change, would be different in at least some respects.
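
By way of illustration only, the server-side effect of step 219 might be sketched as follows; the in-memory store and handler are illustrative assumptions standing in for the content server 28:

```python
# Content state as held by the content server 28 (illustrative stand-in).
content_store = {"asset-flag-001": {"state": "in_play", "location": "zone-7"}}

def handle_interaction(interaction):
    """Apply a reported interaction to the stored content state (219).

    Subsequent clients presenting the same contextual data and/or profile
    aspects will then be served a scene that differs in at least some respects.
    """
    target = interaction.get("target")
    if interaction.get("action") == "capture" and target in content_store:
        # Removed from play: the asset will no longer be rendered as part
        # of any augmented reality scene for this location.
        content_store[target]["state"] = "captured"
    return content_store
```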

At 221, the client 14A updates the user's profile and communicates the updated information to the augmented reality server 12. Updates may relate to changes in the current context due to the most recent interaction, e.g. a move to a new location, achievement of a goal, etc.

At 223, if the user has not requested an exit, then steps 209 to 221 are repeated. As a result of the user's previous interaction with the system, this may cause the augmented scene that is now rendered to him to be different from the scene previously rendered to him (even if his location and/or other contextual data has not changed).

An example of the operation of the system 10 is now described in the context of a multi-player game (each player being a user of a respective client 14) in which a media asset in the form of a virtual (computer generated) flag has to be captured. Player 1 launches the client 14A and determines the location of the flag in the augmented reality environment supported by the system 10. Meanwhile, player 2 also enters the augmented reality environment, determines the location of the flag and begins navigating to the flag's location using augmented reality scenes that are rendered to him via his client device. It is assumed that player 1 successfully navigates to the flag location first, using the augmented display. On reaching the location, player 1 "captures" the flag (e.g. by explicit or other user interaction with the rendered augmented reality scene) and the flag is recorded as captured by his client 14. In response to being notified by the client 14 that the flag has been captured, the decision control server 26 updates the content server 28 such that the media asset representing the flag is removed from play, i.e. it will no longer be rendered as part of any augmented reality scene that is rendered in respect of that location. The system may notify the other player(s) of this change.

In this example it is noted that the state of the virtual asset (in this case the flag) being presented has been changed as the direct result of user interaction. While the flag in this case has no actual value, this need not be the case: the user interaction could change the state of the asset such that it is subsequently rendered in a different form. Moreover, the nature of state changes can be more subtle.

Another example is now described in the context of a real world coffee shop which is associated with the augmented reality system 10. Player 1 activates his client 14A when in the shop. The system 10 causes an augmented reality scene to be rendered to him (as determined for example by the determined location of the client device 14B and/or a fiducial marker). The augmented reality scene (which in this example could be regarded as a virtual waiter) allows player 1 to avail of a promotion by implicit or explicit user interaction. For example, the scene may include a question for player 1, e.g. "Would you like a free coffee refill?". If player 1 interacts by accepting (which may require an explicit acceptance or an implicit one), then the media asset that represents the promotion is modified to take the acceptance into account. Depending on the promotion, this could result in the promotion not being available to subsequent players, or being available in a modified form, or an alternative promotion being rendered to a subsequent user.
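
By way of illustration only, the modification of the promotion asset on acceptance might be sketched as follows; the structure and names are assumptions:

```python
# The media asset representing the promotion (illustrative structure).
promotion = {"offer": "free refill question", "uses_left": 1,
             "fallback": "10% off next coffee"}

def accept_promotion():
    """Record an (explicit or implicit) acceptance of the promotion."""
    promotion["uses_left"] -= 1
    if promotion["uses_left"] <= 0:
        # Subsequent players see a modified or alternative promotion.
        promotion["offer"] = promotion["fallback"]
```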

Should a further player attempt to engage with the system 10 before player 1's interaction is determined and used to update the system, then the system is preferably arranged to render a holding augmented reality scene to the subsequent player, with which the subsequent player may not be able to interact (at least not in respect of the promotion), until the previous player's interaction has been used to update the system, at which time the updated augmented reality scene is rendered to the subsequent user.
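
By way of illustration only, such holding behaviour might be sketched with a pending-interaction flag; the names and scene contents are assumptions:

```python
import threading

_busy = threading.Event()  # set while a player's interaction is pending

def begin_interaction():
    _busy.set()

def finish_interaction():
    _busy.clear()  # system updated; the updated scene can now be served

def scene_for(player):
    if _busy.is_set():
        # Holding scene, not interactive in respect of the promotion,
        # e.g. "the virtual waiter is busy".
        return {"scene": "waiter_busy", "interactive": False}
    return {"scene": "promotion", "interactive": True}
```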

For example, in the present case, if player 2 arrives at the same coffee shop while player 1 is still interacting, player 2 may be informed by his client that the virtual waiter is busy.

Optionally, one user may be allowed to provide content data that is deliverable to a subsequent user. In the present example, the virtual waiter may ask player 1 to set a question (e.g. by selecting a question from a list of possible questions).

The virtual waiter is now able to serve player 2 and asks if he would like to attempt to answer today's free coffee refill question. He accepts and is presented with the question set by player 1. If he gets it right he gets a free refill.

In this case player 1 has won the set goal but the goal is still obtainable by other players in accordance with the conditions set by the system and by the player who won the goal.

It is envisaged that embodiments of the invention could be used in the following applications (without limitation):

Enabling the use of decision support for complex tasks and training. Vertical markets for this include learning for standardised education, career progression, physical handling, and assistance for individuals with dementia.

Architecture and surveying for the overlay of historic buildings, for planned construction projects and for the analysis of terrain characteristics.

Tourism and entertainment providing a location-aware trans-media experience, such as a location-aware tourist information guide or virtually located live performances.

Virtual devices overlaying a moving image onto a plain billboard with an internet-sourced feed, replacing expensive and fragile electronic billboards.

Games providing a multi-user interactive experience with sensor- and marker-based AR event triggers.

Advertising with the ability to customise content depending on individual context (especially relevant for targeting advertisements within a trans-media experience to individuals).

The invention is not limited to the embodiments described herein, which may be modified or varied without departing from the scope of the invention.

In some embodiments of the invention, there is provided an augmented reality system in which an augmented reality server communicates with a plurality of client devices such as smart phones with cameras. The client device detects information concerning the context in which the client device is, transmits that to the server and receives augmented reality data which renders an augmented reality scene to a user. The user can interact with the augmented reality scene and interaction data is transmitted to the server. When that user or another user is in the same or a related context and the augmented reality server transmits augmented reality data, that data depends on the interaction of the first user previously.