Title:
A SYSTEM FOR GENERATING VIRTUAL THREE DIMENSIONAL PROJECTIONS
Document Type and Number:
WIPO Patent Application WO/2005/107274
Kind Code:
A1
Abstract:
A system and method for generating virtual three dimensional projections in head mounted displays. Sensing means sense and recognise identification insignia present on physical objects. Processing means then generate three dimensional projections and display them at corresponding head mounted displays in such a way that a projection appears to be positioned on top of the corresponding object. The system can handle input from two or more sensing means and output to two or more corresponding displays, thereby allowing two or more users to use the system at the same time and providing individual three dimensional projections for each user. The system is suitable for introducing augmented reality into traditional board games.

Inventors:
NIELSEN BJOERN WINTHER (DK)
KRISTENSEN SUNE (DK)
ANDERSEN TROELS LANGE (DK)
Application Number:
PCT/DK2005/000246
Publication Date:
November 10, 2005
Filing Date:
April 12, 2005
Assignee:
AUGMENTED MEDIA APS (DK)
NIELSEN BJOERN WINTHER (DK)
KRISTENSEN SUNE (DK)
ANDERSEN TROELS LANGE (DK)
International Classes:
A63F13/02; A63F13/12; H04N13/00; (IPC1-7): H04N13/00
Domestic Patent References:
WO 2003/034397 A1, 2003-04-24
Other References:
BILLINGHURST M ET AL: "Mixing realities in Shared Space: an augmented reality interface for collaborative computing", MULTIMEDIA AND EXPO, 2000. ICME 2000. 2000 IEEE INTERNATIONAL CONFERENCE ON NEW YORK, NY, USA 30 JULY-2 AUG. 2000, PISCATAWAY, NJ, USA,IEEE, US, vol. 3, 30 July 2000 (2000-07-30), pages 1641 - 1644, XP010512823, ISBN: 0-7803-6536-4
PATRICK SINCLAIR, KIRK MARTINEZ, DAVID E. MILLARD, MARK J. WEAL: "Links in the palm of your hand: tangible hypermedia using augmented reality", PROCEEDINGS OF THE THIRTEENTH ACM CONFERENCE ON HYPERTEXT AND HYPERMEDIA, 2002, College Park, Maryland, USA, pages 127 - 136, XP002338480
BROLL W ET AL ASSOCIATION FOR COMPUTING MACHINERY: "THE VIRTUAL ROUND TABLE - A COLLABORATIVE AUGMENTED MULTI-USER ENVIRONMENT", PROCEEDINGS OF THE 3RD. INTERNATIONAL CONFERENCE ON COLLABORATIVE VIRTUAL ENVIRONMENTS. CVE 2000. SAN FRANCISCO, CA, SEPT. 10 - 12, 2000, PROCEEDINGS OF THE INTERNATIONAL CONFERENCE ON COLLABORATIVE VIRTUAL ENVIRONMENTS, NEW YORK, NY : ACM, US, vol. CONF. 3, 10 September 2000 (2000-09-10), pages 39 - 45, XP001075664, ISBN: 1-58113-303-0
CHRISTIANE ULBRICHT AND DIETER SCHMALSTIEG: "Tangible Augmented Reality for Computer Games", VIIP CONFERENCE PROCEEDINGS, IASTED, September 2003 (2003-09-01), ACTA Press, pages 950 - 954, XP002338481
REITMAYR G ET AL: "Mobile collaborative augmented reality", AUGMENTED REALITY, 2001. PROCEEDINGS. IEEE AND ACM INTERNATIONAL SYMPOSIUM ON NEW YORK, NY, USA 29-30 OCT. 2001, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 29 October 2001 (2001-10-29), pages 114 - 123, XP010568054, ISBN: 0-7695-1375-1
SCHMALSTIEG D ET AL: "THE STUDIERSTUBE AUGMENTED REALITY PROJECT", PRESENCE, CAMBRIDGE, MA, US, vol. 11, no. 1, 1 February 2002 (2002-02-01), pages 33 - 54, XP008011893, ISSN: 1054-7460
Attorney, Agent or Firm:
Inspicos A/S (P.O. Box 45, Hørsholm, DK)
Claims:
CLAIMS
1. A system for generating virtual three dimensional projections in head mounted display means, the system comprising: a plurality of movable physical objects, each object belonging to a subgroup of objects, and each object being provided with an identification insignia identifying to which subgroup each object belongs, first and second sensing means adapted to sense the identification insignia on the objects and to produce an output signal in accordance with a sensed identification insignia, the first sensing means being located in a first position, and the second sensing means being located in a second position, the first and second positions being at different spatial locations, the head mounted display means comprising first display means adapted to display a projection generated on the basis of a signal produced by the first sensing means, the first display means being located in a third position, the head mounted display means comprising second display means adapted to display a projection generated on the basis of a signal produced by the second sensing means, the second display means being located at a fourth position, the third and fourth positions being at different spatial locations, processing means adapted to process signals received from the first/second sensing means in accordance with the sensed identification insignia, thereby generating a three dimensional projection, the processing means further being adapted to display said projection on the first/second display means in such a way that said three dimensional projection, to a user wearing the head mounted display means, appears to be positioned on top of the corresponding object.
2. A system according to claim 1, wherein the first and second sensing means are further adapted to sense a relative position between two objects, wherein the processing means is further adapted to generate at least one three dimensional animated sequence when two objects are positioned at a predefined relative position, and wherein the first and second display means are adapted to display said three dimensional animated sequence(s) in such a way that it/they appear(s) to be positioned on top of the corresponding objects.
3. A system according to claim 2, wherein the predefined relative position between two objects is defined by a distance between said objects being smaller than a predefined distance.
4. A system according to claim 2 or 3, wherein each object comprises a set of attributes which is modifiable in response to an input supplied to the processing means.
5. A system according to claim 4, wherein the input supplied to the processing means is generated on the basis of one or more three dimensional animated sequences generated by the processing means.
6. A system according to claim 5, wherein at least one of the three dimensional animated sequences is in the form of a battle between the two objects positioned at the predefined relative position, and wherein the input is generated on the basis of an outcome of said battle.
7. A system according to any of claims 4-6, further comprising storage means for storing a modified set of attributes, the modified set of attributes thereby replacing a previous set of attributes for a later purpose.
8. A system according to any of the preceding claims, wherein the head mounted display means comprises a pair of goggles.
9. A system according to claim 8, wherein said pair(s) of goggles is/are transparent.
10. A system according to any of the preceding claims, wherein the first position is at least substantially spatially coinciding with the third position and/or the second position is at least substantially spatially coinciding with the fourth position.
11. A system according to claim 10, wherein the first sensing means is mounted on or forms part of the first display means, and/or the second sensing means is mounted on or forms part of the second display means.
12. A system according to any of the preceding claims, further comprising one or more additional sensing means and one or more additional corresponding display means.
13. A system according to any of the preceding claims, further comprising a delimited area within which the objects are positioned.
14. A system according to any of the preceding claims, wherein the number of subgroups is smaller than the number of objects.
15. A system according to any of the preceding claims, wherein the identification insignia comprise visible marks, and wherein the first and/or second sensing means comprise(s) optical sensing means.
16. A system according to any of the preceding claims, wherein the identification insignia comprise Radio Frequency Identification (RFID) tags, and wherein the first and/or second sensing means comprise(s) a radio frequency receiver.
17. A system according to any of the preceding claims, wherein the processing means is further adapted to generate one or more additional outputs in response to signals received from the first/second sensing means.
18. A system according to claim 17, wherein the one or more additional outputs comprise an audible output.
19. A system according to claim 17 or 18, wherein the one or more additional outputs comprise a visible output.
20. A system according to claim 19, wherein the one or more additional outputs comprise a streamed video sequence.
21. A system according to claim 19 or 20, wherein the one or more additional outputs comprise a textual output to be displayed on the respective display means.
22. A system according to any of claims 19-21, wherein the one or more additional outputs comprise a graphical output to be displayed on the respective display means.
23. A system according to any of the preceding claims, said system being or forming part of a board game.
24. A method for generating virtual three dimensional projections in head mounted display means, the method comprising the steps of: providing a plurality of movable physical objects, each object belonging to a subgroup, and each object being provided with an identification insignia identifying to which subgroup each object belongs, sensing the identification insignia of at least one object by means of first sensing means, and producing a corresponding first output signal, sensing the identification insignia of at least one object by means of second sensing means, and producing a corresponding second output signal, processing the first and second output signals in accordance with the sensed identification insignia, thereby generating first and second three dimensional projections, and displaying the first three dimensional projection at a first display means of the head mounted display means, and displaying the second three dimensional projection at a second display means of the head mounted display means.
25. A method according to claim 24, further comprising the steps of: sensing a relative position between two objects by means of the first and/or second sensing means, in case the two objects are positioned at a predefined relative position, generating at least one three dimensional animated sequence, displaying the three dimensional animated sequence(s) at the first and/or second display means.
26. A method according to claim 25, wherein each object comprises a modifiable set of attributes, the method further comprising the steps of: generating an input signal, and modifying the set of attributes of an object in response to the input signal.
27. A method according to claim 26, wherein the step of generating an input signal is performed on the basis of a three dimensional animated sequence.
28. A method according to claim 26 or 27, further comprising the step of storing the modified set of attributes, the modified set of attributes thereby replacing a previous set of attributes for a later purpose.
29. A method according to any of claims 24-28, further comprising the step of generating one or more additional outputs in response to signals produced by the first/second sensing means.
30. A method according to claim 29, wherein the step of generating one or more additional outputs comprises generating an audible output.
31. A method according to claim 29 or 30, wherein the step of generating one or more additional outputs comprises generating a visible output.
32. A method according to claim 31, wherein the step of generating one or more additional outputs comprises generating and streaming a video sequence.
33. A method according to claim 31 or 32, wherein the step of generating one or more additional outputs comprises generating a textual output, the method further comprising the step of displaying said textual output at the respective display means.
34. A method according to any of claims 31-33, wherein the step of generating one or more additional outputs comprises generating a graphical output, the method further comprising the step of displaying said graphical output at the respective display means.
35. A computer program for performing the steps of the method according to any of claims 24-34 when running on a computer device.
Description:
A SYSTEM FOR GENERATING VIRTUAL THREE DIMENSIONAL PROJECTIONS

FIELD OF THE INVENTION

The present invention relates to a system for generating virtual three dimensional projections in a head mounted display. More particularly the present invention relates to a system where virtual three dimensional projections may be displayed in such a way that they appear to be positioned on top of a physical object.

BACKGROUND OF THE INVENTION

Systems for generating virtual three dimensional projections are known in the prior art. For example, a book has been produced in which the text is supplemented with a recognisable mark. When the relevant page of the book is turned, a sensor recognises the mark and sends corresponding information to a processor. The processor then generates a three dimensional projection on the basis of the information, and the generated projection is displayed to a user. Thereby the book, to the user, appears to have three dimensional illustrations. See Billinghurst, Mark et al. (2001): "The MagicBook - Moving Seamlessly between Reality and Virtuality", Computer Graphics and Applications, 21(3), pp. 2-4.

Furthermore, US 6,761,634 discloses an interactive game including a playing surface defining playing positions, a display operable to display a two dimensional visual game image on the playing surface, a sensor operable to optically detect a playing piece placed on the playing surface, and a game controller connected to the sensor and the display. The game controller creates the game image on the display and alters the game image in response to a location or orientation of the playing piece on the playing surface. The game does not include a head mounted display.

None of the prior art systems described above are suitable for providing three dimensional projections individually for two or more users at the same time, in particular when the two or more users are positioned at spatially separated positions.

SUMMARY OF THE INVENTION

Thus, it is an object of the present invention to provide a system for generating virtual three dimensional projections, where the system allows two or more users to use the system simultaneously, while taking the spatial position of each user into consideration. It is a further object of the present invention to provide a system for generating virtual three dimensional projections in such a way that the projections to a user appear to form part of physical items or a two dimensional image.

It is an even further object of the present invention to provide a system for generating virtual three dimensional projections, where the system is suitable for use in a physical board game.

According to a first aspect of the present invention the above and other objects are fulfilled by providing a system for generating virtual three dimensional projections in head mounted display means, the system comprising:

- a plurality of movable physical objects, each object belonging to a subgroup of objects, and each object being provided with an identification insignia identifying to which subgroup each object belongs,

- first and second sensing means adapted to sense the identification insignia on the objects and to produce an output signal in accordance with a sensed identification insignia, the first sensing means being located in a first position, and the second sensing means being located in a second position, the first and second positions being at different spatial locations,

- the head mounted display means comprising first display means adapted to display a projection generated on the basis of a signal produced by the first sensing means, the first display means being located in a third position,

- the head mounted display means comprising second display means adapted to display a projection generated on the basis of a signal produced by the second sensing means, the second display means being located at a fourth position, the third and fourth positions being at different spatial locations,

- processing means adapted to process signals received from the first/second sensing means in accordance with the sensed identification insignia, thereby generating a three dimensional projection, the processing means further being adapted to display said projection on the first/second display means in such a way that said three dimensional projection, to a user wearing the head mounted display means, appears to be positioned on top of the corresponding object.

According to a second aspect of the present invention the above and other objects are fulfilled by providing a method for generating virtual three dimensional projections in head mounted display means, the method comprising the steps of:

- providing a plurality of movable physical objects, each object belonging to a subgroup, and each object being provided with an identification insignia identifying to which subgroup each object belongs,

- sensing the identification insignia of at least one object by means of first sensing means, and producing a corresponding first output signal,

- sensing the identification insignia of at least one object by means of second sensing means, and producing a corresponding second output signal,

- processing the first and second output signals in accordance with the sensed identification insignia, thereby generating first and second three dimensional projections, and

- displaying the first three dimensional projection at a first display means of the head mounted display means, and

- displaying the second three dimensional projection at a second display means of the head mounted display means.

It should be noted that a person skilled in the art would readily recognise that any feature described in connection with the first aspect of the invention may also be combined with the second aspect of the invention, and vice versa.

The method according to the second aspect of the invention may suitably be wholly or partly performed using a computer program running on a computer device, such as a personal computer (PC), a game device or console, a personal digital assistant (PDA), a cellular phone, or a video camera. The computer program may run on a single one of the devices mentioned above. Alternatively, it may run on two or more interconnected devices belonging to one or more of the groups defined above and forming a computer network.

In the present context the term 'virtual three dimensional projection' should be interpreted as an animation which is generated and displayed by the processing means, i.e. it is not a physical object. Furthermore, the projection appears as a three dimensional object to a person viewing the first or second display means, i.e. wearing at least part of the head mounted display means.

The system comprises a plurality of movable physical objects. Contrary to the virtual three dimensional projections, these movable physical objects are actual objects which may be physically moved. The objects may, e.g., be game pieces or other kinds of objects which it may be desirable to move relatively to each other and/or relatively to the first and/or second sensing means. Each object belongs to a subgroup of objects. In case the objects are game pieces for a board game, the subgroups may represent various types of game pieces having various properties. Since the identification insignia identify to which subgroup each object belongs, the first/second sensing means is able to establish a connection between a specific object and the relevant subgroup. This will be described in further detail below.

The first and second sensing means are located at different spatial locations. Thus, the first sensing means and the second sensing means will typically regard the objects from different points of view, e.g. from different angles, distances, etc., the outputs produced by the first sensing means and the second sensing means reflecting this difference in spatial location.

The first and second display means are also located at different spatial locations. These locations could advantageously be the locations of two different users, each wearing a head mounted device comprising the first or second display means. The output produced by the first sensing means is processed by the processing means, and the resulting three dimensional projection is displayed at the first display means. Similarly, the three dimensional projection resulting from the processing of the output produced by the second sensing means is displayed at the second display means. Thus, the sensing means and the display means are coupled 'in pairs', thereby providing individual three dimensional projections to each display means.

The processing means may be or form part of a computer device, such as a personal computer (PC), a laptop computer, a personal digital assistant (PDA), etc. Alternatively it may be or form part of a game controller or console, a cellular phone, a video camera, or any other suitable kind of device having sufficient processing power.

When the processing means receives signals from the first and/or second sensing means it processes the signals and generates three dimensional projections. Since the signals produced by the sensing means are in accordance with sensed identification insignia, the resulting three dimensional projections will also be in accordance with the sensed identification insignia. This is preferably done in the following way. When a sensing means senses an identification insignia of an object it 'recognises' the identification insignia and produces an output signal in accordance with the sensed and recognised identification insignia. Since the identification insignia identifies to which subgroup the object belongs, the output signal also indicates this. The output signal is subsequently received by the processing means which thereby 'knows' to which subgroup the object in question belongs. Based on this knowledge the processing means is able to generate a three dimensional projection which is in accordance with the subgroup. This processing may be performed solely on the basis of the received signal, but the processing means may further rely on additional information about the subgroups. Such information may be stored in a storage means, e.g. in the form of a database, to which the processing means has access. Once the processing means 'knows' to which subgroup the object belongs, it may retrieve the needed information in the storage means and subsequently generate the three dimensional projection. The stored information may simply comprise directions regarding generation of a three dimensional projection corresponding to the identified subgroup. However, it may additionally comprise information which is specific for that particular object, e.g. dynamical properties which may be modified over time, the modifications being stored in the storage means. This will be described further below. In case the objects are game pieces and the subgroups represent various types of game pieces, the three dimensional projection will typically reflect the type of game piece, thereby visually indicating specific properties of the game piece in question to a user. Furthermore, the subgroups may represent individual features of the game pieces, such as additional strength or skills acquired through interaction with other game pieces. This will be described in further detail below.
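
By way of illustration only, the look-up described above might be sketched as follows. All names (SUBGROUP_DATABASE, Projection, generate_projection) and the example data are assumptions made for the sketch and are not part of the application.

```python
# Minimal sketch, assuming a simple in-memory 'storage means': the sensed
# identification insignia yields a subgroup, and the subgroup entry holds the
# directions for generating the projection. All names and values are illustrative.

from dataclasses import dataclass


@dataclass
class Projection:
    """A virtual three dimensional projection to be shown on a display means."""
    model_name: str   # which 3D model to render
    attributes: dict  # dynamic properties (strength, skills, ...)


# Storage means: static directions per subgroup plus per-object dynamic attributes.
SUBGROUP_DATABASE = {
    "knight": {"model": "knight.obj", "defaults": {"strength": 5}},
    "archer": {"model": "archer.obj", "defaults": {"strength": 3}},
}
OBJECT_ATTRIBUTES = {}  # object_id -> modified set of attributes, if any


def generate_projection(object_id, subgroup):
    """Generate a projection in accordance with the sensed identification insignia."""
    entry = SUBGROUP_DATABASE[subgroup]
    # Use stored, possibly modified attributes if present, otherwise the defaults.
    attributes = OBJECT_ATTRIBUTES.get(object_id, dict(entry["defaults"]))
    return Projection(model_name=entry["model"], attributes=attributes)


print(generate_projection("piece-07", "knight"))
```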

Furthermore, the processing means and the sensing means are able to keep track of the position of the object in such a way that the three dimensional projection, when displayed at the relevant display means, to a user wearing the head mounted display means, will appear to be positioned on top of the corresponding object. Thereby the resulting three dimensional projection will appear as a part of the physical surroundings.
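
As a purely illustrative sketch of this 'on top of the object' positioning, the following assumes a pinhole camera model in which the pose of the sensed object relative to the camera (and hence the goggles) is known from the sensing means; the intrinsic parameters and pose values below are placeholders, not values taken from the application.

```python
# Sketch only: project a point slightly above a sensed object into display
# coordinates using a pinhole model. K (camera intrinsics), R and t (object pose
# relative to the camera) are placeholder values, assumed to come from tracking.

import numpy as np

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])


def project_to_display(point_object, R, t):
    """Map a 3D point given in object coordinates to pixel coordinates."""
    p_cam = R @ np.asarray(point_object, dtype=float) + t  # object -> camera frame
    u, v, w = K @ p_cam                                    # camera -> image plane
    return u / w, v / w


R = np.eye(3)                  # placeholder orientation from marker tracking
t = np.array([0.0, 0.0, 0.5])  # object 0.5 m in front of the camera
# Anchor the projection 5 cm above the centre of the object (image y axis points down).
print(project_to_display([0.0, -0.05, 0.0], R, t))
```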

Thus, the system is adapted to handle inputs from at least two sensing means located at different spatial locations, to process these inputs in order to produce corresponding three dimensional projections, and to display these projections at corresponding display means, also positioned at different spatial locations. This opens the possibility of providing individual projections to two or more users of the system, the projections reflecting the spatial locations of the corresponding sensing means. Thereby the system is very suitable for use in a physical board game with two or more players, each player being associated with a sensing means and a corresponding display means. Each player therefore receives individual three dimensional projections generated on the basis of the output signal produced by the corresponding sensing means. If a sensing means is located at least approximately in the same location as the associated player, the three dimensional projections may reflect the view which the player has of the physical part of the board game, and it is therefore possible to generate the three dimensional projections in such a way that they appear to form part of the physical board game, e.g. in terms of position, size, orientation, etc., thereby introducing augmented reality in the board game. This will be the case for all the players since they all receive individual three dimensional projections. This may also be useful in applications other than physical board games. For example, when presenting three dimensional models to two or more viewers positioned around a physical model which is modified by means of the three dimensional projections, it may be useful to be able to present the models individually to each viewer. Introducing augmented reality in these situations results in more exciting applications. In the case of board games, the animated figurines normally present in computer games can in this manner be combined with the interactivity and social facets of an ordinary board game. The board game is thereby provided with extra excitement and illustration without losing the social side of the traditional board game.

The first and second sensing means may further be adapted to sense a relative position between two objects, and the processing means may further be adapted to generate at least one three dimensional animated sequence when two objects are positioned at a predefined relative position. In this case the first and second display means may be adapted to display said three dimensional animated sequence(s) in such a way that it/they appear(s) to be positioned on top of the corresponding objects.

In case the system is or forms part of a board game and the objects are game pieces, the predefined relative position is preferably one in which the rules of the game define that the two game pieces should interact. This interaction is illustrated by means of the three dimensional animated sequence, and to the players it appears that the interaction between the game pieces takes place at the positions of the relevant game pieces. Furthermore, the game pieces are preferably moved into the predefined relative position by one or more players physically moving one or more game pieces. Thereby the physical presence of the players in the game, which is characteristic of traditional board games, is maintained.

The predefined relative position between two objects may be defined by a distance between said objects being smaller than a predefined distance. It may, e.g. be when two objects abut each other or when they are positioned in such a way that they are at least partly overlapping. It may alternatively be when two objects are moved within a certain radius of each other. Alternatively, the predefined relative position may be defined by a distance between the objects exceeding a predefined distance, a mutual angular position of the two objects, a mutual orientation of the two objects and/or any other suitable kind of relative position of the objects. In case the system is or forms part of a board game, one or more of the objects may be adapted to indicate which of the players currently has the move, i.e. whose turn it is. In one application where the board game has two players, this function may be provided by a single object having one marker associated with one player printed on a first side of the object, and another marker associated with the other player printed on an opposite side. The system may thereby be able to detect which player has the move, simply by detecting the marker on this object. It is thereby indicated which objects should be regarded as 'attackers' and which objects should be regarded as 'defenders'. When a player has finished his/her move, the object is simply inverted, thereby revealing the other marker.
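
A minimal sketch of the simplest of these criteria, a distance threshold, could look as follows; the threshold value and the function names are assumptions made for the example.

```python
# Sketch of the 'predefined relative position' test: two objects are taken to
# interact when the distance between their sensed positions falls below a
# threshold. The threshold value is an arbitrary placeholder.

import math

INTERACTION_DISTANCE = 0.10  # metres


def within_interaction_range(pos_a, pos_b, limit=INTERACTION_DISTANCE):
    """Return True when two sensed object positions are closer than the limit."""
    return math.dist(pos_a, pos_b) < limit


print(within_interaction_range((0.00, 0.00), (0.06, 0.03)))  # True
print(within_interaction_range((0.00, 0.00), (0.30, 0.10)))  # False
```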

Each object may comprise a set of attributes which is modifiable in response to an input supplied to the processing means. The set of attributes may e.g. comprise attributes relating to characteristics of the specific object. In case the objects are game pieces the set of attributes may, thus, comprise information relating to strength, skills, accessories, etc. of a game piece which the object represents. Such attributes may be modified over time. The input supplied to the processing means may be a manual input, such as a user keying in new values for one or more attributes. Alternatively or additionally, the input may be supplied by means of additional objects which, when positioned at or near another object, provide it with additional attributes. In case the system is or forms part of a board game and the objects are game pieces, such additional objects may, e.g., indicate whether the game piece is attacking or defending, select a specific weapon and/or attack or defence strategy, etc. This may have an impact on the outcome of a subsequent battle between two game pieces. The additional objects will, in this case, typically not be actual game pieces themselves.

Alternatively, the input supplied to the processing means may be generated on the basis of a three dimensional animated sequence generated by the processing means. In this case, when two objects are moved to a predefined relative position, the processing means generates a three dimensional animated sequence. The appearance of the sequence may, e.g., depend on the subgroups of the two objects, the relative position, the absolute position of the objects, etc. When the animated sequence has been generated, an input is generated and supplied to the set of attributes which is subsequently modified in accordance with the input, i.e. in accordance with the three dimensional animated sequence. Thus, the input may be in the form of an exchange of knowledge and/or information between two objects. In this way, during an interaction as described above, an object may 'learn' certain things from another object or from the interaction itself.

The three dimensional animated sequence(s) may be in the form of a battle between the two objects positioned at the predefined relative position, and the input may be generated on the basis of an outcome of said battle. In this case the attributes may advantageously represent various characteristics of the objects, such as strength, skills, accessories such as weapons, etc. The outcome of the battle may result in one or both of the objects losing, acquiring or improving any of these characteristics. The sets of attributes of the objects should therefore be modified accordingly for the purpose of future battles. Such a modification of the sets of attributes may influence the appearance of future three dimensional projections generated in accordance with the objects in question, and/or it may influence the outcome in future battles where one of the objects is involved. The system may be capable of keeping track of who has been hit by whom, where and by which weapon.

Thus, the system may further comprise storage means for storing a modified set of attributes, the modified set of attributes thereby replacing a previous set of attributes for a later purpose. As described above, this may be useful in case the system is or forms part of a board game. However, it may also be useful in other situations, such as when architects brain storm about a new building, and where previous sketches and knowledge from previous projects may be used in creating new proposals.
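
The interplay between a battle outcome, the modified attributes and the storage means might be sketched as below. The concrete battle rule (the stronger piece wins and gains strength) and all names are assumptions made only for the example; the application does not prescribe any particular rule or file format.

```python
# Sketch of modifiable attributes, an input generated from a battle outcome, and
# a storage means that replaces the previous set of attributes. The battle rule
# and all names are assumptions made for illustration.

import json
from dataclasses import dataclass, asdict


@dataclass
class PieceAttributes:
    strength: int
    skills: list


def resolve_battle(attacker, defender):
    """Generate an input on the basis of the outcome of a battle (assumed rule)."""
    if attacker.strength >= defender.strength:
        attacker.strength += 1                             # winner improves
        defender.strength = max(0, defender.strength - 1)  # loser is weakened
        return "attacker"
    defender.strength += 1
    attacker.strength = max(0, attacker.strength - 1)
    return "defender"


def store_attributes(path, pieces):
    """Storage means: the modified sets of attributes replace the previous ones."""
    with open(path, "w") as f:
        json.dump({name: asdict(p) for name, p in pieces.items()}, f)


pieces = {"knight-1": PieceAttributes(5, ["charge"]),
          "archer-2": PieceAttributes(3, ["volley"])}
print(resolve_battle(pieces["knight-1"], pieces["archer-2"]), pieces)
store_attributes("attributes.json", pieces)
```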

Furthermore, inputs as described above, in particular input supplied by additional objects, may be used for changing the course of a game, e.g. the rules, rather than for modifying the attributes.

The head mounted display means may comprise a pair of goggles, preferably a pair of transparent goggles. In this case a user wearing the goggles will be able to see the physical parts of the system, including the objects. Thus, a user wearing the transparent goggles will see the physical part of the system as well as the virtual three dimensional projections corresponding to the objects which are visible to the user. These projections will appear to be positioned on top of the corresponding objects. Thus, the user experiences an augmented reality. The head mounted display preferably comprises a pair of goggles for each display means. In this case each user can wear a pair of goggles and thereby receive an individual visual input corresponding to the associated sensing means. The goggles may in this case be or form part of the first/second display means.

Alternatively, the head mounted display may comprise one or more non-transparent head mounted devices, such as a virtual reality (VR) helmet. In this case a two dimensional image illustrating the physical parts of the system is preferably displayed in the helmet, and the three dimensional projections are projected on this two dimensional image.

The first position may be at least substantially spatially coinciding with the third position and/or the second position may be at least substantially spatially coinciding with the fourth position. In this case the first sensing means is located approximately at the position of the first display means and/or the second sensing means is located approximately at the position of the second display means. Thereby the 'view' of a sensing means will at least approximately coincide with the 'view' of the corresponding display means. This is particularly advantageous if the head mounted device comprises transparent goggles, because it facilitates the process of positioning the three dimensional projections in such a way that they appear to be on top of the corresponding physical objects which are directly visible to the user.

Thus, the first sensing means may be mounted on or form part of the first display means, and/or the second sensing means may be mounted on or form part of the second display means.

The system may further comprise one or more additional sensing means and one or more additional corresponding display means. In this embodiment additional 'pairs' of sensing means/display means are present in the system, thereby allowing three or more users to use the system at the same time.

The system may further comprise a delimited area within which the objects are positioned. The area may be physically and/or visually defined, e.g. in the form of a demarcation or a game board. Alternatively or additionally, the delimited area may be defined virtually, e.g. forming part of the projections displayed at the display means. It may also simply be an area within which the sensing means are capable of sensing the identification insignia of the objects, and in this case only objects being within the delimited area can have three dimensional projections appearing thereon.

The number of subgroups may be smaller than the number of objects, in which case at least one subgroup has two or more objects. Alternatively, the number of subgroups may be equal to the number of objects, in which case the subgroups uniquely define the objects. Alternatively, the number of subgroups may be larger than the number of objects, in which case not all possible subgroups will be represented by the objects.

The identification insignia may comprise visible marks, and the first and/or second sensing means may in this case comprise optical sensing means. Such optical sensing means may comprise a camera. The visible marks may be recognisable patterns present on the objects and/or specific colouring. Alternatively, the visible marks may comprise light emitting devices arranged in a specific pattern and/or emitting light having specific colours.

Alternatively or additionally the identification insignia may comprise Radio Frequency Identification (RFID) tags, and the first and/or second sensing means may comprise a radio frequency receiver. Alternatively or additionally, the identification insignia may be of any other suitable kind, as long as the sensing means is adapted to perceive the kind of signal the identification insignia 'emits', whether it is visible, audible, infrared, radio frequency and/or any other kind of signal, i.e. the sensing means must be of a kind which corresponds to the kind of identification insignia.
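
The requirement that the sensing means match the kind of identification insignia can be expressed as a common interface with interchangeable optical and radio frequency variants. In the sketch below the two reader callables are hypothetical stand-ins; no real camera or RFID library is used.

```python
# Sketch of interchangeable sensing means behind one interface. The two reader
# callables passed in are hypothetical stand-ins for a pattern recognition
# routine and an RFID receiver driver, respectively.

from abc import ABC, abstractmethod


class SensingMeans(ABC):
    @abstractmethod
    def sense(self):
        """Return the identification insignia currently perceived."""


class OpticalSensingMeans(SensingMeans):
    def __init__(self, detect_markers):
        self._detect_markers = detect_markers  # recognises visible marks in a frame

    def sense(self):
        return self._detect_markers()


class RfidSensingMeans(SensingMeans):
    def __init__(self, read_tags):
        self._read_tags = read_tags            # polls a radio frequency receiver

    def sense(self):
        return self._read_tags()


# Either variant can feed the same processing means:
sensor = OpticalSensingMeans(lambda: ["star-knight", "moon-archer"])
print(sensor.sense())
```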

The processing means may further be adapted to generate one or more additional outputs in response to signals received from the first/second sensing means. The one or more additional outputs may, e.g., comprise an audible output, a visible output, such as a streamed video sequence, a textual output, and/or a graphical output to be displayed on the respective display means. Audible outputs may comprise sounds relating to and/or supporting the three dimensional projections and/or three dimensional animated sequences. Thus, in case a three dimensional animated sequence illustrates a battle scene, this scene may advantageously be supplemented by battling sounds in accordance with the movements shown in the sequence. Streamed video sequences may be narrative sequences giving additional background information, or transitional passages, e.g. when moving from one level or scenery to another.

In a preferred embodiment the system is or forms part of a board game. As described above, the objects in this case are preferably movable game pieces, and the subgroups represent various types of game pieces, e.g. having various strength, powers, skills, etc., e.g. as known from traditional role playing.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will now be further described with reference to the accompanying drawings in which:

Fig. 1 is a front view of a pair of transparent goggles for use in a system according to an embodiment of the invention,

Fig. 2 is a view of the goggles of Fig. 1 from a reverse angle,

Fig. 3 is a diagram illustrating a system and method according to an embodiment of the invention,

Fig. 4 shows examples of movable physical objects in the form of game pieces for a board game, and Fig. 5 shows a system according to an embodiment of the invention in the form of a board game involving two players.

DETAILED DESCRIPTION OF THE DRAWINGS

Fig. 1 is a front view of a pair of transparent goggles 1 for use in a system according to an embodiment of the invention. The goggles 1 comprise transparent parts 2 allowing a user wearing the goggles 1 to see through them. The goggles 1 are further provided with a camera 3 arranged between the transparent parts 2 in a position approximately corresponding to the forehead of a person wearing the goggles 1. Thereby the 'view' of the camera 3 is at least substantially identical to the view of the person wearing the goggles 1. In particular, the camera 3 will follow the movements of the person's head.

Fig. 2 shows the goggles 1 of Fig. 1, but from a reverse angle, i.e. the back part of the goggles 1 is visible. The shown side of the transparent parts 2 functions as displays 4. Thus, a three dimensional projection can be projected onto the displays 4, and it will thereby be visible to a person wearing the goggles 1. The camera 3 is capable of 'seeing' movable physical objects (not shown) with visible markers thereon. When it has recognised the marker on an object, the camera 3 sends an output signal to a processor (not shown) indicating which marker the camera 3 has recognised. On the basis of this signal the processor generates a three dimensional projection and displays this projection at the displays 4 in such a way that the projection, to the user, appears to be positioned on top of the corresponding object. The arrangement of the camera 3 on the front part of the goggles 1 facilitates this process to a great extent.

Fig. 3 is a diagram illustrating a system and method according to an embodiment of the invention. The system comprises a number of goggles 1 having a camera 3 arranged thereon, preferably of the kind shown in Figs. 1 and 2. In the Figure three pairs of goggles 1 are visible, but it should be noted that fewer or more goggles 1 could be present in the system. The number of goggles 1 determines the number of users which can use the system at the same time.

The camera 3 of one of the goggles 1 'sees' an object 5. The Figure illustrates that one camera 3 'sees' the object 5, but it should be understood that more, and even all, of the cameras 3 could 'see' this object 5 or other objects present in the system. When a camera 3 has captured an object 5 it sends a signal to a pattern recognition unit 6 which is capable of determining which pattern is present on the captured object 5, i.e. to which subgroup the object 5 belongs. There is a 1:1 correspondence between the patterns on the objects 5 and files 7 containing digital representations of the physical objects 5. In the Figure two different objects 5 are illustrated, but it should be understood that additional types of objects 5 will typically be present in the system. The files 7 comprise or have access to information 8 relating to the patterns. The pattern recognition unit 6 has access to the files 7, and on the basis of this, it is capable of recognising a given pattern, thereby determining to which subgroup the object 5 in question belongs. The pattern recognition unit 6 sends this information to a processing unit 9 which processes the information and generates a three dimensional projection. This projection is sent to the goggles 1 which initially 'saw' the object 5 and is displayed there. As a result the person wearing the goggles 1 will see the physical object 5 and the three dimensional projection on top of the object 5.

The processing unit 9 may further be capable of analysing internal relations between the objects 5, such as relative positions of the objects 5, based upon the information received from the pattern recognition unit 6. On the basis of this analysis the processing unit 9 updates the information 8 relating to the patterns.
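
Read as a pipeline, the flow of Fig. 3 (camera, pattern recognition unit 6, files 7, processing unit 9, goggles 1) might be sketched as follows. The sketch reduces a captured camera frame to a pattern name, and all data and names are placeholders introduced for illustration.

```python
# Sketch of the Fig. 3 pipeline with simplified stand-ins for each unit. A real
# system would work on camera frames; here a frame is reduced to a pattern name.

PATTERN_FILES = {  # 1:1 correspondence between patterns and files 7
    "star-knight": {"model": "knight.obj", "info": {"strength": 5}},
    "moon-archer": {"model": "archer.obj", "info": {"strength": 3}},
}


def recognise_pattern(frame):
    """Pattern recognition unit 6: map a captured frame to a known pattern."""
    return frame if frame in PATTERN_FILES else None


def process(pattern):
    """Processing unit 9: build a projection from the file belonging to the pattern."""
    entry = PATTERN_FILES[pattern]
    return {"model": entry["model"], "attributes": dict(entry["info"])}


def display_on(goggles_id, projection):
    """Send the projection back to the goggles whose camera saw the object."""
    print(f"goggles {goggles_id}: render {projection}")


pattern = recognise_pattern("star-knight")  # seen by camera 3 of one pair of goggles
if pattern is not None:
    display_on(1, process(pattern))
```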

Fig. 4 shows various examples of movable physical objects 5 in the form of game pieces for a board game. Each object 5 is provided with a pattern indicating which kind of game piece the object 5 is, and to which player the game piece belongs. Thus, all game pieces belonging to a specific player are marked with a star, and all game pieces belonging to another player are marked with a crescent moon. Furthermore, each of the objects 5 has a figure printed thereon indicating the kind of game piece, and thereby specific characteristics of the object 5, e.g. in terms of skills, strength, etc.

The markers on the objects 5 are visible. This allows the players to readily determine which kind of game piece each object 5 is. Furthermore, it allows the camera 3 positioned at the goggles 1 worn by the players to capture and recognise the markers, thereby initiating the process described above in connection with Fig. 3.

Fig. 5 shows a system according to an embodiment of the invention in the form of a board game involving two players 10, the Figure illustrating what the players 10 see. Each player 10 is wearing a pair of goggles 1, e.g. of the kind shown in Figs. 1 and 2, and the goggles 1 are connected to a processing unit 9 as described above. The players 10 are situated around a table 11, upon which a game board 12 is positioned. The players 10 place their respective game pieces (not shown) on the game board 12. When the cameras (not shown) arranged on the goggles 1 capture the markers on the game pieces, the processing means generates corresponding three dimensional projections 13 which are displayed at the goggles 1 worn by the players 10, in such a way that the projections 13 appear to be positioned on top of the corresponding game pieces as illustrated. When the game pieces from the two players 10 are moved into the vicinity of each other, optionally when they are positioned in the same field of the game board 12, a battle may take place between the game pieces. This battle scene will appear as a three dimensional animated sequence which may also illustrate which of the game pieces wins the battle. This may lead to modifications in attributes associated with the game pieces involved, and the processing means may update its information relating to the game pieces accordingly.