
Patent Searching and Data


Title:
CAPTURING, STORING AND INDIVIDUALIZING IMAGES
Document Type and Number:
WIPO Patent Application WO/2010/006387
Kind Code:
A2
Abstract:
The present invention relates to capturing, storing and individualizing images. At different locations, a plurality of sub-sequences of images of an individual are captured and stored. The sub-sequences are combined into a sequence on the basis of particular track information, used at each location to identify which sub-sequences relate to the individual. Each sequence is labelled with a particular identifier in order to create the possibility of combining the sequences from different locations into a single video. Virtual images may be added to the stored sequences.

Inventors:
LORREZ WESLEY (BE)
Application Number:
PCT/BE2009/000040
Publication Date:
January 21, 2010
Filing Date:
July 16, 2009
Assignee:
VISIONWARE BVBA (BE)
LORREZ WESLEY (BE)
International Classes:
G06F17/30; G06K9/00
Domestic Patent References:
WO2008127598A12008-10-23
WO2004072897A22004-08-26
WO2007036838A12007-04-05
Foreign References:
EP1978524A22008-10-08
Other References:
BRUNELLI R ET AL: "FACE RECOGNITION: FEATURES VERSUS TEMPLATES" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, no. 10, 1 October 1993 (1993-10-01), pages 1042-1052, XP000403523 ISSN: 0162-8828
PANTIC M ET AL: "Automatic analysis of facial expressions: The state of the art" IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 22, no. 12, 1 December 2000 (2000-12-01), pages 1424-1445, XP002276537 ISSN: 0162-8828
Attorney, Agent or Firm:
MARCHAU, Michel (Oostkamp, BE)
Claims:

1. A method for producing a video product regarding an individual, the video product being based on at least one video sequence, each video sequence comprising one or more sub-sequences of images, the method comprising:

- taking a series of images of the individual and converting the images into first video signals, the series of images constituting a sub-sequence;

- processing the first video signals by at least one biometric video processor in order to determine a first biometric template corresponding to a face of the individual, the face having a first expression;

- deriving a first biometric ID corresponding to said first biometric template;

- labelling the video sequence with the derived first biometric ID;

- storing the labelled video sequence in a storing means.

2. The method for producing a video product regarding an individual according to claim 1, whereby the processing comprises:

- selecting from said series of images at least a first group of images based on certain criteria;

- determining for each of the images of the at least first group of images a biometric sub-template corresponding to the face of the individual and obtaining in this way a plurality of biometric sub-templates;

- deriving from this plurality of biometric sub-templates the determined first biometric template.

3. The method for producing a video product regarding an individual according to any one of claims 1 or 2, the method further comprising:

- repeating the steps of claim 1 and claim 2 whereby

- a second group of images is selected from said series of images or is selected from another series of images, whereby

- a second biometric template corresponding to a face of the individual is determined, the face having a second expression; whereby

- a second biometric ID is derived, corresponding to said second biometric template; whereby

- the corresponding video sequence is labelled with the derived second biometric ID and whereby

- the labelled corresponding video sequence is stored in a storing means.

4. The method for producing a video product regarding an individual according to any one of claims 1 to 3, further comprising: storing a mutual link between the first biometric ID and the second biometric ID whereby video sequences labelled with the first biometric ID can be retrieved by using the second biometric ID and vice versa.

5. The method for producing a video product regarding an individual according to claim 1, the method further comprising:

- taking one or more images of the individual and converting the one or more images into second video signals;

- deriving the biometric ID of the individual from said second video signals and

- retrieving from the storing means the labelled video sequences, labelled with the derived biometric ID of the individual.

6. The method according to any of the previous claims, whereby video sequences are analyzed and given a score, based on that analysis.

7. The method of claim 6, whereby video sequences are selected on the basis of the given score.

8. The method according to any of the previous claims, whereby sequences are further biometrically processed on the basis of facial expression or on the basis of attitude of the individual.

9. The method according to any of the previous claims, whereby virtual content is selected and inserted in the sequences based on the results of the biometric processing.

10. A video product, obtained by applying the method of any previous claim.

11. A computer software product, which, when loaded on a computer, makes the computer execute the method according to any of the claims 1 to 9.

12. A system for producing a video product on an individual moving around in one or more locations, the system comprising:

- at least one video capturing and recording system (100, 200), the at least one video capturing and recording system comprising:

- at least one camera, taking images of the individual and converting the images into first video signals;

- a biometric video processor for processing the first video signals in order to determine a first biometric template corresponding to a face of the individual, the face having a first expression;

- a circuit for deriving a first biometric ID corresponding to said first biometric template;

- a circuit for labelling the first video signals with the derived first biometric ID;

- a storing system for storing the labelled first video signals.

13. The system for producing a video product according to claim 12, the system further comprising:

- one or more cameras for taking one or more images of the individual and converting the one or more images into second video signals;

- a biometric video processor for deriving the biometric ID of the individual from said second video signals and

- a selector for retrieving from the storing means the labelled video sequences, labelled with the derived biometric ID.

14. The system according to any of claims 12 or 13, comprising:

- an analyzer for analyzing the sequences of the first video signals and
- a circuit for giving a score to the sequences of the first video signals, based on that analysis.

15. The system according to claim 14, comprising:

- a circuit for selecting sequences of the first video signals based on the given score.

16. The system according to any of claims 12 to 15, comprising:

- a biometric processor for processing the sequences of the first video signals on the basis of facial expression or on the basis of attitude of the individual.

17. The system according to claim 16, further comprising:

- means for selecting virtual content on the basis of the results of the biometric processing

- means for inserting the selected virtual content in the sequences of the first video signals.

Description:
Visionware

CAPTURING, STORING AND INDIVIDUALIZING IMAGES

Technical field of the invention

The present invention relates generally to the field of capturing images and personalizing the captured images. More specifically, the present invention relates to the field of image capturing in places like entertainment or holiday parks, theme parks, places for particular games, etc., but also venues and parties where many people come together and where there may be a demand for making images and image sequences, whereby the captured images and image sequences are to be linked to one or more individuals. The present invention also relates to adding images of any kind, real or virtual, to a given image sequence. The present invention is particularly, but not exclusively, useful for all kinds of image capturing applications in public places and in other places visited by a larger number of people.

Background of the invention

Visitors to entertainment and holiday parks, bowling alleys, fairs, parties, etc. often want a materialized souvenir of their visit, like photos, films, etc. But it is not always easy to make photos or films when the visitor himself is participating in a particular attraction or game or when he is involved in a conversation with another person. The visitor then has to find a third person willing to help him, but this person is often not familiar with the visitor's photographing or filming apparatus. Asking professionals to take photos or shoot films is also a rather expensive solution. There have been a number of proposals for other solutions to this problem, e.g. by installing in the attraction area a number of automatic cameras shooting films of the visitors. In order to make the necessary link between a given film sequence and one or more individual visitors, it has already been proposed to provide each visitor with an identification card or an RFID card. The visitor can introduce his identification card into a reader, or the card may be detected, e.g. by an RFID reader installed in the neighbourhood of a particular attraction. One or more cameras, positioned close to that attraction, start shooting one or more film sequences of the visitors enjoying the attraction. The film sequences are provided with an identification signal corresponding to the information on the identification card, so that film sequences from the same attraction or from different attractions can be linked with a particular visitor. The images of one or more film sequences with the same identification may then be joined together into a single video, which can be obtained by the visitor concerned. Such a system is disclosed in e.g. US 5655053. Other or similar prior-art systems providing visitors of an entertainment park with videos can be found.

Document WO 2008/127598 (YOURDAY INC) discloses a system for capturing and managing personalized video images in order to create video products for customers visiting a theme park. Portions of the recorded video output of a camera are associated with one or more customer identifiers, the identifiers being identified from RFID devices carried by the customers.

Document US 5655053 (RICHARD L. RENIE) discloses a personalized video system of an individual consumer as shot at an amusement park having different attractions. The system includes cameras for generating digital video signals of the consumer at several locations on the attraction, which are marked by an identification processor with information identifying the particular attractions with the particular consumer and stored in memory. The identifying information and information as to which of several attractions have been frequented by an individual are input by means of a card to a controller. Upon receipt of signals indicating the consumer wishes to receive a completed video, the controller generates command signals for a video assembler to create the final video product, inserting the personalized video signals as appropriate in a standard, pre-shot film.

Document WO 2004/072897 (CENTERFRAME LLC.) discloses a method for collecting photos of a patron in an entertainment venue using facial recognition of the patron's face within the photos. In order to enhance the reliability of the facial recognition system, information about the patron that is not directly related to most facial recognition systems, including clothes, height, other associated people, use of glasses and jewelry, disposition of facial hair, and more, can be used. Some of the characteristics used can be specific to a particular date or event and will not be more generally characteristic of the patron. The facial recognition system can also be used to identify the patron requesting photos to be collected.

However, such systems present a number of inconveniences. From the entry into the park, the visitor has to show his interest, one way or another, in a video or photos of his visit, and he has to ask for an identification or an RFID card, or an initial photo of the visitor is made. The visitor has to present his identification card at each attraction of which he wants a video sequence or, in the case of an RFID card, the card is read automatically. The identification or RFID card can be lost, or the visitor can forget to introduce the card into a card reader. In such systems, moreover, the effect of surprise is excluded because the visitor knows that a film of his visit will be made.

A first possibility for improving the system consists in automatically filming all visitors at different locations of the park and putting the different film sequences containing images of a particular visitor together by using facial recognition of that visitor. Using such a technique makes it possible to obtain a video of the visit to the park relating to a particular visitor. Such a system, however, presents several drawbacks. Generally, an image is still made at the entrance, which will be used as a general reference for retrieving image sequences made of the visitor during his visit. When shooting the image sequences during the visit itself, the visitor does not always have his face directed towards the camera, or the general geometry of his face may be changing due to the visitor's emotions at the moment the image is taken. Therefore, using facial recognition in this way may not always be very reliable, and interesting image sequences may be lost. On top of that, persons visiting an entertainment park or attending a party very often show a change in their facial geometry because they are enjoying an event. Generally, different video sequences relating to the same person are taken at different locations and/or at different times, and these different video sequences are to be put together. However, when there is an important difference in the facial expression of the person concerned (at a given moment he is laughing, at another moment he is surprised, at still another moment he is discussing seriously), linking the different video sequences together may be problematic. As a consequence, it will be difficult to link some video sequences relating to the same person, and interesting video sequences may not be retrieved when assembling the final video product.

Further, it may also be desirable to make use of computer vision algorithms, e.g. for inserting other images within the video sequences taken during the visit of an entertainment park or of another venue. Indeed, there is an increasing need to provide such videos with stronger personal characteristics and also with virtual images, the content of which may be influenced by the principal character or characters of the video. Such virtual images may relate to the themes of the visited park or to certain typical figures of the park. Very generally, there is a need to create the possibility for persons or other living beings, and even objects, to influence, mostly without their knowledge, virtual images which will be inserted into a video containing images of these persons, living beings or objects.

Summary of the invention

It is an object of the present invention to provide a method and a system wherein the drawbacks and inconveniences mentioned above are eliminated and the objectives and needs mentioned above are met. It is also an object of the present invention to provide a method and a system for producing a video by using more than one identification of an individual, one identification being used for the whole video and a special identification being used at the different locations of capturing parts of the video. It is also an object of the present invention to provide a method and a system whereby a person or another living being or an object can influence or determine the content of the images which are added to a video of that person, living being or object.

The above objective is accomplished by a method and system according to the present invention.

The present invention provides a method for producing a video product regarding an individual. The video product is based on at least one video sequence and each video sequence comprises one or more sub-sequences of images. The method comprises taking a series of images of the individual and converting the images into first video signals, the series of images constituting a sub-sequence, processing the first video signals by at least one biometric video processor in order to determine a first biometric template corresponding to a face of the individual, the face having a first expression, deriving a first biometric ID corresponding to said first biometric template, labelling the video sequence with the derived first biometric ID and storing the labelled video sequence in a storing means.
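By way of a purely illustrative sketch, the pipeline described above (frames, template, ID, label, store) could be organized as follows. All names here (`SequenceStore`, `process_sub_sequence`, the hash-based ID derivation) are hypothetical stand-ins; the invention does not prescribe any particular data structures or matching algorithm.

```python
from dataclasses import dataclass, field

@dataclass
class SequenceStore:
    """Toy in-memory 'storing means': maps biometric IDs to labelled sequences."""
    sequences: dict = field(default_factory=dict)

    def store(self, biometric_id, video_sequence):
        self.sequences.setdefault(biometric_id, []).append(video_sequence)

def derive_biometric_id(template):
    # Stand-in for a real biometric matcher: here the ID is simply a
    # stable hash of the template's feature values (illustrative only).
    return hash(tuple(template))

def process_sub_sequence(frames, extract_template, store):
    """Pipeline of claim 1: frames -> template -> ID -> label -> store."""
    template = extract_template(frames)           # biometric video processor
    biometric_id = derive_biometric_id(template)  # derive the biometric ID
    store.store(biometric_id, frames)             # label and store the sequence
    return biometric_id
```

In practice `extract_template` would wrap a face-analysis component; here any callable returning a feature vector suffices.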

The processing may comprise selecting from said series of images at least a first group of images based on certain criteria, determining for each of the images of the at least first group of images a biometric sub-template corresponding to the face of the individual and obtaining in this way a plurality of biometric sub-templates and deriving from this plurality of biometric sub-templates the determined first biometric template.
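A minimal sketch of this sub-template step follows. The patent leaves the combination rule open; element-wise averaging is used here only as one plausible choice, and both function names are hypothetical.

```python
def select_group(frames, criterion):
    """Select a group of images 'based on certain criteria' (any predicate)."""
    return [f for f in frames if criterion(f)]

def derive_template(sub_templates):
    """Combine per-frame biometric sub-templates into one biometric template.

    Averaging the feature vectors is only one plausible combination rule;
    the described method does not fix how the template is derived.
    """
    n = len(sub_templates)
    length = len(sub_templates[0])
    return [sum(t[i] for t in sub_templates) / n for i in range(length)]
```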

The method may further comprise repeating the steps of the method above whereby a second group of images is selected from said series of images or is selected from another series of images, whereby a second biometric template corresponding to a face of the individual is determined, the face having a second expression; whereby a second biometric ID is derived, corresponding to said second biometric template; whereby the corresponding video sequence is labelled with the derived second biometric ID and whereby the labelled corresponding video sequence is stored in a storing means.

The method may further comprise storing a mutual link between the first biometric ID and the second biometric ID whereby video sequences labelled with the first biometric ID can be retrieved by using the second biometric ID and vice versa.
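The mutual link can be sketched as a small grouping structure: IDs derived from different expressions of the same individual are placed in one group, so a query with either ID retrieves the sequences labelled with both. The class and function names below are illustrative only.

```python
class IDLinker:
    """Stores mutual links between biometric IDs, e.g. the 'neutral face'
    ID and the 'laughing face' ID of the same individual."""

    def __init__(self):
        self._groups = {}  # biometric ID -> set of mutually linked IDs

    def link(self, id_a, id_b):
        # Merge the groups of both IDs so retrieval works in either direction.
        group = self._groups.get(id_a, {id_a}) | self._groups.get(id_b, {id_b})
        for i in group:
            self._groups[i] = group

    def all_ids(self, any_id):
        return self._groups.get(any_id, {any_id})

def retrieve(store, linker, queried_id):
    """Retrieve all sequences labelled with the queried ID or any linked ID."""
    result = []
    for i in linker.all_ids(queried_id):
        result.extend(store.get(i, []))
    return result
```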

The method may further comprise taking one or more images of the individual and converting the one or more images into second video signals, deriving the biometric ID of the individual from said second video signals and retrieving from the storing means the labelled video sequences, labelled with the derived biometric ID of the individual.

The video sequences may be analyzed and given a score, based on that analysis.

Video sequences may be selected on the basis of the given score.
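The scoring and selection steps can be sketched as below. The scoring criteria (sharpness, face visibility) and a fixed threshold are hypothetical examples; the description does not specify how a score is computed or how selection is performed.

```python
def score_sequence(frames, sharpness, face_visible):
    """Give a sequence a score from per-frame analysis.

    Illustrative rule only: average of a sharpness measure plus a bonus
    when a face is visible in the frame.
    """
    return sum(sharpness(f) + (1.0 if face_visible(f) else 0.0)
               for f in frames) / len(frames)

def select_sequences(scored, threshold):
    """Keep only the (sequence, score) pairs whose score reaches the threshold."""
    return [seq for seq, score in scored if score >= threshold]
```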

Sequences may be further biometrically processed on the basis of facial expression or on the basis of attitude of the individual.

Virtual content may be selected and inserted in the sequences based on the results of the biometric processing.

The present invention provides also a video product, obtained by applying the method described above.

The present invention provides also a computer software product, which, when loaded on a computer, makes the computer execute the method described above.

The present invention provides further a system for producing a video product on an individual moving around in one or more locations. The system comprises at least one video capturing and recording system (100, 200). The at least one video capturing and recording system comprises at least one camera, taking images of the individual and converting the images into first video signals, a biometric video processor for processing the first video signals in order to determine a first biometric template corresponding to a face of the individual, the face having a first expression, a circuit for deriving a first biometric ID corresponding to said first biometric template, a circuit for labelling the first video signals with the derived first biometric ID and a storing system for storing the labelled first video signals.

The system may further comprise one or more cameras for taking one or more images of the individual and converting the one or more images into second video signals, a biometric video processor for deriving the biometric ID of the individual from said second video signals and a selector for retrieving from the storing means the labelled video sequences, labelled with the derived biometric ID.

The system may further comprise an analyzer for analyzing the sequences of the first video signals and a circuit for giving a score to the sequences of the first video signals, based on that analysis.

The system may further comprise a circuit for selecting sequences of the first video signals based on the given score.

The system may further comprise a biometric processor for processing the sequences of the first video signals on the basis of facial expression or on the basis of attitude of the individual.

The system may further comprise means for selecting virtual content on the basis of the results of the biometric processing and means for inserting the selected virtual content in the sequences of the first video signals.

Very generally, the present invention also provides a method for adding virtual content to images of a video sequence. The method comprises the steps of detecting particular information in the images of an individual or an object contained in the video sequence and, when such particular information is detected, adding the virtual content to the images of the video sequence.
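This detect-then-composite loop can be sketched as follows; the detector, the content chooser and the compositing operation are all supplied as callables, since the method does not tie the steps to any specific algorithm. All names are illustrative.

```python
def add_virtual_content(frames, detect, choose_content, composite):
    """Scan a sequence; whenever 'particular information' is detected in a
    frame (detect returns a non-None result), composite virtual content,
    chosen on the basis of that detection, onto the frame."""
    out = []
    for frame in frames:
        detection = detect(frame)
        if detection is not None:
            frame = composite(frame, choose_content(detection))
        out.append(frame)
    return out
```

For example, a smile detector could trigger the insertion of a park mascot only into the frames where the visitor is laughing.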

According to the method, the added virtual content is determined by the content of the image.

The particular information in the image may be constituted by a change in the expression of a face.

The particular information in the image of the individual may be constituted by a particular movement of the individual. In the case of an object, the particular information may be constituted by the way the object is seen in the image, by the form of the object or by the movement of the object.

The virtual content may comprise the image of a personage.

The method may further comprise modifying the content of the virtual content, this modifying being determined by detecting further particular information in the images contained in the video sequence.

Modifying the virtual content may comprise modifying the expression on the face of the personage.

Modifying the virtual content may comprise inducing a movement of the personage.

The present invention provides also a DVD comprising a video reportage of an event whereby the video reportage is obtained according to any of the methods above.

The present invention provides also a computer program product, directly loadable into the memory of a computer, comprising software program portions for executing any of the methods above, when said computer program product is run on a computer.

The present invention provides also a video system for adding virtual content to images of a video sequence. The video system comprises detecting means for detecting a particular information in the image of an individual or an object contained in the video sequence and adding means for, when such a particular information is detected, adding virtual content to the images of the video sequence.

The detecting means may comprise means for facial recognition.

The video system may comprise means for determining the virtual content on the basis of the detected particular information.

The video system may comprise means for modifying the virtual content based on the detection of further particular information in the images of an individual contained in the video sequence.

The present invention is not limited to the combination of the different features and characteristics described in this summary of the invention; other combinations, and even features and characteristics on their own, are also the object of the present invention.

Description of illustrative embodiments

The invention is further described by way of examples with reference to the accompanying drawings wherein

Figure 1 represents schematically examples of a video capturing and recording system, installed in the neighbourhood of an attraction in an entertainment park;

Figure 2 represents schematically a video assembling system, normally installed in the neighbourhood of the exit of an entertainment park;

Figure 3 represents schematically an example of how images, further called virtual content, may be prepared.

Figure 4 represents schematically an example of the way virtual content is selected and added to other images, e.g. the images obtained by the video capturing and recording system of Figure 1.

Figure 5 represents schematically an example of the start of the selection of a group of images from the images of a sub-sequence.

The present invention will be described with respect to particular embodiments and with reference to the drawings but the invention is not limited thereto but only by the claims. The drawings described are only schematic and are non-limiting.

Furthermore, the terms first, second, third and the like in the description and in the claims are used for distinguishing between similar elements and not necessarily for describing a sequence, either temporally, spatially, in ranking or in any other manner. It is to be understood that the terms so used are interchangeable under appropriate circumstances and that the embodiments of the invention described herein are capable of operation in other sequences than described or illustrated herein.

It is to be noticed that the term "comprising", used in the claims, should not be interpreted as being restricted to the means listed thereafter; it does not exclude other elements or steps. It is thus to be interpreted as specifying the presence of the stated features, integers, steps or components as referred to, but does not preclude the presence or addition of one or more other features, integers, steps or components, or groups thereof. Thus, the scope of the expression "a device comprising means A and B" should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B. Similarly, it is to be noticed that the term "coupled", also used in the claims, should not be interpreted as being restricted to direct connections only. The terms "coupled" and "connected", along with their derivatives, may be used, but it should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression "a device A coupled to a device B" should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

Reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may be. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.

Similarly, it should be appreciated that in the description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

Furthermore, some of the embodiments are described herein as a method or combination of steps of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or the element of a method. Furthermore, an element of an apparatus embodiment described herein is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.

In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practised without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

The invention will now be described by a detailed description of several embodiments of the invention. It is clear that other embodiments of the invention can be configured according to the knowledge of persons skilled in the art without departing from the true spirit or technical teaching of the invention, the invention being limited only by the terms of the appended claims.


Before starting the detailed description, the following definitions are given:

- a sub-sequence (of images): a film or video comprising images taken by a single camera;
- a sequence (of images): a film or video product obtained by combining the images of one or a plurality of sub-sequences; generally, a sequence relates to a particular environment (e.g. a particular location in an entertainment park); in some cases a sequence may comprise only the images of a single sub-sequence, but normally a sequence comprises images taken by different cameras;
- a video: a film or video product obtained by combining images of different sequences; in extreme cases, a video may consist of images from one sequence or even from one sub-sequence;
- virtual content: any kind of images, computer generated, cartoons or real images, which, when added to other images which normally represent the real world, create a virtual world no longer corresponding completely to reality;
- an individual: one or more human beings, e.g. a family; but other living beings, like animals, are also included;

- (automatic) identification: any kind of identifying an individual by using certain characteristics of the individual, in particular facial characteristics, with or without emotions, other physical characteristics like height, etc., or characteristics linked with the individual like clothes, position of the individual, etc.;

- track information: any kind of information relating to an individual, allowing an image processing system to decide whether a sub-sequence or sequence of images contains images relating to that individual. The track information can e.g. be based on the position of the individual in a transport means or carousel (e.g. second row in a carriage), or on other information allowing to identify the individual (e.g. a red jacket, a person with a dog), etc. Other characteristics proper to an individual may also be used as track information, e.g. skin-colour tracking. The track information can also use characteristics which could be used in the identification, but the characteristic used as track information is in principle different from the one used at that moment for the identification. For practical reasons, track information needs less processing time for the images than the identification process;

- biometric video processor: a video processor determining the biometric template corresponding to the face of an individual; the face may or may not express an emotion;

- biometric ID: an identification of an individual, based on the biometric template of that individual; a biometric template gives a set of characteristics of the face of an individual selected by the biometric filter; the biometric ID is generally represented by a given code.

These definitions are valid for the present description and also for the claims.
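The hierarchy of terms defined above (sub-sequence, sequence, video) can be illustrated by the following minimal data-model sketch. This is purely an illustrative aid and forms no part of the claimed invention; all class and field names are hypothetical:

```python
# Illustrative sketch only: a sub-sequence comes from a single camera, a
# sequence groups sub-sequences from one location under one biometric ID,
# and a video combines sequences from several locations.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SubSequence:
    camera_id: str                       # images taken by a single camera
    frames: List[bytes] = field(default_factory=list)

@dataclass
class Sequence:
    location: str                        # e.g. a particular attraction
    biometric_id: str                    # label identifying the individual
    sub_sequences: List[SubSequence] = field(default_factory=list)

@dataclass
class Video:
    sequences: List[Sequence] = field(default_factory=list)

    def frame_count(self) -> int:
        return sum(len(ss.frames) for seq in self.sequences
                   for ss in seq.sub_sequences)
```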

As a first example, a description will be given of a complete system in which the present invention can be applied. In particular, this example, illustrated in the drawings, relates to an attraction park. It is clear, however, that the invention is not limited to such an application; as already said above, other applications are also encompassed by the invention.

Normally, in an attraction park, a theme park, etc. there are different attractions and other places of interest (like restaurants, terraces, panoramic views, etc.), each of them being particularly suited for video shooting. All such places and attractions are called locations hereafter. In order to create the possibility of providing visitors of the park with a video of their visit, one or more locations in the park are provided with a video capturing and recording system (in the drawing: 100, 200, etc.), whereby the number of video capturing and recording systems may be lower or higher than the number of locations in the park. A video assembling system (500) is normally situated in the neighbourhood of the exit of the park. The video capturing and recording systems, which are situated at different locations within the park (e.g. an attraction, a game, etc.), may each have a plurality of cameras. In the drawing, each video capturing and recording system 100, 200, etc. is illustrated as one unit, the different elements of each system 100, 200, etc. being directly connected one to the other. It is clear that some elements of a video capturing and recording system may be grouped with elements of another video capturing and recording system. Very often and for practical reasons, some or all of the different elements or steps foreseen in the video capturing and recording systems (e.g. analyzing, storing, labelling), except the camera itself with its conversion of images into video data, will be centralized in a central computer system, whereby data are to be transmitted between the decentralized video capturing and recording systems and the central computer system. Also for practical reasons, the transmission of data between the systems at the different locations and the central computer system will be done on a wireless basis. Some of the shown elements may also be used in common by different video capturing and recording systems, e.g. the same biometric filter may be used in multiplex by a plurality of video capturing and recording systems.

A more detailed description is now given of an exemplary video capturing and recording system, e.g. system 100, which is installed around a location. The sub-sequence of images/video signals provided by one camera is sent to a combiner (111) combining the sub-sequences from different cameras into a single video sequence. Within one sub-sequence of images, a group of images of a visitor is selected on the basis of certain criteria (see Fig. 5) and a biometric video processor (121) determines for each of these images its biometric pattern, called biometric sub-template and representing some basic characteristics of the face of the visitor. Using a suited algorithm (averaging, optimizing, etc.) the biometric template of the face of the visitor is determined. It is to be noted that the biometric template corresponds to the face of the visitor expressing some emotion (e.g. the visitor is laughing). Experience has shown that a group of some five images, selected within a short time span of the sub-sequence, may be sufficient to obtain a biometric template of very reliable quality. However, in order to obtain this reliable quality, the selection of images has to be based on certain criteria.

Figure 5 illustrates one way of doing such a selection. According to this example, the face to be processed should be turned to the camera (the face has to be in a plane perpendicular to the line connecting the center of the face with the center of the camera) and the light conditions should be sufficient. The angle between the mentioned line and the plane of the face is resolved along two perpendicular axes: a vertical one and a horizontal one. In the step "detect frontal position", the angle of the plane of the face around the vertical axis is measured and when this angle is beyond a tolerance region of X° and Y°, the image is not retained. The same is done in the step "detect rotation", where the angle around the horizontal axis is measured and images wherein the plane of the face is beyond the tolerance area of A° and B° are eliminated. In the next step, images with low illumination are eliminated. As said above, the number of selected images is then limited to a given number (not shown in Figure 5), e.g. five, and these five images are then further analyzed or processed in order to determine the biometric pattern or sub-template of the face contained in each of the selected images.
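The selection criteria of Figure 5 can be sketched as follows. This is a non-authoritative illustration only: the tolerance values, field names and the brightness scale are assumptions, standing in for the X°/Y° and A°/B° regions and the illumination check of the figure:

```python
# Hypothetical sketch of Figure 5: keep an image only if the face is close
# enough to frontal (yaw and pitch within tolerances) and sufficiently
# illuminated, then cap the result at a given number of images.
def select_images(candidates, yaw_tol=15.0, pitch_tol=10.0,
                  min_brightness=0.3, max_images=5):
    """candidates: list of dicts with 'yaw' and 'pitch' (degrees) and
    'brightness' (0..1) estimated per image; all thresholds are assumed."""
    kept = []
    for img in candidates:
        if abs(img["yaw"]) > yaw_tol:           # "detect frontal position"
            continue
        if abs(img["pitch"]) > pitch_tol:       # "detect rotation"
            continue
        if img["brightness"] < min_brightness:  # low-illumination check
            continue
        kept.append(img)
        if len(kept) == max_images:             # e.g. five images suffice
            break
    return kept
```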

Finally, the biometric template of the face of the visitor is determined from these five sub-templates and a biometric ID is derived corresponding to this determined biometric template. The sub-sequence may then be labelled with this biometric ID.
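One possible realization of this step, given only as a sketch and not as the claimed algorithm, is to average the five sub-templates and derive a code from the result. The template representation (a vector of numbers), the quantization step and the hashing are all assumptions:

```python
# Illustrative sketch: combine sub-templates by averaging (one of the
# "suited algorithms" mentioned above) and derive a biometric ID code.
import hashlib

def combine_subtemplates(subtemplates):
    """Each sub-template is assumed to be a list of floats of equal length."""
    n = len(subtemplates)
    dim = len(subtemplates[0])
    return [sum(st[i] for st in subtemplates) / n for i in range(dim)]

def biometric_id(template, precision=2):
    # Quantize before hashing so near-identical templates map to one code.
    quantized = tuple(round(x, precision) for x in template)
    return hashlib.sha1(repr(quantized).encode()).hexdigest()[:12]
```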

At the same time, track information is derived from the images in the sub-sequence. This track information may also be based on facial recognition techniques (face tracking, colour tracking, motion tracking) or it may use other techniques (e.g. tracking of particular clothes or other details like glasses, etc., tracking the size or the position within an attraction). The track information is used to track a visitor within the different sub-sequences of a video sequence but may also be used to direct one or more cameras to the corresponding visitor. The track information concerning a particular visitor is linked with the biometric ID of the visitor so that the video (sub-)sequences comprising the track information of a specific visitor can be labelled with the corresponding biometric ID of that visitor. In certain applications, particular track information may not be needed because the camera is in a fixed relative position to the visitor, e.g. when the camera is installed on the carriage wherein the visitor is sitting. In these applications, the video sequence can be directly labelled with the biometric ID.

Because, due to different emotions, the face of the visitor is changing, another biometric ID may be derived for the same visitor at a different moment. It may be important to link both (or even more) biometric ID's to the same visitor. There may be simple ways to link different biometric ID's of the same person. It is known that visitors of a roller coaster have a smile on their face when the carriage is going up, while their face is expressing fear when the carriage is going down. When the camera is fixed on the carriage and thus directed to the same visitor(s), deriving a first biometric ID during the going-up phase and a second during the going-down phase will deliver two different biometric ID's of the same visitor, and making the link between the two is direct. Using tracking information is also a possibility for linking different ID's of the same visitor.
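The bookkeeping of such links can be sketched with a union-find structure, so that any chain of pairwise links (going-up/going-down, tracking information, etc.) groups all ID's of one visitor together. This is an illustrative implementation choice, not part of the application:

```python
# Sketch: group biometric ID's that are known to belong to the same
# visitor. Linking events (e.g. two ID's derived on the same carriage)
# are assumed to be supplied by the capturing systems.
class IdLinker:
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        self.parent[self._find(id_a)] = self._find(id_b)

    def same_visitor(self, id_a, id_b):
        return self._find(id_a) == self._find(id_b)
```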

In the assembling station (figure 2), one or more images are made of the visitor and his biometric ID is derived. This ID allows retrieving the video (sub-)sequences labelled with the same ID and stored in the storing means. Using the link between ID's belonging to the same visitor, it is also possible to get the (sub-)sequences which were labelled with another ID of the visitor.

A specific application is now described at the location of a splash ride attraction in an entertainment park. One or more of the cameras (110) are so situated that they supervise the embarkation station of the attraction and the other cameras (120) are situated along the track of the splash ride. Normally, the cameras (110) at the embarkation station take one or more images of a visitor V and it will be possible to deduce from these images a biometric ID and track information. In the case of the splash ride, the track information may be e.g. the position of the visitor V in a boat, e.g. the second row of the third boat. During the boat ride, sub-sequences of images are taken by the cameras (120). The track information can be used to control the different cameras (120) so that they follow the visitor V during the attraction. The track information may also be used for temporarily labelling the sub-sequences, taken at the splash ride attraction and containing images of visitor V, with a code representing the track information. The sub-sequences relating to visitor V are put together in a single sequence relating to the splash ride attraction and are labelled (122) with the biometric ID. The sequence, together with the biometric ID, is then stored. Information identifying the visited attraction may also be stored on the same video sequence. It is to be noted that the image processing relating to the track information and the labelling with the track information or the biometric ID may be done centrally, in real time, or it may be postponed to a more suited moment in time. It is also possible to make an analysis of the images of this video sequence, linked with this attraction. Such an analysis may be directed to the quality of the images, e.g. images showing the visitor looking straight into a camera, eliminating images of poor quality or selecting images showing a particular behaviour of the visitor, e.g. for detecting the images where visitor V shows strong emotions, fear or cheer, or to any other analysis criteria. On the basis of this analysis, a score (124) may be given to certain images, indicating the quality of the image or the suitability of the images to be further processed, etc. This analysis is preferably performed centrally, when a number of video sequences are available.
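The scoring step (124) described above can be sketched as a simple weighted scorer. The criteria and weights below are pure assumptions chosen to mirror the text (frontal view, illumination, strength of emotion), not a disclosed formula:

```python
# Hypothetical sketch of score (124): reward frontal, well-lit images and
# images showing strong emotion, then rank a sequence's images by score.
def score_image(img):
    """img: dict with 'frontal' (bool), 'brightness' (0..1) and
    'emotion_strength' (0..1); the weights are illustrative assumptions."""
    score = 0.0
    if img.get("frontal"):
        score += 0.4
    score += 0.3 * img.get("brightness", 0.0)
    score += 0.3 * img.get("emotion_strength", 0.0)
    return score

def rank_images(images):
    return sorted(images, key=score_image, reverse=True)
```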

The same process is repeated at other locations of the attraction park, whereby possibly other track information is used, e.g. the colour of the jacket of visitor V, the position of visitor V in another attraction, the height of the visitor, etc. In this way, a number of sequences of visitor V are obtained. At each location the identification of visitor V is repeated so that all sequences are labelled with an identification corresponding to the first or a further identification (ID) of visitor V. Each of the sequences consists of one or more sub-sequences, obtained on the basis of corresponding track information.

At the end of the visit to the attraction park, a number of video sequences containing images of visitor V are available. In the neighbourhood of the exit of the attraction park or in any other suitable place, a video assembling system (500) is installed. It is to be understood that parts of this video assembling system may also be realized in a central computer system. The video sequences are sent to the video assembling system using well-known techniques, e.g. data transmission. The video assembling system comprises a camera or cameras (510) for making one or more images of visitor V. The system further comprises a circuit for face detection and detection of face features (551) in order to deduce from the image(s) taken by camera 510 again the biometric ID of visitor V. It is also possible to provide at this moment the deduction of more than one biometric ID of the visitor (e.g. the visitor will laugh when he sees one or more sub-sequences displayed on a screen) so that again a link can be made between different ID's of the same visitor. On the basis of this biometric ID or ID's, the stored sequences labelled with this identification(s) are retrieved (553). A selection or ranking of the sequences may be done on the basis of the score given to the different video (sub-)sequences.
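The retrieval and ranking step (553) can be sketched as follows, assuming a flat store of labelled sequences; the storage layout and field names are hypothetical:

```python
# Sketch of step 553: given the biometric ID's recognized at the assembling
# station (including any linked ID's of the same visitor), fetch the
# matching stored sequences and rank them by their analysis score.
def retrieve_sequences(store, visitor_ids):
    """store: list of dicts {'biometric_id', 'location', 'score'};
    visitor_ids: all ID's known to belong to this visitor."""
    wanted = set(visitor_ids)
    hits = [seq for seq in store if seq["biometric_id"] in wanted]
    return sorted(hits, key=lambda s: s["score"], reverse=True)
```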

The video assembling system may also comprise an editing circuit, in which the images of the sequences may be adapted or improved, e.g. taking away a shadow on a face or brightening the teeth of a person, applying a sepia filtering, etc.

It is to be noted that in this scheme no initial registration is done and all images taken at the different locations may be used in the final assembling, depending on the quality and the content of these images. This is another important advantage over the prior art.

An analogous technique of biometric video processing, based on facial recognition techniques or on more general attitude, may further be used for the insertion of virtual images, but now by selecting images containing a particular facial expression of an emotion: scare, surprise, pleasure, or by selecting images showing the visitor throwing his arms in the air, etc. (steps 555 and 556). Finally, the selected images are put together in a single video.

If it is the intention to select and add virtual content to the real images, obtained by one or more video capturing and recording systems, this virtual content is to be prepared. A general method for preparing such virtual content is shown in Figure 3 and comprises the following steps:

- in a first step (601), images to be used in the virtual content are recorded, designed or otherwise developed (e.g. a pirate handling a sword, recording a pirate in different poses);

- in a further step (602), particular information conditions are assigned to the different images of the virtual content or to the different sequences of images of the virtual content, i.e. conditions under which a given image of the virtual content will be selected (e.g. the pirate handling a sword will be selected when a face with a scared expression is detected in a real image);

- in a final step, placement rules are assigned to the virtual content images (603) i.e. rules defining how and where the virtual content images are to be placed in real images (e.g. if there is space available at the right or left of a face in that other image, put the pirate at the right, respectively at the left of that face; if a chin is detected, put the tip of the sword of the pirate on the chin of the face).
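The preparation steps 601-603 can be sketched as a table of virtual-content items, each carrying a selection condition and a placement rule. The pirate example comes from the text above; the second item, all field names and the condition vocabulary are illustrative assumptions:

```python
# Illustrative encoding of steps 601-603: each virtual-content item has a
# selection condition (step 602) and a placement rule (step 603).
VIRTUAL_CONTENT = [
    {"name": "pirate_with_sword",
     "condition": "scared",          # selected on a scared expression
     "placement": "beside_face"},    # e.g. left or right of the face
    {"name": "cheering_parrot",      # hypothetical second item
     "condition": "joy",
     "placement": "above_head"},
]

def select_content(detected_expression):
    """Return the items whose condition matches the detected expression."""
    return [item for item in VIRTUAL_CONTENT
            if item["condition"] == detected_expression]
```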

In order to add virtual content to one or more video sequences, the real images are analyzed by a biometric video processor (555, 701), and images are selected containing a particular facial expression or a particular attitude (602), generally called particular information. Examples of such facial expressions and attitudes are: how is the visitor reacting to a particular attraction or to different events (e.g. is he laughing, is he cheering, is he throwing his arms in the air, etc.), how is the visitor reacting to his environment (e.g. is he shaking hands, etc.). On the basis of this analysis, virtual content may be selected and added to the real images (556). Figure 4 illustrates an exemplary method of how such a selection and such an addition can be done. The method may contain the following steps:

- biometric video processing or analyzing the video sequence for particular information in the images of the visitor V (555, 701) e.g. is the visitor laughing, is he crying, is he throwing his arms in the air, etc.;

- selecting virtual content (702) on the basis of the detected particular information and the corresponding conditions assigned during the preparation of the virtual content;

- adding and placing the virtual content in the video sequence (703) on the basis of the placement rules assigned to the virtual content images.
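The three steps above can be chained into one pass over a video sequence, as in the following sketch. The expression detector, the selection step and the compositing step are stubbed out as callables, since the description leaves them to known techniques; nothing here is a disclosed implementation:

```python
# Sketch of Figure 4: analyze each frame (555/701), select matching
# virtual content (702) and place it in the frame (703).
def add_virtual_content(frames, detect_expression, select_content, place):
    """frames: iterable of images; the three callables stand in for the
    biometric video processor, the selection conditions and the placement
    rules respectively (all assumptions)."""
    out = []
    for frame in frames:
        expression = detect_expression(frame)      # step 555/701
        for item in select_content(expression):    # step 702
            frame = place(frame, item)             # step 703
        out.append(frame)
    return out
```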

The video may then be made available in any suitable form, e.g. a DVD (557), or may be sent via the internet to an email address.

The added virtual content may relate to certain themes of the park or to the visited attractions. In the case of the splash ride, the virtual image may represent e.g. Sinbad the Sailor, sitting or standing on the boat behind visitor V. For other attractions, other virtual personages may be inserted in the video sequence, e.g. Cinderella, Mickey, etc. The system may also comprise the necessary software so that these virtual personages can react to different attitudes or actions of the visitor V in the video sequence: e.g. they may laugh when the visitor is laughing, throw their arms in the air when the visitor is throwing his arms in the air, start a feigned attack when the face of the visitor expresses fear, walk with the visitor when he starts walking, etc. After being processed in this way, the different video sequences are mounted into a single video, whereby this video may also contain further general video fragments of the whole park or detailed video fragments of one or more attractions. In this way a personalized video is obtained, containing images of the visitor on one or more attractions and interacting with one or more virtual personages or other living beings like animals.

Another example of an application of the present invention can be found in a video registration of the attendance at a particular party. Images of a particular individual may be taken at different locations of the party in order to get a number of sequences, which may then be combined into a video. The video sequences of the real party may be mixed with virtual images, whereby the effect can be obtained of participants of the party discussing with a virtual personage like Laurel or Hardy, whereby the attitude and/or action of the virtual personage is influenced by the attitude and/or action of the individual or of one or more of the participants of the party.

A further example of an application of the present invention is in the area of active sport, e.g. diving. Diving schools often provide the divers with a video of their diving experience. Such a video can be processed in order to include in the images Neptune, the god of the sea, with his trident, with as result a virtual discussion between the diver and Neptune. Or a virtual meeting between the diver and one or more mermaids can be arranged. Also in this application, the diver may influence, often without realizing it, the kind of virtual image inserted in the video (Neptune or a mermaid) and the behaviour or reaction of the person in the virtual image.

In the examples given above, virtual content is added to images of human beings. However, the same process of adding virtual content to a video sequence may also be used for adding virtual content to real images of an animal and, very generally, the present invention covers also the adding of virtual content to real images of objects. In the case of objects, particular information may be constituted by the way the object is seen in the image (e.g. when the image shows a top view of a table, a virtual tap-dancer may be added for dancing on the table) or by the movement of the object (e.g. Cinderella in a swing or in a carousel).

It is to be understood that although preferred embodiments, specific constructions and configurations, have been discussed herein for devices applying to the present invention, various changes or modifications in form and detail may be made without departing from the scope and spirit of this invention. Depending on the application, the order of the different steps may be changed and certain steps may be omitted.

It is also to be understood that each time the term "circuit" is used, this does not mean that the invention is limited to embodiments in which video processing is done by electronic circuitry. As is well known by the person skilled in the art, video processing can be performed by one or more circuits but also by a computer running the necessary software, and such embodiments are also part of the present invention. It is thus also to be understood that by the word "means", when used in the description and in the claims, is meant one or more circuits or a computer or processor, driven by the necessary software.