

Title:
AN INTERACTIVE ADAPTIVE MEDIA SYSTEM AND A METHOD FOR ENHANCING INDIVIDUALIZED MEDIA EXPERIENCES
Document Type and Number:
WIPO Patent Application WO/2023/169640
Kind Code:
A1
Abstract:
The present invention relates to an interactive adaptive media system and method for interacting subconsciously with a media system (1) for passive entertainment. It is an object of the invention to provide an interactive adaptive media system and method for individualized media experiences that adapts media content to a user's psychophysical arousal level, based on real-time biometrical data information. The present invention addresses this by providing a computer implemented method for interacting with a media system, wherein the media system comprises at least one media device for transmitting information and at least one sensor for retrieving biometric information, when placing said at least one sensor on or at a user, such that the biometric information retrieved is at least one biometric information related to the user, and generating a biometric baseline profile from said baseline information, and storing said biometric baseline profile in a profile storage medium, and transmitting at least one frame sequence from said media device, such that the frame sequence transmitted is received by said user, and measuring a current biometric information related to the user using the sensor, such that the current biometric information measured is related to said frame sequence received by the user, and comparing and/or analysing said current biometric information relative to said baseline information in the biometric baseline profile using an analysis unit having analysing means, generating an updated baseline information in the biometric baseline profile, such that said updated baseline information is related to the current biometric information, wherein said updated baseline information is stored in said biometric baseline profile in said profile storage medium, and generating a further frame sequence based on said updated baseline information in said biometric baseline profile, wherein said further frame sequence is transmitted from the media device to the user.

Inventors:
GODSK ROZALIA (DK)
GODSK ANDERS (DK)
Application Number:
PCT/DK2023/050038
Publication Date:
September 14, 2023
Filing Date:
March 07, 2023
Assignee:
RG 2022 HOLDING APS (DK)
International Classes:
H04N21/442; H04N21/422; H04N21/8541
Foreign References:
US 2008/0161109 A1 (2008-07-03)
US 2017/0188876 A1 (2017-07-06)
US 2016/0381415 A1 (2016-12-29)
US 7,386,784 B2 (2008-06-10)
US 9,749,590 B2 (2017-08-29)
US 2016/0077547 A1 (2016-03-17)
Attorney, Agent or Firm:
AWA DENMARK A/S (DK)
Claims:
CLAIMS

1. Computer implemented method for interacting subconsciously with a media system (1) for passive entertainment, wherein the media system (1) comprises one media device for transmitting media content information, and at least one sensor for retrieving biometric information, wherein the method comprises the following acts:

a) placing said at least one sensor (5) on or at a user (4), such that the biometric information (Z) retrieved is the biometric information (Z) related to the user's (4) subconscious reactions,

b) generating a biometric baseline profile (c,d,e) from a baseline information, and storing said biometric baseline profile (c,d,e) in a profile storage medium (6),

c) transmitting at least one frame sequence (X) from said media device (3), such that the frame sequence (X) transmitted is received by said user (4),

d) measuring a current biometric information related to the user (4), such that the current biometric information (Z) measured is related to said frame sequence received by the user (4),

e) comparing and/or analysing said current biometric information (Z) relative to said baseline information in the biometric baseline profile (c,d,e) using an analysis unit having analysing means,

f) generating an updated baseline information in the biometric baseline profile, such that said updated baseline information (d,e) is related to the current biometric information (Z), wherein said updated baseline information (d,e) is stored in said biometric baseline profile in said profile storage medium (6),

g) generating a further frame sequence (X) based on said updated baseline information in said biometric baseline profile, wherein said further frame sequence (X) is transmitted from the media device (3) to the user (4), such that the user subconsciously interacts passively with the interactive media system in a real-life situation in a real-life environment for enhancing user stimuli during entertainment.

2. Computer implemented method according to claim 1, wherein the method comprises the further acts of repeating d) to g).

3. Computer implemented method according to claim 1 or 2, wherein said frame sequence (X) comprises at least one visual sub sequence and at least one audio sub sequence, wherein the method comprises the further acts of:

- selecting said visual sub sequence and/or said audio sub sequences based on said biometric information (Z) retrieved from the sensor (5) and/or said baseline information,

- generating said frame sequence (X) based on said visual sub sequence and/or said audio sub sequences.

4. Computer implemented method according to claim 1, 2 or 3, wherein the method comprises the further act of:

- identifying at least one response curve in the biometric information (Z) using said analysing means.

5. Computer implemented method according to claim 4, wherein the method comprises the further acts of:

- analysing said response curve relative to a pulse height value and a pulse width value, such that the response curve identifies whether the biometric information (Z) related to the user indicates an under stimulated reaction, an over stimulated reaction or a neutral reaction of the user (4).

6. Computer implemented method according to claim 4 or 5, wherein the method comprises the further acts of:

- analysing the response curve relative to a pre-peak, a pre-post, a post-peak, a peak height and/or a peak average to identify the biometric information (Z) related to the user (4),

- identifying an under stimulated reaction, an over stimulated reaction or a neutral reaction of the user (4).

7. Computer implemented method according to claim 5, wherein the response curve and/or the stimulated reaction are related to a subconscious interaction with the media system for passive entertainment.

8. A media system for passive entertainment, wherein the media system is configured to carry out the method according to claims 1-6, wherein the media system (1) comprises:

- one media device (3) configured to transmit at least one consecutive frame sequence (X),

- at least one sensor (5) configured to retrieve biometric information (Z),

- at least one profile storage medium (6),

- an analysis unit (7) having analysing means,

- a processing unit having processing means configured to generate a biometric baseline profile from at least one baseline information, wherein the biometric baseline profile is stored in said profile storage medium (6), such that a user subconsciously interacts passively with the interactive media system in a real-life environment for enhancing user stimuli (Y) during entertainment.

9. Media system according to claim 8, wherein a frame generating device (2) is configured to generate said frame sequences (X) comprising at least one visual sub sequence and at least one audio sub sequence, such that said frame sequence (X) is based on a biometric baseline profile of the user (4), wherein said frame sequence (X) is transmitted from said media device (3), wherein said media device (3) comprises a visual transmitter unit and/or audio transmitter unit.

10. Media system according to claim 9, wherein the visual transmitter unit is arranged at a distance of more than 30 cm from the sensor (5).

11. Media system according to claim 9 or 10, wherein the media device (3) is a handheld device.

12. Media system according to any one of the claims 8-11, wherein the media device (3) comprises one screen having a plane surface configured to transmit a two-dimensional image.

13. Media system according to any one of the claims 8-12, wherein the media system (1) is a cinema system.

14. A media program comprising instructions which, when the media program is executed by a media system according to claims 8-13, cause a computer unit to carry out the acts of the method of claims 1-7.

15. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the acts of the method of claims 1-7.

Description:
Title: An interactive adaptive media system and a method for enhancing individualized media experiences.

Field of the Invention

The present invention relates to an interactive adaptive media system and method.

Background of the Invention

There are two kinds of existing technologies in the field of the present invention. One of them is the universal media player and the other one is the interactive system used in cinema. U.S. Pat. No. 7,386,784 B2 describes a universal media player as software resident on computers that interprets incoming stream media and converts the media into audio and video form for outputting to a user.

The other one, U.S. Pat. No. 9,749,590 B2 describes a method and system for providing interactive cinema, by collecting digital data from members of an audience in response to content projected in a cinema, processing the data, interfacing the data using a digital content interface with a digital content server, and providing interactive content based on the data.

Besides the two US patents, there is a Danish master thesis (Interactive Digital Film, 2021) describing theoretical possibilities for subliminal interaction in digital film. The author describes a media player that can change between two film clips based on measured GSR number values. The system is only theoretical and proposes a potential developmental direction. In essence, neither the background nor the way to implement adjustments is explained. The author only suggests the use of GSR number values in relation to an undescribed, hypothetical software function. He does not explain how this hypothetical function would work. At the same time, the thesis addresses various issues and errors related to the use of psychophysical measurements to create subliminal interactions. The thesis concludes that there is a need for a different solution for how to adjust media content.

There are a few examples of research concerning the use of GSR in the context of evaluating user experiences of media productions. However, this research concerns experiments where biometric data was used to analyse an already existing media production, in other words an analysis of how users experience a media production which is already made in its full length; the data is not used for the purpose of adjusting media content. Furthermore, these experiments made use of a baseline established on external stimuli, for instance a video clip with nature, where this data was used as a baseline for the purpose of researching more than one movie commercial. The results are, however, problematic because of the way the baseline was established, combined with the nonexistence of direct user profiles. Additionally, these experiments were neither made on individual respondents nor on specific media productions; instead they involved several respondents at once who saw a series of productions one after the other, so that they could later on remember parts of the media content across media productions and fill out questionnaires. The essence of the problem is that there is no solution for how to adjust stimuli based on biometric data. One cannot use number values as they are different from person to person.

US 2016/0077547 A1 describes a system and method for enhanced training purposes using bio-signals in a virtual reality environment. A training apparatus has an input device and a wearable computing device with a bio-signal sensor and a display to provide an interactive virtual reality, VR, environment for a user. The VR is a computer simulation of physical elements for training purposes. The bio-signal sensor receives bio-signal data from the user. The user interacts with content that is presented in the VR environment. The user interactions and bio-signal data are scored with a user state score and a performance score. Feedback is given to the user based on the scores in furtherance of training. The feedback may update the VR environment and may trigger additional VR events to continue training. The user must consciously and actively be a part of the interactions in the VR environment to provide a score. This does not provide a relaxed situation for the user in a real-life environment. The user is mentally transferred to a virtual world for training purposes when putting on the training apparatus with a display to provide the illusion of a VR environment.

Object of the Invention

Consequently, it is an object of the invention to provide an interactive adaptive media system and method for individualized media experiences that adapts media content to a user's psychophysical arousal level in a real-life situation in a real-life environment while the user is not in an active mode, but physically relaxed.

Description of the Invention

The present invention addresses this by providing a computer implemented method for interacting subconsciously with a media system for passive entertainment, wherein the media system comprises one media device for transmitting media content information, and at least one sensor for retrieving biometric information, wherein the method comprises the following acts:

a) placing said at least one sensor on or at a user, such that the biometric information retrieved is at least one biometric information related to the user's subconscious biometric reactions,

b) generating a biometric baseline profile from said baseline information, and storing said biometric baseline profile in a profile storage medium,

c) transmitting at least one frame sequence from said media device, such that the frame sequence transmitted is received by said user,

d) measuring a current biometric information related to the user, such that the current biometric information measured is related to said frame sequence received by the user,

e) comparing and/or analysing said current biometric information relative to said baseline information in the biometric baseline profile using an analysis unit having analysing means,

f) generating an updated baseline information in the biometric baseline profile, such that said updated baseline information is related to the current biometric information, wherein said updated baseline information is stored in said biometric baseline profile in said profile storage medium,

g) generating a further frame sequence based on said updated baseline information in said biometric baseline profile, wherein said further frame sequence is transmitted from the media device to the user, such that the user subconsciously interacts passively with the interactive media system in a real-life situation in a real-life environment for enhancing user stimuli during entertainment.

The present invention is an interactive adaptive media system, wherein a user subconsciously interacts with the interactive adaptive media system in an everyday situation in a real-life environment. The interactive adaptive media system may include a media device, such as a media player. The media system may be a passive entertainment system for watching a movie or similar. A movie comprises a plurality of consecutive frame sequences, wherein one of the frame sequences is a starting frame sequence and another frame sequence is the end frame sequence. The frame sequences are sent one by one to the media device. The media device transmits information, such as picture and/or sound. The media system can play tailored content based on the psychophysical reactions of a user of the media system, such as for example subconscious cognitive and emotional reactions. These subconscious reactions may be measured using biometric sensors. The interactive element of the adaptive media system has to be understood as autonomous, which means that the user does not make conscious decisions of choosing A or B options from the media content, nor does he/she have to physically engage in the decision making by having to click on keyboards or touchpads; it is a process that happens automatically. This is how the present invention is different from the interactive system used in cinema, e.g. a home cinema, where the user has to interact actively with the media content. The user or users may be passively entertained in a real-life situation, such that the user interacts passively with the interactive media system in a real-life environment. The user or users may during entertainment have a substantially passive mindset.

The term “passive entertainment” shall be understood to mean that the user, in a daily situation, does not participate actively and is instead placed on autopilot. The user is a spectator in a situation, as a passive audience. Streaming movies from streaming services, listening to music from streaming services, browsing social media, and even reading books are all passive entertainment because of their lack of interaction. The user does not consciously seek to change the situation in any way. The user may for example just stare at an image device such as a screen, relax, and expect to be entertained passively. The opposite of passive entertainment is active entertainment, which involves the user's mental-physical participation. Thus, activities such as playing, singing, and training require participation.

The term biometrics shall be understood as the biometrics of a user, which is the measurement of physiological characteristics like - but not limited to - fingerprint, iris patterns, or facial features that can be used to identify an individual. The biometric information may be processed as a real-time measure of physiological reactions to an environmental stimulus in a real-life situation. The environmental stimuli may for example be provided by the real-time interactive adaptive media system.

The way the real-time interactive adaptive media system is different from the universal media player is that

- media system can adjust media content based on the user’s psychophysical reactions, and

- media system can play adjusted media content.

The universal media player may for example play images and audio based on a video file and an image sequence. The adaptive media player is based on an array of tracks with multiple separated video and audio sequences and/or subsequences. Stimuli can be activated by controlled selection of audio-visual stimuli based on sensor registrations.

Compared to a universal media player, the adaptive media player can allow the user to have individualized and tailored media experiences in a real-life environment that are based on their own preferences, primarily based on the subliminal reactions they have to the exposed media content.

The media system comprises at least one media device, which may be a media player, for transmitting information. The media system furthermore comprises at least one sensor for retrieving biometric information. The method comprises acts of retrieving, transmitting and processing data information. The media system comprises means configured to place said at least one sensor on or at a user, such that the biometric information retrieved is at least one biometric information related to the user. The biometric information may be real-time biometrical data information. The media system may be a real-time interactive adaptive media system processing the method in real time.

The media system comprises means configured to generate a biometric baseline profile from said baseline information and to store said biometric baseline profile in a profile storage medium. The media system comprises means configured to transmit at least one frame sequence from said media device, such that the frame sequence transmitted is received by said user. The media system comprises means configured to analyse, measure, and determine current biometric information related to the user using the sensor, such that the current biometric information measured is related to said frame sequence received by the user.

The media system comprises means configured to compare and/or analyse said current biometric information relative to said baseline information in the biometric baseline profile using an analysis unit having analysing means.

The media system comprises means configured to generate an updated baseline information in biometric baseline profile, such that said updated baseline information is related to the current biometric information, wherein said updated baseline information is stored in said biometric baseline profile in said profile storage medium.

The media system comprises means configured to generate a further frame sequence based on said updated baseline information in said biometric baseline profile, wherein said further frame sequence is transmitted from the media device to the user.

The media system comprises means configured to repeat the method. A current biometric information related to the user is measured using the sensor. The current biometric information is compared and/or analysed relative to said baseline information in the biometric baseline profile. An updated baseline information in the biometric baseline profile is generated. A further frame sequence based on said updated baseline information in said biometric baseline profile is generated. The further frame sequence is then transmitted from the media device to the user. These acts may be repeated a plurality of times. This will for example form a real-time interactive movie sequence.
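By way of illustration only, the following Python sketch shows one possible way to iterate the measure, compare, update and generate steps described above. The function names, the simple running-average baseline update and the clip labels are assumptions made for this sketch and are not taken from the application itself.

```python
import random

def measure_biometric():
    """Stand-in for the measuring act: one normalized biosensor reading (e.g. GSR)."""
    return random.uniform(0.0, 1.0)

def classify_reaction(current, baseline, tolerance=0.1):
    """Compare the current reading with the stored baseline information."""
    if current > baseline + tolerance:
        return "over_stimulated"        # corresponds to y1
    if current < baseline - tolerance:
        return "under_stimulated"       # corresponds to y3
    return "within_planned_range"       # corresponds to y2

profile = {"baseline": 0.5}             # simplistic biometric baseline profile
clips = {"over_stimulated": "calming_clip",
         "under_stimulated": "intense_clip",
         "within_planned_range": "main_clip"}

for step in range(5):                   # the repeated loop of acts d) to g)
    current = measure_biometric()                                    # measure
    reaction = classify_reaction(current, profile["baseline"])       # compare/analyse
    profile["baseline"] = 0.9 * profile["baseline"] + 0.1 * current  # update baseline
    next_clip = clips[reaction]                                      # generate next sequence
    print(f"step {step}: reaction={reaction}, next clip={next_clip}")
```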

This present invention presents a solution which provides a unique and individual baseline wherein the user is in a real-world environment. The interactive media system is capable of planning stimuli originating from an assumption of the user's subconscious reaction, and of registering and learning individual reactions to media stimuli. The stimuli level may be affected relative to at least one genre category and/or at least one subgenre category, such that the subsequent frame sequence to be transmitted is selected based on the user's psychophysical reaction relative to the genre and/or subgenre contents of the frame sequence. With the purpose of evaluating the user's experience, it is essential to begin with the establishment of a relevant baseline, where each user has individually measured reactions based on different types of stimuli, which also vary when it comes to context and genre. It is therefore necessary to know how a specific user normally reacts in different situations, and to take into account that the baseline can change in view of the stimuli type. This applies, for instance, based on genre, where the experience of scary content and love content can vary a lot for the same user; hence the necessity of establishing various baselines, for instance a baseline for scary content and one for love content, as well as updating and changing between baselines during the media playing.

Consequently, the present invention provides an interactive adaptive media system and method for individualized media experiences that adapts media content to a user’s psychophysical arousal level, based on real-time biometrical data information.

In an advantageous method of the invention, said frame sequence comprises at least one visual sub sequence and at least one audio sub sequence, wherein the method comprises the further acts of:

- selecting said visual sub sequence and/or said audio sub sequences based on said biometric information retrieved from the sensor and/or said baseline information,

- generating said frame sequence based on said visual sub sequence and/or said audio sub sequences.

The audio and visual subsequence can be adjusted and replaced independently of each other, which gives more flexibility and possibility for graduating media content at different scales.

The frame sequence may comprise at least one visual sub sequence and at least one audio sub sequence in each frame sequence. The method comprises acts of selecting said visual sub sequence and/or said audio sub sequence based on said biometric information retrieved from the sensor and/or said baseline information. The method comprises further acts of generating said frame sequence based on said visual sub sequence and/or said audio sub sequence. A media device may comprise both a sound and a visual component. The media device may transmit, e.g. play, slide-tape presentations, films, television programs, corporate conferencing, church services, live theatre productions, etc.
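A minimal sketch of assembling a frame sequence from independently selected visual and audio sub sequences is given below; the selection rules, the FrameSequence type and the use of the clip labels VC5 A/B and AC3/AC3 B are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class FrameSequence:
    visual: str   # identifier of the selected visual sub sequence
    audio: str    # identifier of the selected audio sub sequence

def pick_visual(arousal_delta):
    # swap in the alternative visual clip when arousal is below the aim
    return "VC5 B" if arousal_delta < 0 else "VC5 A"

def pick_audio(arousal_delta):
    # the audio layer is adjusted independently of the visual layer
    return "AC3 B" if arousal_delta < 0 else "AC3"

def generate_frame_sequence(current, baseline):
    delta = current - baseline
    return FrameSequence(visual=pick_visual(delta), audio=pick_audio(delta))

print(generate_frame_sequence(current=0.42, baseline=0.50))
```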

The advantage of making individualized and adaptive user experiences is that all users and all user experiences are based on a cognitive and emotional analysis of registered stimuli which is connected to both the biological mechanisms and social learning - in other words some of the media content would have a general impact.

Fangs and shock would, for instance, create more even experiences across users with different cultural backgrounds, while socially based jokes need a specific background understanding in order to have an optimal impact. While all users have different backgrounds and their biological sense apparatuses have different degrees of sensitivity to audio-visual stimuli, it is necessary to be able to address each user individually and tailor media content with the user in the center, to create an even immersive experience for a larger audience.

For the same reason it is necessary that the interactive adaptive media player is as flexible as possible and that images and sound can be adjusted independently, as well as the need for diversified adjustments and adding possibilities, for instance adjustment layers and augmented layers in addition to the traditional sound and image clip/sequences.

In a further advantageous method of the invention the method comprises the further act of

- identifying at least one response curve in the biometric information using said analysing means.

In a still further advantageous method of the invention, the method comprises further acts of analysing said response curve relative to a pulse height value and a pulse width value, such that the response curve identifies whether the biometric information related to the user indicates an under stimulated reaction, an over stimulated reaction or a neutral reaction of the user.
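A simplified sketch of such a classification is shown below, assuming normalized sensor readings; the threshold values and the way pulse width is counted in samples are assumptions and not specified in the application.

```python
def analyse_response_curve(samples, baseline,
                           low_height=0.05, high_height=0.30, max_width=20):
    """Classify a response curve from its pulse height and pulse width values."""
    peak_height = max(samples) - baseline                                # pulse height value
    pulse_width = sum(1 for s in samples if s > baseline + low_height)   # pulse width, in samples
    if peak_height < low_height:
        return "under stimulated"
    if peak_height > high_height or pulse_width > max_width:
        return "over stimulated"
    return "neutral"

curve = [0.50, 0.52, 0.61, 0.68, 0.63, 0.55, 0.51]   # e.g. normalized GSR readings
print(analyse_response_curve(curve, baseline=0.50))  # -> neutral
```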

In a further advantageous method of the invention, the method comprises further acts of:

- analysing the response curve relative to a pre-peak, a pre-post, a post-peak, a peak height and/or a peak average to identify the biometric information related to the user,

- identifying an under stimulated reaction, an over stimulated reaction or a neutral reaction of the user.

The biometric information retrieved is biometric information related to the user. A baseline information comprises biometric information related to the user, wherein one or more pieces of the biometric information are part of the baseline information. The baseline information may comprise further information regarding the user, which is not retrieved by a sensor. A biometric baseline profile comprises baseline information. The biometric baseline profile may be stored in a profile storage medium.

The term baseline may comprise following segments:

Baseline status and directive check - a function that checks which baseline is used at the given moment and which baseline the producer defined in his/her directive.

Baseline selection - the baseline (which is the same as the one set in the directive) is being selected.

Initial baseline - is defined in the first sequence of the media production within the timeframe set in advance by the producer.

Update of baseline - a function describing that the baseline has to be updated based on a given sequence during the media playing, as defined by the producer.

New baseline - a new baseline can be established at any given point during the media playing, for instance if there are different themes (one baseline for all the love themes, and one for all the fight themes).
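A compact Python sketch of how these baseline segments could be organised is given below; the class, its method names and the blending update rule are illustrative assumptions, not the application's own implementation.

```python
class BaselineManager:
    def __init__(self):
        self.baselines = {}        # e.g. {"love": 0.48, "fight": 0.62}
        self.active = None

    def initial_baseline(self, name, samples):
        """Initial baseline: established from the first sequence's samples."""
        self.baselines[name] = sum(samples) / len(samples)
        self.active = name

    def status_check(self, directive):
        """Baseline status and directive check: is the producer's baseline in use?"""
        return self.active == directive

    def select(self, directive):
        """Baseline selection: switch to the baseline named in the directive."""
        self.active = directive

    def update(self, samples, weight=0.2):
        """Update of baseline: blend in readings from a given sequence."""
        new = sum(samples) / len(samples)
        self.baselines[self.active] = (1 - weight) * self.baselines[self.active] + weight * new

    def new_baseline(self, name, samples):
        """New baseline: e.g. one for love themes, one for fight themes."""
        self.initial_baseline(name, samples)

mgr = BaselineManager()
mgr.initial_baseline("love", [0.45, 0.50, 0.49])
mgr.new_baseline("fight", [0.60, 0.66, 0.58])
mgr.select("love")
mgr.update([0.52, 0.55])
print(mgr.baselines, mgr.active)
```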

In a still further advantageous method of the invention, the response curve and/or the stimulated reaction are related to a subconscious interaction with the media system.

The response curve may be related to the user's subconscious interaction with the media system. The stimulated reaction may be related to the user's subconscious interaction with the media system for passive entertainment. The stimulated reaction may be related to the user's passive interaction on a conscious level with the media system for passive entertainment purposes. An assessment of stimuli level may relate to a function which chooses whether the user is: a) over stimulated, b) within the planned reaction range, or c) under stimulated.

A track choice (tertiary) may relate to the baseline profile. At this point the initial baseline is established on the basis of x number of seconds or frames, determined by the producer at the beginning of the media production. To emphasize how the track choice function works, an example is provided: the planned stimulus is a sequence of images and audio displaying a human character who has two sacks with living animals inside of them (there is no indication of which animals). At this point the user's reaction is measured by the biometric sensor and compared with the initial baseline in order to determine the change in the baseline, with the aim of determining whether the user is under stimulated, over stimulated or within the planned reaction range (y1, y2 and y3, see fig. 2). If there is only one alternative stimulus for y1 or y3, then the only option the producer chose will be activated. If the user is over stimulated in relation to the increase the producer aimed for, then the decreased clip chosen by the producer will be activated (for instance the character in the story wakes up from a dream). If the response is y2 and the user's reaction is within the planned reaction range, then the media continues playing. If the user is under stimulated, then the clip which the producer has defined as more stimulating will be activated (for instance, the sacks with animals inside could catch fire). All measurements and reactions are saved in the database and are used for updating the user profile.
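The track choice step can be sketched roughly as follows; the numeric thresholds, function names and clip descriptions are assumptions chosen to mirror the example above, not the application's own code.

```python
def assess_stimuli_level(current, baseline, planned_low, planned_high):
    """Classify the measured change from the baseline against the planned range."""
    delta = current - baseline
    if delta > planned_high:
        return "y1"    # over stimulated
    if delta < planned_low:
        return "y3"    # under stimulated
    return "y2"        # within the planned reaction range

def track_choice(level, planned_alternatives):
    """Activate the producer's single planned alternative for y1 or y3, if any."""
    if level == "y2":
        return "continue main track"
    return planned_alternatives.get(level, "continue main track")

planned_alternatives = {"y1": "character wakes up from a dream",    # decreasing clip
                        "y3": "the sacks with animals catch fire"}  # more stimulating clip
level = assess_stimuli_level(current=0.71, baseline=0.50,
                             planned_low=0.05, planned_high=0.15)
print(level, "->", track_choice(level, planned_alternatives))
```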

A matching function (quaternary) relates to adjustments of stimuli. If there are more alternatives for, for instance, increases or decreases of adjustments of the stimuli, then the matching function will be activated. In other words, if there are two or more stimuli with the same function, e.g. both for an increase of arousal, for instance the sacks could contain either kittens or puppies, then there is a need to know what affects the user if kittens or puppies are chosen. In this way, the matching function sends a request to the database, which contains the user's profile, for the existence of data on the expected R° for kittens and puppies. If there is existing data on this matter, then the software will choose a stimulus from a predefined wish from the producer. If, for instance, there is data on both stimuli which indicates that kittens will give the highest increase in arousal, and the producer aimed for an increase in the user's arousal, then kittens will be chosen. If the producer aimed for less arousal, then the sequence with the puppies will be activated, as the matching function's aim is to choose the stimulus that gives the user a reaction as close as possible to the one intended by the producer. The reaction is then registered and saved in the database, which is continuously updated with the newest user reaction based on different stimuli.
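As a hedged sketch, the matching function could be expressed as a nearest-match lookup over the expected R° values in the profile; the dictionary layout and the example numbers are assumptions for illustration.

```python
def matching_function(candidates, expected_r, aimed_delta):
    """Pick the stimulus whose expected R° is closest to the producer's aimed change."""
    known = [c for c in candidates if c in expected_r]
    if not known:
        return None    # no data: defer to the 'learning for database' function
    return min(known, key=lambda c: abs(expected_r[c] - aimed_delta))

expected_r = {"kittens": 0.12, "puppies": 0.06}   # expected R° per stimulus, from the profile
print(matching_function(["kittens", "puppies"], expected_r, aimed_delta=0.10))  # -> kittens
print(matching_function(["kittens", "puppies"], expected_r, aimed_delta=0.04))  # -> puppies
```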

Learning for database (quinary). Continuing with the same example from the two previous sections, there is also the scenario with no registered R° value. In this situation there is no information about the user's reaction to either kittens or puppies. In this situation the 'Learning for database' function is activated, where the analysis unit chooses a random stimulus (if the producer did not predefine a specific order). With the example above, the character opens the sack, and it is full of kittens (the producer has prelabelled stimuli for analysis purposes). In other words, the analysis unit chooses a random stimulus which is played on the audio-visual device. Afterwards, the user's reaction is measured and analysed to check if the stimulus gave the aimed reaction. At the same time, the database updates the new reaction on how the user reacted to the kitten stimulus and establishes an intended R° for this specific stimulus. If the reaction is within the planned reaction range, then the next sequence is played (where the next scene could be playing on the audio-visual device). If the user is over or under stimulated, then the alternative stimulus with puppies will be played (for instance the character in the media opens sack number two, which is full of puppies). The new reaction is then registered in the database so there is stored information for the next time there will be puppies in a different media content. Additionally, the learning function also includes an image recognition subfunction which enables automatic identification and categorization and thereby automatic indexing of how the user is expected to react to different stimuli (e.g. images of a specific dog breed or the sound of a loud noise). In this way, the producer's manual labelling is minimized.
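A minimal sketch of the 'Learning for database' step, under the assumption that an unrated stimulus is simply played, measured and stored; the measurement stand-in and the dictionary used as the database are illustrative only.

```python
import random

def learning_for_database(candidates, expected_r, measure, baseline):
    """Play an unrated stimulus, measure the reaction and store it as the new R°."""
    stimulus = random.choice(candidates)      # no producer-defined order assumed
    reaction = measure(stimulus) - baseline   # measured change from the baseline
    expected_r[stimulus] = reaction           # database update for future matching
    return stimulus, reaction

expected_r = {}                               # no data yet on kittens or puppies
fake_measure = lambda stimulus: 0.62          # stand-in biosensor reading
print(learning_for_database(["kittens", "puppies"], expected_r,
                            fake_measure, baseline=0.50))
print(expected_r)
```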

The database / storage medium may for example include one or more of the following, but is not limited to:

1) Expected response to stimuli - decided by the producer.

2) All stimuli from the adaptive media system.

3) Baselines and baseline directives.

4) User profiles that are continuously updated by the 'learning for database' function.

Each user’s responses are logged in a profile which is part of a database that communicates with the analysis unit and vice versa.

The user profile includes a profile ID and five main attributes: parameters, properties, dimensions, triggers and expected response to stimuli.

I) Parameters - are categorized in:

1. Explicable emotions (for instance, fear, nostalgia, excitement, and suchlike).

2. Unexplained emotions - the two categories of emotions (explicable and unexplained) include respectively two subcategories: a) biological induced and b) social/cultural induced.

3. Conscious cognitions - which are categorized into three properties: a. Conscious memory retrieval - this property could for instance occur when the user is triggered by a specific element in the media content that triggers past event(s) from his/her personal past experiences. b. Conscious situation analysis - an example of conscious situation analysis could be when something odd happens in the story, a form of disconnection from the storyline giving motive to confusion. It can be of spatial or temporal nature. c. Conscious categorization - based on personal media experiences from earlier media exposures, the user could use them to categorize the present media content.

4. Unconscious cognitions a. Former categorizations (partially forgotten). b. Subconscious memories - these could be memories that were forgotten and are triggered by specific elements of the exposed present media. c. Repressed memories - it could for instance be hidden memories that come to surface because a specific element in the media has triggered them.

5. Known user opinions a. Political preferences b. Fundamental/key issues c. Cultural ideal

6. ASMR (Autonomous Sensory Meridian Response) a. Sounds - for instance whispering, blowing, water drops, etcetera. b. Physical - ear brushing, hair play, massage, etcetera. c. Situational - certain words, eye contact, role-play, etcetera. d. Visual - paint mixing, light patterns, hand movements, etcetera

7. Biosensor baselines - based on early biometric registrations of the user's reactions, an R° is saved in the profile showing the user's expected reaction to stimuli as a percentage. Based on the last time the user was exposed to a specific stimulus, there is an expectation that the next time the user is exposed to the same stimulus, the measurement of the reaction will be the same. a. GSR b. Pulse c. EEG d. PPG e. Hormonal monitoring f. Eye tracking (gaze point and pupil dilations). g. Blood sugar

II) Properties - define each of the seven parameters.

III) Dimensions - can include variations of high/low or many/few.

IV) Triggers - can be theme, audio, image, or word based.

V) Expected response to stimuli.

To illustrate how the profile is set up, two examples are outlined. The first example is of explicable emotions as the parameter and the property of fear, which is found on a scale from high to low when it comes to its dimension. The 'fear' is triggered by the appearance of a ghost in the exposed media and the producer has planned that the expected response to stimuli should be R° = 10% from the baseline. It is important to mention that this specific property is dependent on the user, as the user's reaction to the stimulus can be socially or culturally determined. For some users living in specific parts of the world, a ghost is considered scary and triggers fear, whereas in other parts of the world a ghost can be a friendly reminder of someone dear.

The second example illustrates the parameter of conscious cognitions with the property of conscious situation analysis. Dimensions are many/few, which refers to how many times the user experiences this property during the exposure of media. The trigger could be a specific character which starts a cognition process where the user tries to analyze the situation and tries to place the character in a different location where he/she has seen it. The expected response to stimuli is set to R° = 5% from the baseline, as the producer planned that the user should not have any strong reactions at this specific point.
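The two examples above could be represented in a profile record roughly as follows; the dictionary layout and field names are assumptions made for this sketch, not the application's own data format.

```python
# Hypothetical user profile record with the profile ID and the five main attributes.
user_profile = {
    "profile_id": "user-0001",
    "parameters": {
        "explicable_emotions": {
            "property": "fear",
            "dimension": "high/low",
            "trigger": "appearance of a ghost",
            "expected_response": 0.10,      # R° = 10% from baseline
        },
        "conscious_cognitions": {
            "property": "conscious situation analysis",
            "dimension": "many/few",
            "trigger": "a specific character",
            "expected_response": 0.05,      # R° = 5% from baseline
        },
    },
    "biosensor_baselines": {"GSR": 0.50, "pulse": 72.0},
}

print(user_profile["parameters"]["explicable_emotions"]["expected_response"])
```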

The present invention addresses this by providing a media system for carrying out the method, wherein the media system comprises:

- at least one media device configured to transmit at least one frame sequence,

- at least one sensor configured to retrieve biometric information,

- at least one profile storage medium,

- an analysis unit having analysing means,

- a processing unit having processing means configured to generate a biometric baseline profile from at least one baseline information related to the user, wherein the biometric baseline profile is stored in said profile storage medium.

The first sequence from the main video track is activated in the media system, such as an adaptive media player. Audio-visual device: the first sequence is played on an audio-visual device. Biosensor registration of the user's reaction: a registration is made of the user's reaction to the played sequence and the profile is updated.

The baseline may comprise a baseline status and directive check. The media system may perform a check of the user's baseline status and the producer's directive. The media system may provide an initial baseline establishment: the commencing of an initial baseline. The adaptive media system continues playing the visual sequence, MV, and the related audio sequence, MA, in an interrelated track as a common frame sequence. The frame sequence may be shown on an audio-visual device. Biosensor registration of user reaction: a new registration based on the new stimulus is done and the profile is updated. The media system may again perform a check of the user's baseline status and the producer's directive. There may be an already existing initial baseline being used. The media system may be in data communication with a storage medium such as a database. There may furthermore be data communication between the database and the analysis unit, with a request and reply regarding what the aimed stimulus should be, according to the user's baseline status and the producer's directive.

Assessment of the stimuli level is based on the user reaction, which is measured in relation to the baseline and the aimed reaction defined by the producer; it is now possible to determine if the user is under stimulated, within the planned reaction range or over stimulated. The stimuli level is related to the user's passive subconscious interaction with the interactive media system in a real-life situation in a real-life environment for enhancing user stimuli during entertainment, wherein the user is passively engaged.

Assessment of stimuli level within the planned reaction range: if the reaction is within the planned reaction range, then the playing of the media continues as planned. Assessment of stimuli level may be to assess whether the user is over or under stimulated; if over or under stimulated, then the adaptive media player is checked for planned alternative stimuli, and the track choice function is hereby activated. If there is only one stimulus planned for the corresponding under- or over-stimulated state, then that one stimulus will be activated, and the media playing will continue as planned.

A matching function may be activated if the adaptive frame sequence contains more than one stimulus for downward and upward adjustments; a matching will then be made. The matching is between the possible stimuli choices and what is registered in the user's profile regarding the expected reaction to the various types of stimuli, wherein the reaction is measured based on the R°. The new reaction is saved in the database.

Learning for database may be performed in the case that there are more alternative stimuli in the adaptive media player tracks and there is no existing data in the user profile about the expected reaction to stimuli; the 'Learning for database' function will be activated, a random stimulus will be activated, and the change will be saved. After the media playing on an audio-visual device, a new sensor measurement is made as well as the updating of the database; now there will be an expected reaction registered for the next time a choice is to be made. At the same time, for the next choice it is possible to define whether there is a wish to play the stimulus for which there is no expected reaction, in order to update the database, or to choose the secure way and select the stimulus for which there is an expected reaction registered in the database.

In an advantageous embodiment of the invention a frame generating device is configured to generate said frame sequences comprising at least one visual sub sequence and at least one audio sub sequence, such that said frame sequence is based on a biometric baseline profile of the user, wherein said frame sequence is transmitted from said media device comprising a visual transmitter unit and/or audio transmitter unit.

In a further advantageous embodiment of the invention the visual transmitter unit is arranged at a distance of more than 30 cm from the sensor. The visual transmitter unit may be arranged at a distance of more than 30 cm from the sensor. The visual transmitter unit may be arranged at a distance of more than 30 cm from the nearest sensor. Preferably the visual transmitter unit may be arranged at a distance of more than 1 meter from the nearest sensor. More preferably, the visual transmitter unit may be arranged at a distance of more than 2 meters from the nearest sensor. Alternatively, all the sensors are arranged at a distance of more than 5 meters from the visual transmitter unit. The visual transmitter unit is not attached or arranged directly onto the user. The visual transmitter unit is arranged away from the user. The sensors are arranged relatively close to the user. The visual transmitter unit is arranged away from the sensors.

To provide the user with a subconscious interaction with the interactive media system, the interactive media system is configured to observe the user while transmitting media content to the user. The user may wear a finger, hand, arm and/or leg wearable device comprising at least one sensor. The user may wear a headset device or ear plug device comprising a sensor or sensors. The arm and/or leg wearable device comprises at least one biometric sensor. The psychophysical reaction may be measured by the biometrical measuring device. The biometrical measuring device may comprise one or more sensors capable of measuring at least one psychophysical reaction when the biometric measuring device is attached to a user's body, arm and/or leg etc. The biometric data information may be used for analysing emotional reactions, such as anger, fear, happiness, sadness, anxiety, surprise and pleasantness or disgust etc. The interactive media system is configured to retrieve biometric data information from at least one biometric measurement device. The biometric data information may comprise information related to the user's heartbeat, perspiration, respiration, eye tracking, pupil dilation and facial expression etc. The biometric measurement device may be arranged on or relative to a user, such that the at least one biometric data information retrieved is related to at least one of the user's biometric reactions when the user is in a physically relaxed position.

In a further advantageous embodiment of the invention the media device is a handheld device. The interactive media device may be a handheld device, such as a smart phone, a laptop, a portable TV, a tablet etc.

In a still further advantageous embodiment of the invention the media device comprises one screen having a plane surface configured to transmit a two-dimensional image. A two-dimensional image may be shown on a substantially flat screen having a substantially plane surface. Consecutive images may be transmitted as two-dimensional images and may be received as two-dimensional images or, as an alternative, as three-dimensional images.

In a further advantageous embodiment of the invention the media system is a cinema system. The media system may be a cinema system, such as a larger public cinema system or a home cinema system.

The user passively interacts with the media system, without taking an active part during the event, after the user has started the event. The user has a non-physical approach to interacting with the system during the event. The user does therefore not move during the event; instead the user interacts with the system through inactive participation, for example in a dormant state of mind, relaxing on a couch or chair or similar and watching a movie. The screen is not fastened to the user. A media program comprising instructions which, when the program is executed by a media system, causes a computer unit to carry out the acts of the method. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the acts of the method.

Adjustment layers are characterized by the fact that they do not contain audio-visual stimuli, but rather adjustment instructions to video and audio stimuli in the interactive adaptive media system's tracks. It is for instance possible to put an adjustment layer on top of a video track and predetermine that the activation of the adjustment layer could for instance change the lighting in a video clip, change the volume on an audio track or add an audio or visual effect.

The invention has now been explained with reference to a few embodiments which have only been discussed in order to illustrate the many possibilities and varying design possibilities achievable with the interactive adaptive media system and a method for enhancing individualized media experiences according to the present invention.

Description of the Drawing

The invention will now be explained with reference to the accompanying drawings wherein:

Fig. 1a,b: one embodiment of an interactive adaptive media system.

Fig. 2: An adaptive media system comprising a frame sequence divided into sub sequences.

Fig. 3a, b: Illustrating a baseline estimation.

Fig. 4: A baseline normalized biosensor data response.

Fig. 5: Showing an example of producer input with x planned stimuli.

Fig. 6a, b: Illustrating examples of measured pulse response.

Detailed Description of the Invention

An embodiment and a method of the invention are explained in the following detailed description. It is to be understood that the invention is not limited in its scope to the following description or to what is illustrated in the drawings. The invention is capable of other embodiments and of being practiced or carried out in various ways.

Fig. 1 illustrates an interactive adaptive media system 1. A user 4 is exposed to audio-visual stimuli from the audio-visual media device 3. A biometric sensor 5 measures the user's 4 reaction to the exposed media content, a transmitted frame sequence X. The interactive adaptive media system 1 may comprise an analysis unit 7 comprising analysing means and a storage medium 6, e.g. a database. A frame generating device 2 is configured to generate said frame sequences X from a plurality of sub sequences 2_1, 2_2, ..., 2_n. Said frame sequences X comprise at least one visual sub sequence and at least one audio sub sequence. The frame sequence X is based on a biometric baseline profile of the user 4. The frame sequence X is transmitted from said media device 3, such as a screen and a loudspeaker.

Essentially, the media system 1 works as follows: there is an adaptive media player that plays media content to the user 4. The user 4 has a psychophysical reaction to the media content, the transmitted frame sequence X, that is measured by the biometrical sensors 5. The media system 1 sends data information to the analysis unit, which graduates the next media content that the adaptive media player exposes to the user based on the psychophysical reaction the user just had. In other words, the system is sensor-data based, autonomous and interactive and can choose one or more specific predefined alternative stimuli from the media content that is selected, thereby giving a tailored experience to the user 4. This happens based on the user's sensor 5 measurements and it can increase or decrease the arousal level through the choice of stimuli Y. The biometric information Z may be analysed to determine the reaction of the user 4 to the stimuli Y chosen. For example, by measuring a response curve relative to a pulse height value and a pulse width value, the media system 1 identifies whether the biometric information indicates an over stimulated reaction Y1, a neutral reaction Y2 or an under stimulated reaction Y3 of the user 4.

To emphasize how the track choice function works, an example is provided: the planned stimulus is a frame sequence of images and audio displaying a human character who has two sacks with living animals inside of them. There is no indication of which animals. At this point the user's 4 reaction is measured by the biometric sensor and compared with the initial baseline in order to determine the change in the baseline, with the aim of determining whether the user is under stimulated, overstimulated or within the planned reaction range y1, y2 and y3. If there is only one alternative stimulus for y1 or y3, then the only option the producer chose will be activated. If the user 4 is over stimulated in relation to the increase the producer aimed for, then the decreased clip chosen by the producer will be activated. For instance, the character in the story wakes up from a dream. If the response is y2 and the user's 4 reaction is within the planned reaction range, then the media continues playing. If the user 4 is under stimulated, then the clip which the producer has defined as more stimulating will be activated. For instance, the sacks with animals inside could catch fire. All measurements and reactions are saved in the database and are used for updating the user profile.

The adaptive media system comprises frame sequences divided into sub sequences, as illustrated in fig. 2. The way this media player works is as follows: the media content is constructed around two main video and audio tracks, MV and MA. A track may be a frame sequence or a plurality of frame sequences.

The main video track, MV, contains video clips, VC, and the main audio track contains audio clips, AC. During the exposure of the media content, there may be alternative clips to the main video clips that get activated, for instance VC5 B instead of VC5 A. The media content path would in this case be: VC1-VC2-VC3-VC4-VC5 B-VC6-VC7. The same happens when it comes to the audio clips, AC3 B getting activated instead of AC3, and the media content path could be: AC1-AC2-AC3 B-AC4-AC5 A-AC6-AC7. The additional audio and video tracks contain stimuli that can be activated based on the analysis unit, in the adaptive media system, at the given point. While figure 1 shows one main video and one main audio track as well as one secondary video and one secondary audio track, it is possible to have as many additional video and audio tracks as necessary. The same principle works for the augmented visual and audio tracks, AV, AS and SA, where there can be as many as necessary. The augmented visual tracks can work across the tracks, for instance augmented visual track 1, AV1, and augmented visual track 2, AV2, in different scenarios: a) two of the stimuli could be activated at the same time, AV2 A and AV2 B; b) both stimuli could be deactivated; c) stimulus AV3 A could be activated while AV3 B could be deactivated; d) stimulus AV4 B could be activated while AV4 A is deactivated. At the same time AV1 could have some stimuli while AV2 has no stimuli: a) there is only one stimulus on track AV1 where there is no stimulus on AV2, or b) there is a stimulus on AV2 while there is no stimulus on AV1. While the stimuli on the additional video tracks are all video clips, the stimuli from the augmented tracks are different in nature. They could be an augmented CGI, computer-generated imagery, sequence, AV2 A, AV4 A, AV4 B, an augmented visual effects sequence, AV1, AV2 B, or augmented images, AV3 A, AV3 B.
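A rough sketch of such a track layout and of how alternative clips are swapped into the media content path is shown below; the dictionary structure and the resolve_path helper are assumptions made for illustration, not the application's own data format.

```python
tracks = {
    "MV":   ["VC1", "VC2", "VC3", "VC4", "VC5 A", "VC6", "VC7"],
    "2ndV": {4: "VC5 B"},                 # alternative clip for position 4 (VC5)
    "MA":   ["AC1", "AC2", "AC3", "AC4", "AC5 A", "AC6", "AC7"],
    "2ndA": {2: "AC3 B"},                 # alternative clip for position 2 (AC3)
}

def resolve_path(main_track, alternatives, activated):
    """Build the media content path, swapping in the activated alternatives."""
    return [alternatives[i] if i in activated else clip
            for i, clip in enumerate(main_track)]

print(resolve_path(tracks["MV"], tracks["2ndV"], activated={4}))
# ['VC1', 'VC2', 'VC3', 'VC4', 'VC5 B', 'VC6', 'VC7']
print(resolve_path(tracks["MA"], tracks["2ndA"], activated={2}))
# ['AC1', 'AC2', 'AC3 B', 'AC4', 'AC5 A', 'AC6', 'AC7']
```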

The unique feature of the augmented video tracks, which differentiates them from existing media player technology, is their ability to change or manipulate traditional video tracks. Their function is to either add or alter content from MV, 2ndV etcetera. For instance, there could be image sequences with transparent background functioning as an overlay. Another way the augmented layer can be used is as an adjustment layer - which can for instance be used to adjust colors, exposure levels, contrasts, etcetera on the MV track and additional video tracks.

Regarding the augmented audio tracks, track 1, SA1, has audio clips with sound volume adjustments, SA1, SA2, whereas augmented audio track 2 has additional sound clips, AS1, AS2. Like the augmented visual tracks, the augmented audio tracks can also be activated or deactivated and combined across tracks.

The unique feature of the augmented audio tracks is that they can add a single sound to the main audio tracks, MA, 2ndA, etcetera, which can be either preplanned to the specific media content or retrieved from an audio library connected to the user profile. The augmented audio track can also be used as an adjustment layer to adjust sound volume on individual audio tracks, distort and add other audio effects to the main audio tracks.

An adaptive media system 1 starts with the exposure of media content on a media device 3. The media device 3 can be anything from a smart TV, smartphone, cinema screen or tablet to a PC monitor that plays audio-visual stimuli. At its core, the system is based on a starting point which is the planned stimuli, which incorporates three elements:

1) biosensor data registration, for instance GSR, pulse, hormonal monitoring,

2) user profiling, and

3) baseline(s).

The baseline is constant and dynamic, which means that there can be an initial baseline that is established at the beginning of the media production. At the same time the producer can set predefined baseline directives for the establishment of new baselines at different points in the media content, see fig. 3a. The use of a constant and dynamic baseline is the core of the present invention. This process results in y, which is the analysis of the current stimulus reaction based on the alteration of the user's baseline. This is used to determine one of three possible reactions:

1) over stimulated (y1),

2) under stimulated (y2), or

3) the x stimuli worked as planned from the beginning and y is the aimed reaction (y3).

The outcome of y is registered in the database and immediately enlarges the system's knowledge about the user and the user's expected reaction y, which constitutes the foundation for the next stimulus z that is shown. In this way the analysis of the user's biosensor measurements, in relation to divergence from the baseline, is constantly updated during the media experience. This therefore constitutes the foundation for constantly analysing the user's range of reaction and assessing suitable stimuli for ensuring the aimed and planned-ahead media experience.

Fig. 3a, b illustrate a baseline estimation. The initial baseline C is determined during the beginning of media playing; the time reference is to be determined individually for each media content. The individualized baseline for each user leads the user through the media and enhances the user's experience of the entire media.

Figure 3a, b shows a graph over a media production and the establishment of the user's initial baseline c at the beginning of it. Besides the initial baseline c, the user's baseline can also be set at different times during the media exposure, as an updated baseline d and a newly established baseline e, if the producer finds it relevant. The baseline is constant, which means that it can constitute the foundation for analysis, and it is also dynamic, as it can be updated or replaced during the media playing.

Figure 3a, b illustrates various biometric local and global responses. These can be seen as windows of specific timeframes (Δt). There are two main outcomes from the identification of biometric local and global responses: it is possible to identify the type of user response (both local and global), and the whole stimuli sequence consists of small local consequences, which can be put together into the overall user reaction. The producer can plan an unlimited number of baselines at any point in the media content. These can be of any length and stay activated until the next baseline is set up. Figure 3b portrays how local biosensor data registrations can on their own constitute the foundation for local analysis. At the same time, a combination of several local biosensor data registrations can constitute a global foundation for analysis. In this way it is possible to analyse data information, the user's measured reactions to stimuli, on different local and global scales. An example of a mathematical formula could look as follows: F = k_n * x_n + k_(n-1) * x_(n-1) + k_(n-2) * x_(n-2) + ..., where the sum of more local registrations can together constitute a global analytical basis.
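The weighted sum above can be illustrated with a few lines of Python; the coefficient values and the local registration values are arbitrary example numbers.

```python
def global_analysis(local_registrations, weights):
    """Weighted sum F = k_n*x_n + k_(n-1)*x_(n-1) + ... of local registrations."""
    return sum(k * x for k, x in zip(weights, local_registrations))

local_registrations = [0.12, 0.08, 0.15]   # local responses, one per Δt window
weights = [1.0, 0.8, 0.6]                  # k_n, k_(n-1), k_(n-2)
print(global_analysis(local_registrations, weights))   # 0.274
```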

The user's baseline in fig. 4 is a biosensor-data stimuli response, based on normalized values. R° is assigned within a determined timeframe or frame rate that can be defined by the producer for each individual media production. Any divergence from R° can be used to decide the audio-visual stimuli's effect on the user, and R° is determined from average and/or median values. The two stimuli responses illustrated in figure 4 portray two stimuli response tests showing the user's biometric data registrations, which can vary when based on, for instance, themes and context; context could for example entail the user's general mood and thereupon stimuli need.

Fig. 5 shows an example of producer input with x planned stimuli. In this regard the x planned stimuli are set at a specific time frame. Δt describes a specific time which can be defined for each individual media production, and it can be either elapsed time or frame rates. Each of the variables has to be typed in one of the two boxes. The producer then chooses one aimed biometric response type from a list of various response types. Fig. 5 portrays only seven examples.

Fig. 6a, b visually illustrate examples of measured pulse response. There are various possibilities to define the user's biometric responses, which are used to monitor whether the user gets the intended experience, which in turn decides the activation of stimuli tracks in the adaptive media player. For instance, if the 'pulse' response is chosen, then the producer has to define an aimed peak average, which is different from choosing, for instance, the 'spike' response, which only has one peak data point. In the media production process, the producer chooses a Δpost-pre value which depends on which kind of response he/she aims for, as they are different from one response type to another; the 'pulse' response, for instance, needs an average value, compared with the 'spike' response, which does not. Nevertheless, there is one common feature to all response types: all have pre-peak, pre-post and post-peak segments. Fig. 6a, b illustrate another way of planning input in a media production, where the producer types in variables of the aimed response, Phaimed, which is in this example set between 15 and 40%.
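A rough sketch of checking a measured 'pulse' response against an aimed range of 15-40% above baseline is given below; the segmentation around the maximum and the peak-average calculation are assumptions, not the definitions used in fig. 6.

```python
def pulse_response_check(samples, baseline, aimed_low=0.15, aimed_high=0.40):
    """Segment a 'pulse' response around its peak and check the peak average
    against the producer's aimed range (here 15-40% above baseline)."""
    peak_index = max(range(len(samples)), key=lambda i: samples[i])
    pre_peak = samples[:peak_index]
    post_peak = samples[peak_index + 1:]
    above = [s for s in samples if s > baseline]
    peak_average = (sum(above) / len(above) - baseline) / baseline if above else 0.0
    return {"pre_peak": pre_peak,
            "post_peak": post_peak,
            "peak_average": round(peak_average, 3),
            "within_aim": aimed_low <= peak_average <= aimed_high}

samples = [0.50, 0.56, 0.63, 0.70, 0.64, 0.57, 0.51]
print(pulse_response_check(samples, baseline=0.50))
```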

An example of a planned stimulus is K, which can for instance be a shock stimulus with a high a value and a small Δt value, which means a high increase in a very short time. Another example of a planned stimulus could be L, where both a and Δt are high, which could for instance be a suspense sequence where the producer intends a longer response span (high Δt). Fig. 6b shows two examples of deconstructed biometric response types, but there can be many more different ones.