


Title:
MONITORING OF AVATAR BEHAVIOR IN EXTENDED REALITY ENVIRONMENTS
Document Type and Number:
WIPO Patent Application WO/2023/186264
Kind Code:
A1
Abstract:
A shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user is controlled. The control includes receiving (109, 201) control input that indicates a behavioral action to be performed by the first avatar in the shared extended reality environment, and detecting (111, 203) that the indicated behavioral action is a first predefined behavioral action, wherein the first predefined behavioral action is one of a plurality of different predefined behavioral actions. A determination (111, 205) is made regarding whether an information display criterion associated with the first predefined behavioral action is satisfied. Explanatory information related to the first predefined behavioral action is presented (119, 207) on a display device associated with the first user when the information display criterion is satisfied, wherein the information display criterion comprises one or more of: an environmental criterion based on one or more characteristics of the shared extended reality environment; and a social criterion based on one or more characteristics of the second avatar.

Inventors:
ZOUROB MOHAMMED (SE)
KRISTENSSON ANDREAS (SE)
Application Number:
PCT/EP2022/058173
Publication Date:
October 05, 2023
Filing Date:
March 28, 2022
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
G06F3/01; A63F13/65; A63F13/87; G06Q10/06; G06Q50/22; G06T13/40; G06V20/50
Foreign References:
US 2020/0099640 A1, 2020-03-26
US 2011/0131509 A1, 2011-06-02
US 2021/0166008 A1, 2021-06-03
Attorney, Agent or Firm:
ERICSSON (SE)
Claims:
CLAIMS

1. A method of controlling a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user, the method comprising: receiving (109, 201) control input that indicates a behavioral action to be performed by the first avatar in the shared extended reality environment; detecting (111, 203) that the indicated behavioral action is a first predefined behavioral action, wherein the first predefined behavioral action is one of a plurality of different predefined behavioral actions; determining (111, 205) whether an information display criterion associated with the first predefined behavioral action is satisfied; and presenting (119, 207), on a display device associated with the first user, explanatory information related to the first predefined behavioral action when the information display criterion is satisfied, wherein the information display criterion comprises one or more of: an environmental criterion based on one or more characteristics of the shared extended reality environment; and a social criterion based on one or more characteristics of the second avatar.

2. The method of claim 1, comprising: determining (209) whether an inhibition criterion associated with the first predefined behavioral action is satisfied; and inhibiting (121, 209) the shared extended reality environment from being modified to include a performance of the first predefined behavioral action by the first avatar when the inhibition criterion is satisfied, wherein the inhibition criterion comprises one or more of: the environmental criterion based on the one or more characteristics of the shared extended reality environment; and the social criterion based on the one or more characteristics of the second avatar.

3. The method of claim 2, comprising: in response to determining that the information display criterion associated with the behavioral action is satisfied, performing: presenting a first prompting message on the display device associated with the first user; and receiving (213), from the first user, a response to the first prompting message, wherein the inhibition criterion further comprises: a user response criterion testing whether or not the response to the first prompting message indicates that the inhibition criterion is not satisfied.

4. The method of any one of the previous claims, wherein: the social criterion comprises a tolerance indicator associated with the second avatar, wherein the tolerance indicator indicates (211) whether or not the explanatory information related to the first predefined behavioral action is to be presented on the display device associated with the first user; and the information display criterion is not satisfied when the tolerance indicator indicates that the explanatory information related to the first predefined behavioral action is not to be presented on the display device associated with the first user.

5. The method of any one of the previous claims, comprising: obtaining a first offensiveness value associated with the first predefined behavioral action, wherein the first offensiveness value is one of a plurality of offensiveness values; and in response to determining that the information display criterion associated with the first predefined behavioral action is satisfied, performing: selecting, based on the first offensiveness value, a second prompting message from a plurality of second prompting messages; and presenting (215) the selected second prompting message on the display device associated with the first user.

6. The method of claim 5, wherein obtaining the first offensiveness value associated with the first predefined behavioral action further comprises: selecting, as the first offensiveness value, one of the plurality of offensiveness values based on a sensitivity indicator associated with the second user.

7. The method of any one of claims 5 and 6, wherein the selected second prompting message is a score that comprises a sum of a number of occurrences in which the received control input indicated the first predefined behavioral action.

8. The method of any one of claims 5 and 6, wherein the selected second prompting message is a score that comprises a sum of a number of occurrences over a completed span of time in which the received control input indicated the first predefined behavioral action.

9. The method of any one of claims 7 and 8, further comprising: inhibiting (115, 117) the first avatar from further sharing the extended reality environment with the second avatar for at least a predetermined amount of time when the score exceeds a first predefined threshold.

10. The method of any one of claims 7 through 9, further comprising: communicating, to a managing entity, a report relating to the first avatar when the score exceeds a second predefined threshold.

11. The method of any one of the previous claims, comprising: maintaining a log of information about incidents in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; receiving an information request associated with the first avatar; and in response to the information request, accessing the log and outputting requested information, wherein the requested information is derived from information stored in the log.

12. The method of claim 11, further comprising: receiving a comment about an incident in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; and modifying the log to include the received comment.

13. The method of any one of the previous claims, further comprising: presenting the first predefined behavioral action on an output device associated with the first avatar when the information display criterion associated with the first predefined behavioral action is satisfied.

14. The method of any one of the previous claims, wherein the one or more characteristics of the second avatar comprise one or more of: a cultural affiliation of the second avatar; and a geographical affiliation of the second avatar.

15. The method of any one of the previous claims, wherein the one or more characteristics of the shared extended reality environment comprise one or more of: whether the shared extended reality environment is a shared extended reality business environment; whether the shared extended reality environment is a shared extended reality dance environment; and whether the shared extended reality environment is a shared extended reality environment of a predefined geographical location.

16. A computer program (559) comprising instructions that, when executed by at least one processor (553), cause the at least one processor (553) to carry out the method according to any one of the previous claims.

17. A carrier comprising the computer program (559) of claim 16, wherein the carrier is one of an electronic signal, an optical signal, a radio signal, and a computer readable storage medium (555).

18. A controller (100, 200, 551) of a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user, the controller being configured to: receive (109, 201) control input that indicates a behavioral action to be performed by the first avatar in the shared extended reality environment; detect (111, 203) that the indicated behavioral action is a first predefined behavioral action, wherein the first predefined behavioral action is one of a plurality of different predefined behavioral actions; determine (111, 205) whether an information display criterion associated with the first predefined behavioral action is satisfied; and present (119, 207), on a display device associated with the first user, explanatory information related to the first predefined behavioral action when the information display criterion is satisfied, wherein the information display criterion comprises one or more of an environmental criterion based on one or more characteristics of the shared extended reality environment; and a social criterion based on one or more characteristics of the second avatar.

19. The controller (100, 200, 551) of claim 18, further configured to: determine (209) whether an inhibition criterion associated with the first predefined behavioral action is satisfied; and inhibit (121, 209) the shared extended reality environment from being modified to include a performance of the first predefined behavioral action by the first avatar when the inhibition criterion is satisfied, wherein the inhibition criterion comprises one or more of the environmental criterion based on the one or more characteristics of the shared extended reality environment; and the social criterion based on the one or more characteristics of the second avatar.

20. The controller (100, 200, 551) of claim 19, further configured to: in response to determining that the information display criterion associated with the behavioral action is satisfied, perform: presenting a first prompting message on the display device associated with the first user; and receiving (213), from the first user, a response to the first prompting message, wherein the inhibition criterion further comprises: a user response criterion testing whether or not the response to the first prompting message indicates that the inhibition criterion is not satisfied.

21. The controller (100, 200, 551) of any one of claims 18 through 20, wherein: the social criterion comprises a tolerance indicator associated with the second avatar, wherein the tolerance indicator indicates (211) whether or not the explanatory information related to the first predefined behavioral action is to be presented on the display device associated with the first user; and the information display criterion is not satisfied when the tolerance indicator indicates that the explanatory information related to the first predefined behavioral action is not to be presented on the display device associated with the first user.

22. The controller (100, 200, 551) of any one of claims 18 through 21, further configured to: obtain a first offensiveness value associated with the first predefined behavioral action, wherein the first offensiveness value is one of a plurality of offensiveness values; and in response to determining that the information display criterion associated with the first predefined behavioral action is satisfied, perform: selecting, based on the first offensiveness value, a second prompting message from a plurality of second prompting messages; and presenting (215) the selected second prompting message on the display device associated with the first user.

23. The controller (100, 200, 551) of claim 22, wherein being configured to obtain the first offensiveness value associated with the first predefined behavioral action further comprises being configured to: select, as the first offensiveness value, one of the plurality of offensiveness values based on a sensitivity indicator associated with the second user.

24. The controller (100, 200, 551) of any one of claims 22 and 23, wherein the selected second prompting message is a score that comprises a sum of a number of occurrences in which the received control input indicated the first predefined behavioral action.

25. The controller (100, 200, 551) of any one of claims 22 and 23, wherein the selected second prompting message is a score that comprises a sum of a number of occurrences over a completed span of time in which the received control input indicated the first predefined behavioral action.

26. The controller (100, 200, 551) of any one of claims 24 and 25, further configured to: inhibit (115, 117) the first avatar from further sharing the extended reality environment with the second avatar for at least a predetermined amount of time when the score exceeds a first predefined threshold.

27. The controller (100, 200, 551) of any one of claims 24 through 26, further configured to: communicate, to a managing entity, a report relating to the first avatar when the score exceeds a second predefined threshold.

28. The controller (100, 200, 551) of any one of claims 18 through 27, configured to: maintain a log of information about incidents in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; receive an information request associated with the first avatar; and in response to the information request, access the log and output requested information, wherein the requested information is derived from information stored in the log.

29. The controller (100, 200, 551) of claim 28, further configured to: receive a comment about an incident in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; and modify the log to include the received comment.

30. The controller (100, 200, 551) of any one of claims 18 through 29, further configured to: present the first predefined behavioral action on an output device associated with the first avatar when the information display criterion associated with the first predefined behavioral action is satisfied.

31. The controller (100, 200, 551) of any one of claims 18 through 30, wherein the one or more characteristics of the second avatar comprise one or more of: a cultural affiliation of the second avatar; and a geographical affiliation of the second avatar.

32. The controller (100, 200, 551) of any one of claims 18 through 31, wherein the one or more characteristics of the shared extended reality environment comprise one or more of: whether the shared extended reality environment is a shared extended reality business environment; whether the shared extended reality environment is a shared extended reality dance environment; and whether the shared extended reality environment is a shared extended reality environment of a predefined geographical location.

Description:
MONITORING OF AVATAR BEHAVIOR IN EXTENDED REALITY ENVIRONMENTS

BACKGROUND

The present invention relates to technology that enables the monitoring and/or control of avatar behavior in extended reality environments, such as augmented reality environments and virtual reality environments.

The use of so-called extended reality (XR) technology is becoming more and more widespread. The term XR technology is generic and includes, without limitation, Augmented Reality (AR) and Virtual Reality (VR) technology. XR equipment offers a wide range of applications, including some for entertainment and others for improving and/or enhancing productivity.

With respect to the latter, more and more meetings are being conducted between individuals who are located apart from one another, and current XR equipment offers virtual tools for enhancing the meeting experience. Applications along these lines are currently available for HoloLens and Oculus Quest equipment.

The meeting environments in these applications are all either virtual or augmented reality, with each user being represented by an avatar. Each user controls their own avatar, which can perform almost any kind of gesture and movement. In the real world today, there are rules, norms, and laws to govern and monitor individuals' behavior so that it conforms with decency and politeness guidelines, and yet even in the real world such behavior is quite difficult to monitor. While lack of compliance with decency and politeness does happen in the real world, individuals may be even more inclined to violate such rules and norms in a virtual environment, under the rationale that it is not real. However, inappropriate behavior in a virtual environment can still raise issues of cyberbullying and indecent behavior (bad gestures, sexual innuendos, etc.).

Currently, efforts to combat cyberbullying essentially target the written word. However, there will soon be a need to monitor the movement and behavior of individuals in XR (e.g., AR and/or VR) environments. This will be of utmost importance when those tools are used in professional environments such as companies, organizations, schools and even homes where parents are trying to make sure that their children are using the tool appropriately and that they are safe from this new type of cyberbullying.

While there are systems that can mask an inappropriate behavior or block it entirely, those systems do not notify users about why those gestures, words, and the like have been flagged. Hence, users can go on repeating the same behavior or sentences without being given the chance to learn why a gesture, word, or the like was wrong. While at first blush it may seem that a person ought to just know why, for example, a gesture is inappropriate, the inventors of the herein-described subject matter have realized, through inventive skill, that this view does not take into account how the same gesture can take on different meanings in different cultures.

Thus, it would be a great benefit to provide a mechanism that teaches about the differences. Such an automated education system would be of benefit to the user not only in an XR environment, but also in real-life interactions.

There is therefore a need for technology that addresses the above and/or related problems.

SUMMARY

It should be emphasized that the terms “comprises” and “comprising”, when used in this specification, are taken to specify the presence of stated features, integers, steps or components; but the use of these terms does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.

Moreover, reference letters may be provided in some instances (e.g., in the claims and summary) to facilitate identification of various steps and/or elements. However, the use of reference letters is not intended to impute or suggest that the so-referenced steps and/or elements are to be performed or operated in any particular order.

In accordance with one aspect of the present invention, the foregoing and other objects are achieved in technology (e.g., methods, apparatuses, nontransitory computer readable storage media, program means) in which a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user is controlled.

In an aspect of some but not necessarily all embodiments consistent with the invention, the control comprises receiving control input that indicates a behavioral action to be performed by the first avatar in the shared extended reality environment and detecting that the indicated behavioral action is a first predefined behavioral action, wherein the first predefined behavioral action is one of a plurality of different predefined behavioral actions. A determination is made regarding whether an information display criterion associated with the first predefined behavioral action is satisfied and, when it is, explanatory information related to the first predefined behavioral action is presented on a display device associated with the first user. The information display criterion comprises one or more of: an environmental criterion based on one or more characteristics of the shared extended reality environment; and a social criterion based on one or more characteristics of the second avatar.

In another aspect of some but not necessarily all embodiments consistent with the invention, controlling the shared extended reality environment further comprises determining whether an inhibition criterion associated with the first predefined behavioral action is satisfied; and inhibiting the shared extended reality environment from being modified to include a performance of the first predefined behavioral action by the first avatar when the inhibition criterion is satisfied. The inhibition criterion comprises one or more of: the environmental criterion based on the one or more characteristics of the shared extended reality environment; and the social criterion based on the one or more characteristics of the second avatar.

In some but not necessarily all of such embodiments, controlling the shared extended reality environment comprises, in response to determining that the information display criterion associated with the behavioral action is satisfied, performing presenting a first prompting message on the display device associated with the first user; and receiving, from the first user, a response to the first prompting message, wherein the inhibition criterion further comprises a user response criterion testing whether or not the response to the first prompting message indicates that the inhibition criterion is not satisfied.

In yet another aspect of some but not necessarily all embodiments consistent with the invention, the social criterion comprises a tolerance indicator associated with the second avatar, wherein the tolerance indicator indicates whether or not the explanatory information related to the first predefined behavioral action is to be presented on the display device associated with the first user; and the information display criterion is not satisfied when the tolerance indicator indicates that the explanatory information related to the first predefined behavioral action is not to be presented on the display device associated with the first user.

In still another aspect of some but not necessarily all embodiments consistent with the invention, controlling the shared extended reality environment comprises obtaining a first offensiveness value associated with the first predefined behavioral action, wherein the first offensiveness value is one of a plurality of offensiveness values; and in response to determining that the information display criterion associated with the first predefined behavioral action is satisfied, performing selecting, based on the first offensiveness value, a second prompting message from a plurality of second prompting messages; and presenting the selected second prompting message on the display device associated with the first user.

In some but not necessarily all of such embodiments, obtaining the first offensiveness value associated with the first predefined behavioral action further comprises selecting, as the first offensiveness value, one of the plurality of offensiveness values based on a sensitivity indicator associated with the second user. Further, in another aspect of some but not necessarily all of such embodiments, the selected second prompting message is a score that comprises a sum of a number of occurrences in which the received control input indicated the first predefined behavioral action. Alternatively, the selected second prompting message is a score that comprises a sum of a number of occurrences over a completed span of time in which the received control input indicated the first predefined behavioral action.

Still further, in another aspect of some but not necessarily all of such embodiments, controlling the shared extended reality environment further comprises one or more of: inhibiting the first avatar from further sharing the extended reality environment with the second avatar for at least a predetermined amount of time when the score exceeds a first predefined threshold; and communicating, to a managing entity, a report relating to the first avatar when the score exceeds a second predefined threshold.

In another aspect of some but not necessarily all embodiments consistent with the invention, controlling the shared extended reality environment comprises maintaining a log of information about incidents in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; and receiving an information request associated with the first avatar. In response to the information request, the log is accessed and requested information is output, wherein the requested information is derived from information stored in the log. In some but not necessarily all of such embodiments, a comment is received about an incident in which received control input directed the first avatar to perform a behavioral action that matched one or more of the plurality of different predefined behavioral actions; and the log is modified to include the received comment.

In yet another aspect of some but not necessarily all embodiments consistent with the invention, controlling the shared extended reality environment comprises presenting the first predefined behavioral action on an output device associated with the first avatar when the information display criterion associated with the first predefined behavioral action is satisfied.

In still another aspect of some but not necessarily all embodiments consistent with the invention, the one or more characteristics of the second avatar comprise one or more of: a cultural affiliation of the second avatar; and a geographical affiliation of the second avatar.

In another aspect of some but not necessarily all embodiments consistent with the invention, the one or more characteristics of the shared extended reality environment comprise one or more of: whether the shared extended reality environment is a shared extended reality business environment; whether the shared extended reality environment is a shared extended reality dance environment; and whether the shared extended reality environment is a shared extended reality environment of a predefined geographical location.

BRIEF DESCRIPTION OF THE DRAWINGS

The objects and advantages of the invention will be understood by reading the following detailed description in conjunction with the drawings in which:

Figure 1 is, in one respect, a flowchart of actions taken by a behavior monitoring system that controls a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user.

Figure 2 is, in one respect, a flowchart of actions taken by a behavior monitoring system that controls a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user.

Figure 3 is, in one respect, a flowchart of actions taken by a behavior monitoring system operating in an administrator mode, in which a user is provided the ability to store and/or modify predefined behavioral actions that are to be monitored.

Figure 4A illustrates an exemplary portion of a relational database that is used by the system to obtain information about different characteristics that are used by an information display criterion and an inhibition criterion.

Figure 4B illustrates a portion of an exemplary database that can be used in conjunction with the database of Figure 4A and that contains profile characteristics of individual avatars.

Figure 5 is a block diagram of an exemplary XR headset including its controller in accordance with some but not necessarily all exemplary embodiments consistent with the invention.

DETAILED DESCRIPTION

The various features of the invention will now be described in connection with a number of exemplary embodiments with reference to the figures, in which like parts are identified with the same reference characters.

To facilitate an understanding of the invention, many aspects of the invention are described in terms of sequences of actions to be performed by elements of a computer system or other hardware capable of executing programmed instructions. It will be recognized that in each of the embodiments, the various actions could be performed by specialized circuits (e.g., analog and/or discrete logic gates interconnected to perform a specialized function), by one or more processors programmed with a suitable set of instructions, or by a combination of both. The term “circuitry configured to” perform one or more described actions is used herein to refer to any such embodiment (i.e., one or more specialized circuits alone, one or more programmed processors, or any combination of these). Moreover, the invention can additionally be considered to be embodied entirely within any form of non-transitory computer readable carrier, such as solid-state memory, magnetic disk, or optical disk containing an appropriate set of computer instructions that would cause a processor to carry out the techniques described herein. Thus, the various aspects of the invention may be embodied in many different forms, and all such forms are contemplated to be within the scope of the invention. For each of the various aspects of the invention, any such form of embodiments as described above may be referred to herein as “logic configured to” perform a described action, or alternatively as “logic that” performs a described action.

Embodiments consistent with the herein-described technology provide a kind of XR firewall that prevents the user from communicating undesirable/unethical behavior to others who are sharing the virtual environment by monitoring aspects of the behavior of the user’s avatar and recognizing when those aspects match one or more predefined aspects, and taking certain steps in response when they do.

In an aspect of some but not necessarily all embodiments, an initial responsive step seeks to deter the user from repeating the recognized behavioral aspects by playing back the recognized movement to the user, so that they can recognize what it is and that it should not be repeated.

In a related aspect of some embodiments, a message/notification (e.g., a pop-up message) is presented to the user based on the analyzed image, text, gesture, and the like, asking the user to confirm that they want the flagged image, text, gesture, and the like to be presented/performed/communicated to other user avatars in the XR environment. For example, the user can be asked, “Are you sure you want to communicate this to said other party?” to give the user a chance to rethink what they are about to send, and also to prevent images, texts, or gestures made “by mistake” from being communicated to others. The message presented to the user can be based on the analysis of the material.

In another aspect of some embodiments, a real-time score of how negative or positive an image or text or gesture may be to others can be displayed to the user. This score can be, in some embodiments, a score of an avatar’s behavior accumulated over one session. This gives the user a chance to improve their behavior (e.g., increase compliance with expected behavioral norms) in real-time. Alternatively or in addition, a behavior score can be accumulated across multiple sessions. Going beyond a certain violation threshold may result in further measures being taken, such as but not limited to banning the user from a next session.
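To make the scoring aspect concrete, the following is a minimal sketch of how a per-session and cross-session behavior score might be tracked. The class name, threshold value, and weighting are illustrative assumptions for this example only and are not prescribed by this disclosure.

```python
# Minimal sketch of the per-session / cross-session behavior score
# described above. All names and values here (BehaviorScore,
# SESSION_BAN_THRESHOLD) are illustrative assumptions.

SESSION_BAN_THRESHOLD = 5  # assumed violation limit for one session

class BehaviorScore:
    def __init__(self):
        self.session_score = 0    # accumulated during the current session
        self.lifetime_score = 0   # accumulated across multiple sessions

    def record_violation(self, weight: int = 1) -> None:
        """Add one detected violation, optionally weighted by severity."""
        self.session_score += weight
        self.lifetime_score += weight

    def end_session(self) -> bool:
        """Close the session; True means ban the user from the next one."""
        banned = self.session_score > SESSION_BAN_THRESHOLD
        self.session_score = 0
        return banned
```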

In still another aspect of some but not necessarily all embodiments, the system controlling the shared XR environment can play back the flagged behavior not only to the user responsible for the behavior but also to one or more others in the shared XR environment in order to educate all of them about, for example, cultural differences.

In another aspect of some but not necessarily all embodiments consistent with the invention, a second level of deterrence is provided in the form of reporting the incident to a responsible entity, such as a direct line manager in the context of a work setting. In some but not necessarily all such embodiments, the report is communicated only after several incidents within a set period of time have accumulated under the user’s profile.

In still another aspect of some but not necessarily all embodiments consistent with the invention, some well-defined and easy to detect gestures are blocked by the system once it detects a sequence of movements that would otherwise cause the system to display an impolite or violating gesture. In some but not necessarily all of such embodiments, the recognized sequence’s proximity to other gestures are used to identify whether or not a certain gesture should be banned, given the context in which they occurred. For example, some otherwise violating gestures might be acceptable when performed during dance movements.
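As a rough illustration of this context test, the sketch below allows an otherwise flagged gesture when its neighbors in the recognized sequence suggest a dance context, and blocks it otherwise. The gesture names and window size are assumptions made purely for the example.

```python
# Sketch of the gesture-proximity context check described above.
# DANCE_MOVES and the window size are illustrative assumptions.

DANCE_MOVES = {"spin", "step_left", "step_right"}

def should_block(gesture_history, flagged_index, window=2):
    """Allow a flagged gesture if its neighbors suggest a dance context."""
    start = max(0, flagged_index - window)
    neighbors = (gesture_history[start:flagged_index]
                 + gesture_history[flagged_index + 1:flagged_index + 1 + window])
    in_dance_context = sum(g in DANCE_MOVES for g in neighbors) >= window
    return not in_dance_context

print(should_block(["spin", "flagged_move", "step_left"], 1))  # False: dance context
```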

Various aspects of inventive embodiments will now be described with reference to Figure 1, which is in one respect a flowchart of actions taken by a behavior monitoring system that controls a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user. In other respects, the blocks depicted in Figure 1 can also be considered to represent means 100 (e.g., hardwired or programmable circuitry or other processing means) for carrying out the described actions.

At step 101, a shared extended reality (XR) session (e.g., a virtual reality (VR) or augmented reality (AR) session) in which the shared extended reality environment resides is started (step 101). A number of initialization steps are taken, such as setting a session tolerance level based on such parameters as a participant list, an event class, and the like (step 103) and initializing the monitoring functionality (step 105). In some but not necessarily all embodiments, users are notified that they will be monitored by the system. After initialization is complete, the above-mentioned shared XR environment is established and maintained (e.g., by an application run by an application processor) (step 107). These actions are quite application-specific, so a thorough description is beyond the scope of this disclosure. Of relevance is that control input received from the user is monitored (step 109). When the user performs a behavioral action (e.g., a gesture), it is tested to determine whether that action would violate system-implemented restrictions regarding what behavioral actions are permissible (decision block 111). If no user action violation is detected (“No” path out of decision block 111), then an avatar representing the user in the shared XR environment is caused to carry out the user action (step 113). Processing then reverts back to step 109, where further actions by the user are monitored.
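The main monitoring loop of Figure 1 (steps 109 through 121) can be summarized as follows. This self-contained sketch is illustrative only; the rule set, threshold, and print statements stand in for the application-specific logic described above and below.

```python
# Illustrative, self-contained sketch of the Figure 1 monitoring loop
# (steps 109-121). The rule set, threshold, and stubs are assumptions,
# not taken from the application.

PREDEFINED_VIOLATIONS = {"gesture_x", "gesture_y"}  # assumed rule set
VIOLATION_THRESHOLD = 3                             # decision block 115

def monitor_session(control_inputs):
    """control_inputs: iterable of behavioral-action names from the user."""
    violation_count = 0
    for action in control_inputs:                       # step 109
        if action not in PREDEFINED_VIOLATIONS:         # decision block 111
            print(f"avatar performs {action}")          # step 113
            continue
        print(f"explaining why {action} was flagged")   # step 119
        violation_count += 1
        if violation_count > VIOLATION_THRESHOLD:       # decision block 115
            print("ending user's participation")        # step 117
            break
        print(f"filtering out {action}")                # step 121

monitor_session(["wave", "gesture_x", "bow", "gesture_x"])
```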

If it is detected that the user-supplied behavioral action will violate system-implemented restrictions (“Yes” path out of decision block 111), then information about the violation is presented to the user (and optionally to other users) (step 119). The information may, for example, educate the user about why the gesture was considered a violation. Such information is especially helpful when the user has performed a behavioral action that is perfectly acceptable among their peers and in the cultural environment in which they reside, but that may be considered offensive to another user who is also present in the shared XR environment. The information may be presented to the user in any of a number of ways, such as but not limited to a pop-up message appearing in the shared XR environment, or by sending the user an email notification with a clip illustrating the violating behavioral action and explaining why it was flagged as a violation. Alternatively, the email might not be so specific, and instead merely present general advice about acceptable behavior and/or give the user a course on the system's rules, and the like.

The user’s avatar may or may not be prevented from performing the violating action, depending upon contextual circumstances (step 113). In one nonlimiting example, the user is informed about the violation, but is then prompted with the question “Do you want to repeat the gesture?” If the user supplies a response indicating “yes”, then the system permits the violating behavioral action to be performed by the user’s avatar within the shared XR environment.

In some but not necessarily all embodiments, additional actions are taken. For example, after detecting a user action violation, a violation counter may be incremented and then tested against a threshold value (decision block 115). If the violation count exceeds a predefined threshold value (“Yes” path out of decision block 115), then some further action might be taken in response, such as but not limited to ending the user's participation in the session (step 117). In some embodiments, the user may additionally be presented with information intended to educate them about the violation and the system's rules governing acceptable behavior.

In another example, in addition to presenting the information related to the violating behavior, the violating behavioral action might be filtered out so that the user’s avatar is inhibited from performing it in the shared XR environment (step 121).

Further aspects of exemplary embodiments consistent with the invention will now be described with reference to Figure 2 which is, in one respect, a flowchart of actions taken by a behavior monitoring system that controls a shared extended reality environment that includes a first avatar controlled by a first user and a second avatar controlled by a second user. In other respects, the blocks depicted in Figure 2 can also be considered to represent means 200 (e.g., hardwired or programmable circuitry or other processing means) for carrying out the described actions.

In step 201, control input that indicates a behavioral action to be performed by the first avatar in the shared extended reality environment is received by the system from the first user.

The system then tests whether the indicated behavioral action is one of a plurality of predefined behavioral actions (decision block 203). If not (“No” path out of decision block 203), then processing reverts back to step 201 to await more control input from the first user.

If it is detected that the indicated behavioral action is one of the plurality of predefined behavioral actions (“Yes” path out of decision block 203), then in some embodiments explanatory information about the detected predefined behavioral action is presented to the first user (step 207). This may take the form of any of the methods discussed earlier with reference to Figure 1.

In some alternative embodiments, after detecting that the indicated behavioral action is one of the plurality of predefined behavioral actions (“Yes” path out of decision block 203), a further test is performed to determine whether an information display criterion associated with the first predefined behavioral action is satisfied (decision block 205). The information display criterion in some but not necessarily all embodiments comprises one or more of: an environmental criterion based on one or more characteristics of the shared extended reality environment; and a social criterion based on one or more characteristics of the second avatar.

The use of an information display criterion in this way is in recognition of the fact that, although a particular behavioral action may be inappropriate in a general sense, it may be acceptable in other contexts (e.g., environmental contexts such as whether or not the shared XR environment is a business environment or an informal dance, or in what geographic region it is being performed; and/or social contexts such as whether or not the first and second users are good friends, or if the second user has set a profile parameter specifically indicating that they have a high tolerance for otherwise objectionable behavior).

Accordingly, if the information display criterion associated with the detected predefined behavioral action is not satisfied (“No” path out of decision block 205), then the detected predefined behavioral action will in this instance be considered acceptable, and no educational information about it will be presented to the user, nor in this embodiment is its performance in the shared XR environment to be inhibited. Instead, processing reverts back to step 201 where the system awaits further control input from the first user.

However, if the information display criterion associated with the detected predefined behavioral action is satisfied (“Yes” path out of decision block 205), then explanatory information related to the first predefined behavioral action is presented in some way to the first user (e.g. by presenting it in a display device associated with the first user) (step 207).

In another aspect of some but not necessarily all embodiments, the system conditionally inhibits the shared XR environment from being modified to include a performance of the detected predefined behavioral action by the first user's first avatar (step 209). There are many possible conditions that might be tested in particular embodiments in order to decide whether or not to inhibit performance of the detected predefined behavioral action; a minimal sketch combining them appears after this list. A few nonlimiting examples are:

- Has the second user indicated a high level of tolerance for the particular predefined behavioral action? (Step 211)

- Has the first (or second) user specifically indicated that they want the gesture to be performed in the shared XR environment despite being informed about its violating nature? (Step 213)

- Does an “offensiveness level” for this particular predefined behavioral action satisfy a predefined threshold value? (Step 215) In such embodiments, the user might be prompted to indicate whether or not they want to go ahead with the behavioral action.
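The sketch below shows one way these example conditions (steps 211, 213, and 215) might be combined into a single inhibition decision. The field names, their priority ordering, and the threshold are assumptions, since the disclosure does not prescribe a particular data model.

```python
# Sketch of the inhibition checks of Figure 2 (steps 209-215), using
# assumed field names and an assumed priority ordering.

from dataclasses import dataclass

@dataclass
class Context:
    second_user_tolerates: bool      # step 211: second user's tolerance
    user_confirmed_anyway: bool      # step 213: response to the prompt
    offensiveness: int               # step 215: action's offensiveness level
    offensiveness_threshold: int = 5 # assumed threshold value

def should_inhibit(ctx: Context) -> bool:
    if ctx.second_user_tolerates:    # high tolerance: allow the action
        return False
    if ctx.user_confirmed_anyway:    # explicit user override: allow
        return False
    return ctx.offensiveness >= ctx.offensiveness_threshold

print(should_inhibit(Context(False, False, 7)))  # True: action is inhibited
```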

Additional aspects of some but not necessarily all embodiments consistent with the invention are described in the following points:

As mentioned earlier, when one of the predefined behavioral actions is detected as input from the first user, the system may present that user with a pop-up message that is selected based on the analyzed image, text, or gesture and that asks the user a question such as, “Are you sure you want to communicate this to said other user?” Such a question gives the user a chance to reconsider what they are about to send and also prevents images, texts, or gestures that have been made by mistake from being communicated. This message is preferably based on an analysis of the detected input from the user, so that it can be contextually appropriate.

In some but not necessarily all further embodiments, a real-time score is displayed to the user, indicating how negative or positive an image, text, or gesture may be. A score may also be maintained and presented that shows how many times the user's input caused a behavioral action violation. This score could also be the accumulated score of the user's avatar during one session, which gives the user a chance to improve their behavior in real time, whether across one session or multiple sessions. Alternatively, a “violation” score may be accumulated over some number of complete sessions (i.e., a sum of a number of occurrences over a completed span of time in which the received control input indicated the first predefined behavioral action). Going beyond a certain threshold may, for example, cause the user to be banned from the next session.

In some but not necessarily all further embodiments, when the user has accumulated a number of behavior violations that satisfies a predetermined reporting threshold, the user is reported to some established authority. In more nuanced embodiments, the level and/or type of relationship between the users is taken into account when the number of violations is considered (e.g., a higher violation count may be tolerated in the case of two good friends interacting with each other).

In yet another aspect of some but not necessarily all embodiments, the system is configured to give users a chance to comment on certain incidents and to access previous incidents in order to review them and learn why a certain incident was flagged.

XR (e.g., AR/VR) firewall: The system monitors the behavior of avatars in XR environments and recognizes when it violates predefined rules (e.g., rules designed to prevent unethical or offensive behavior). As described earlier, when a violating behavior is detected, personal feedback is provided to the user as a first measure of deterrence. In addition, in some but not necessarily all embodiments, the avatar's behavior is replayed to the user so that the user will know exactly what move was flagged. In some embodiments, a second level of deterrence is provided by supplying an automated report to an authority figure, such as the user's direct line manager. A generic set of moves is stored in a database as generally offensive movements that are to be tracked. Avatar line of sight, proximity to other avatars, and, in some higher-complexity systems, cultural norms are taken into consideration as well.

Parental control/Schools/Universities: When shared XR environments are used in an educational setting, parents and teachers can advantageously use a monitoring system as described herein because it acts as a deterrent for users who might try to abuse the tool and, for example, use it to harass other users. When applied in such settings, a predetermined set of moves is monitored by the system, as well as avatars' lines of sight and proximity to other avatars. Information about a given avatar's line of sight (which can broadly include anything displayed to a user) can be used by the system to, for example, determine whether the behavior of other avatars has the possibility of offending the given avatar. Two levels of reporting can be used, where an incident initiates a warning message to the offending user and, after a collection of incidents, a report is sent to the overseeing body (e.g., parents, the school or university administration, etc.) to take action. If a user then continues to violate the established rules of behavior, a third deterrent can, for example, involve the system blocking movement or gesture combinations that might be perceived as offensive, and/or causing such movements or gestures to be blocked from being recreated (i.e., performed) at the receiver end. Friendliness level is another parameter that is taken into consideration in setting thresholds on what is to be displayed or blocked.

In other embodiments of the avatar monitoring system, it is configured to enhance the user experience by either facilitating usage or adapting to certain conditions. This class of embodiments can be an option that is to be enabled by the user. Exemplary embodiments can include one or more of:

a. Gesture shortcuts: A predefined sequence of movements performed by the user causes a slogan to be displayed over the top of the user's avatar. For example, a Japanese bow would mean hello or respect, and the like, and the displayed slogan can say this. Initiating a combination of movements triggers the system to display the intended sign, gesture, or slogan.

b. Avatar usage can be adapted based on the culture, so if left-handed usage is culturally expected and the user is right-handed, then the user can keep using the right hand, but the system “translates” the movement so that the user's avatar will move the left hand to stay compliant with the cultural rules.

In another example of cultural adaptation, an avatar's mode of dress is adapted based on scenario or environment (e.g., church, company, home, etc.): A user might configure their avatar to be displayed in a certain type of clothing that might not always be appropriate for the virtual venue. To address this situation, in accordance with some exemplary embodiments, a feature can be enabled that causes the user's avatar to be displayed in whatever clothing is appropriate/compliant for the virtual venue. In some but not necessarily all of such embodiments, the system can cause the user's own perception of their avatar to be unaffected, such that the user still sees their avatar in the clothing of their choice, while others see the avatar in the adapted clothing.

In general, the detection and firewalling can be applied at the gesture source, at an intermediate point (e.g., in the cloud), or at the gesture-receiving side.

In aspects of some but not necessarily all alternative embodiments, the system can be configured to allow anyone trusted by the user to be let into their “safe zone” in the shared XR environment, meaning that the avatar would be allowed (if the device permits) to perform any gestures. On the other hand, the system would also maintain a blacklist that includes identities of known users/avatars that should never be let into the user's sphere or interaction zone. In still further alternative embodiments, the system may treat an area in between the two zones as a transitional one, in which some rules are somewhat relaxed, but monitoring is still very active and immediate actions can be taken to limit any bad behavior. This can be applied on a per-user/avatar basis.

In still another aspect of some but not necessarily all embodiments consistent with the invention, in order to avoid false flags and also to give users a chance to correct an impulsively made gesture, the system can, upon detection of a prohibited gesture/move, display a message warning the user before pushing the gesture to other users. The message can be, for example, “Are you sure you want to send the latest gesture as it might be insulting to the other user?”

In yet another aspect of some but not necessarily all embodiments consistent with the invention, the system is configured to use its various sensors to collect information about a user's current state of mind (sad, angry, happy, etc.), as indicated by the different sensory inputs it has, and to respond by presenting the user with some particular avatar movements that have been stored in the system's library, with the suggestion that the user choose one of them for the avatar to perform. For example, a determination that the user is happy can trigger a few related avatar movements (e.g., clapping hands, hands in the air, “high five”, and the like) to be suggested.
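As one possible reading of the state-of-mind aspect just described, the sketch below maps a detected mood to candidate avatar movements from a stored library. The mapping entries and function name are illustrative assumptions, not part of the disclosure.

```python
# Sketch of the mood-based movement suggestion described above: a detected
# state of mind maps to candidate avatar movements from a stored library.
# The mapping entries below are illustrative only.

MOVEMENT_LIBRARY = {
    "happy": ["clapping hands", "hands in the air", "high five"],
    "sad": ["slow nod", "head down"],
    "angry": ["deep breath pose"],
}

def suggest_movements(detected_mood: str) -> list:
    """Return candidate movements for the user's avatar to perform."""
    return MOVEMENT_LIBRARY.get(detected_mood, [])

print(suggest_movements("happy"))
```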

Further aspects of exemplary embodiments consistent with the invention will now be described with reference to Figure 3 which is, in one respect, a flowchart of actions taken by a behavior monitoring system operating in an administrator mode, in which a user is provided the ability to store and/or modify predefined behavioral actions that are to be monitored (e.g., as described above with reference to Figure 2). In other respects, the blocks depicted in Figure 3 can also be considered to represent means 300 (e.g., hardwired or programmable circuitry or other processing means) for carrying out the described actions.

In step 301, the user enters administrator credentials, since it is advantageous that only authorized individuals be able to define what behaviors are to be monitored. The credentials are checked against the expected ones (decision block 303) and if they are not correct (“No” path out of decision block 303) then the system remains in a locked state and awaits further entry of administrator credentials at step 301.

But if the entered credentials are correct (“Yes” path out of decision block 303), the user indicates whether they desire to delete, modify, or add moves (step 305) and the system responds accordingly. For example, if the user wishes to add a new behavioral action to be monitored, processing continues at step 307 where the system presents an interface that allows entry of relevant information. This includes, for example, the user inputting a name for the behavior/move/gesture (step 309), and then inputting various characteristics (e.g., environmental, social, tolerance level, etc.) that are relevant to the newly added behavior (step 311). In some but not necessarily all embodiments, the interface for allowing entry of relevant information may also show the user a list of currently defined behavioral actions. This may help prevent the user from again adding an already existing behavior/move/gesture (e.g., in case the user forgot or was unaware that it already existed in the system). The user is then prompted to perform the move or gesture some number of times, and as the user does this the system captures the control input from the various sensors that are detecting these movements (step 313).

With each repetition of the new behavioral action, the system not only captures new sensor data, but also determines whether it would have detected the behavior based on the information it already has. If its percentage of correct matches does not reach a given threshold value (“No” path out of decision block 315), then the system is not yet performing well enough, so training continues with processing reverting back to step 313.

Once the percentage of correct matches reaches or exceeds the given threshold value (“Yes” path out of decision block 315), training is considered complete and processing reverts back to step 305 where the user can indicate a next action they would like to take.
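The training loop of decision block 315 might look like the following sketch, in which the detector's running fraction of correct matches is compared against a threshold. The threshold value, minimum sample size, and input representation are assumptions made for illustration.

```python
# Sketch of the training loop of Figure 3 (steps 313 and 315): the user
# repeats the new move until the rate of correct matches reaches a
# threshold. All names and values are illustrative assumptions.

MATCH_THRESHOLD = 0.9   # assumed required fraction of correct detections
MIN_REPETITIONS = 10    # assumed minimum sample size before deciding

def train_new_behavior(detections):
    """detections: list of booleans, one per repetition of the new move."""
    correct = 0
    for attempt, detected in enumerate(detections, start=1):
        if detected:                      # step 313: capture and try to match
            correct += 1
        if attempt >= MIN_REPETITIONS and \
                correct / attempt >= MATCH_THRESHOLD:
            return True                   # "Yes" path out of decision block 315
    return False                          # more training repetitions needed

print(train_new_behavior([True] * 9 + [False] + [True] * 5))  # True
```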

Returning now to a discussion of block 305, if the user indicates that they want to modify or delete an already-defined behavioral action, a list of existing predefined behavioral actions is displayed to the user (step 319) and the user is asked to select one. Upon making a selection, the user is asked whether they want to delete or modify the selected predefined behavioral action (decision block 321). If they respond with “delete” (“Delete” path out of decision block 321), then the system removes the entries pertaining to the selected predefined behavioral action (step 323) and processing reverts back to step 305 so the user can continue their session or quit.

If at decision block 321 the user instead indicates that they want to modify the selected behavioral action, processing continues at step 309, where the user interacts with the system in the same way as when a new behavioral action is being defined: it is given a name (step 309), associated characteristics are entered (step 311), and the user retrains the system to recognize the behavioral action (steps 313 and 315). When the modification is complete, processing reverts back to step 305 as described earlier.

Further aspects of embodiments consistent with the invention will now be described with reference to Figures 4A and 4B. Looking first at Figure 4A, this is an exemplary portion of a relational database 401 that is used by the system. In the illustrated portion, different characteristics that are used by an information display criterion and an inhibition criterion are defined along with other parameters to which they are related. As described earlier, the information display criterion is used to determine whether or not information relating to a detected one of a plurality of different predefined behavioral actions will be displayed to the user. The inhibition criterion similarly determines whether or not a detected one of the plurality of different predefined behavioral actions will be performed in the shared XR environment. Each one of these criteria comprises one or more of: an environmental criterion that is based on one or more characteristics of the shared extended reality environment; and a social criterion that is based on one or more characteristics of an avatar.

Accordingly, the exemplary portion of the relational database 401 includes entries for environmental characteristics and also for avatar characteristics. Environmental characteristics pertain to the shared XR environment itself, and to illustrate this point three different types of environments are defined: “Business”, “Country 1”, and “Dance”. The entries in the illustrated database 401 show that when the environment is a “Business” environment, three behaviors are relevant: “Behavior 1”, “Behavior 2”, and “Behavior 3”. The entries further show that none of these behaviors is permitted in a business environment.

The entries in the illustrated database 401 similarly show that when the environment is “Country 1”, three behaviors (“Behavior 1”, “Behavior 3”, and “Behavior 5”) are not permitted.

The third illustrated environment is a “Dance” setting, and here the database 401 shows that “Behavior 3”, which is not permitted either in a business setting or in Country 1, is expressly permitted when performed in the context of a dance.
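
Purely as an illustration, the environment-related entries just described could be represented as follows. The dictionary layout shown here is an assumption made for readability; Figure 4A depicts these entries as rows of a relational table:

    # Hypothetical representation of the environment entries of database 401.
    # True = expressly permitted, False = not permitted; a behavior with no
    # entry for a given environment is not constrained by that environment.
    ENVIRONMENT_RULES = {
        "Business":  {"Behavior 1": False, "Behavior 2": False, "Behavior 3": False},
        "Country 1": {"Behavior 1": False, "Behavior 3": False, "Behavior 5": False},
        "Dance":     {"Behavior 3": True},  # expressly permitted in a dance setting
    }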

Avatar characteristics pertain to social aspects, and to illustrate this point, the exemplary database 401 shows that an avatar can have a cultural affiliation (illustrated here as “Culture 1”), a nationality/country affiliation (illustrated here as “Country 1”), and an individual identity (i.e., particular individuals/avatars can be defined in the database 401). In this example, if an avatar is affiliated with “Culture 1”, then “Behavior 3” is not permitted (i.e., other avatars should not perform “Behavior 3” where an avatar affiliated with “Culture 1” can experience it (e.g., see or hear it)).

The database 401 also illustrates that if an avatar is affiliated with “Country 1”, the same three behaviors (i.e., “Behavior 1”, “Behavior 3”, and “Behavior 5”) that are not permitted inside “Country 1” are also not permitted in front of this particular avatar.

The exemplary database 401 also shows that if an avatar happens to be an individual herein identified as “Charlie”, any of the behaviors “Behavior 1”, “Behavior 2”, “Behavior 3”, and “Behavior 4” would be permitted even if other characteristics (e.g., the environment is a business setting) would cause them to be prohibited. This illustrates an aspect of some but not necessarily all embodiments in which some characteristics have a higher priority than others, so when there is a conflict between whether or not a behavior is permitted, the characteristic having the higher priority is the deciding factor.
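
One possible way to realize such a priority scheme is sketched below. This sketch is merely illustrative: it assumes that the applicable rule sets can be ordered from most specific (e.g., an individual avatar) to least specific (e.g., the environment), and the function name, the ordering, and the default verdict are all assumptions introduced here:

    # Hypothetical priority resolution: the first (highest-priority) rule
    # layer that mentions the behavior decides the verdict.
    def is_permitted(behavior, rule_layers):
        # rule_layers is ordered from highest to lowest priority, e.g.
        # [individual_rules, culture_rules, country_rules, environment_rules].
        for layer in rule_layers:
            if behavior in layer:
                return layer[behavior]
        return True  # assumed default: behaviors with no applicable rule are allowed

    # Usage example based on the "Charlie" entry described above:
    charlie = {"Behavior 1": True, "Behavior 2": True,
               "Behavior 3": True, "Behavior 4": True}
    business = ENVIRONMENT_RULES["Business"]  # from the sketch above
    is_permitted("Behavior 2", [charlie, business])  # True: "Charlie" outranks "Business"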

Figure 4B similarly illustrates a portion of an exemplary database 403 that can be used in conjunction with the database 401. The exemplary database 403 contains profile characteristics of individual avatars. For example, Avatar 1 has two defined characteristics: a name (“Barbara”) and an affiliated country (“Country 1”). Based on these characteristics, any behavioral rules associated with entries for “Barbara” or for “Country 1” in the exemplary database 401 will apply to Avatar 1. For example, since Avatar 1 is affiliated with “Country 1”, the rules for “Country 1” will be consulted whenever Avatar 1 is present in a shared XR environment. So at least initially, it is found that none of the behaviors “Behavior 1”, “Behavior 3”, and “Behavior 5” should be performed in the presence of Avatar 1. But in this particular example, the profile for Avatar 1 also shows that Avatar 1 is tolerant of two of the otherwise prohibited behaviors, namely “Behavior 1” and “Behavior 3”, and this rule takes priority over the more general rule that applies to an entire country. However, the profile for Avatar 1 expressly shows that “Behavior 5” is not permitted, which is the same result that the system would arrive at based on the “Country 1” characteristic.
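
Continuing the illustrative sketch above, the tolerance entries for Avatar 1 and the resulting per-behavior verdicts could look as follows (again, the data layout is an assumption made for readability, not a prescribed format; the values are drawn from the example):

    # Hypothetical tolerance entries for Avatar 1 (database 403), combined
    # with the "Country 1" rules of database 401 via is_permitted() above.
    avatar_1_tolerance = {"Behavior 1": True,    # tolerated despite country rule
                          "Behavior 3": True,    # tolerated despite country rule
                          "Behavior 5": False}   # expressly not permitted
    country_1 = ENVIRONMENT_RULES["Country 1"]

    for b in ("Behavior 1", "Behavior 3", "Behavior 5"):
        print(b, is_permitted(b, [avatar_1_tolerance, country_1]))
    # Behavior 1 True   -- the individual tolerance outranks the country rule
    # Behavior 3 True   -- likewise
    # Behavior 5 False  -- same verdict as the country rule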

It will be appreciated that the databases 401 and 403 illustrated in Figures 4A and 4B are merely examples, and that in practice the databases can take on many different forms. The form and design of such databases are application-specific, and a complete description of them is therefore beyond the scope of this disclosure. However, a person of ordinary skill in the art will readily understand how to make and use a database that is compatible with various embodiments consistent with the invention described herein.

Aspects of an exemplary XR headset 501 configured in accordance with inventive embodiments will now be discussed with reference to Figure 5. The XR headset 501 includes a number of components that are known in the art, and therefore need not be described here in detail. These include but are not limited to a number of sensors 503, a headset display 505, wired and/or wireless interface(s) 507, 509 for connection to external components, power source and power management circuitry 511, and a headset controller 513. The headset controller 513 can, in some embodiments, include a sensors/display controller 515 and an application/monitoring processor 517. A headset controller memory 519 is also provided to store programs and data for one or both of the sensors/display controller 515 and the application/monitoring processor 517.

The headset controller 513 can be configured to control various aspects of the XR headset 501 to support the XR environment for the user. In addition, in some but not necessarily all embodiments, the headset controller 513 (e.g., the application/monitoring processor 517) is also configured to carry out the various monitoring and related operations described throughout this description in conjunction with Figures 1-3, 4A and 4B. Alternatively, aspects such as the monitoring and related operations described throughout this description in conjunction with Figures 1-3, 4A and 4B can be implemented in an application/monitoring controller 551 that is external to the XR headset 501. The application/monitoring controller 551 includes an interface 557 for connecting to the XR headset 501. The interface 557 may be wired or wireless (e.g., via wireless functionality 561).
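
As a minimal sketch of such an external arrangement, the headset might forward captured control input to the application/monitoring controller 551 over the interface 557 as shown below. The host name, port, and message framing are hypothetical assumptions made for this sketch, and the actual transport may be wired or wireless as noted above:

    # Hypothetical forwarding of control input from XR headset 501 to the
    # external application/monitoring controller 551 over interface 557.
    import json
    import socket

    def forward_control_input(sample, host="monitoring-controller.local", port=5555):
        # The host/port and JSON framing are assumptions for this sketch.
        message = json.dumps({"type": "control_input", "data": sample}).encode()
        with socket.create_connection((host, port)) as conn:
            conn.sendall(message)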

The application/monitoring controller 551 is configured to cause any and/or all of the above-described actions to be performed as discussed in connection with Figures 1-3, 4A and 4B. The application/monitoring controller 551 includes circuitry configured to carry out any one or any combination of the various functions described above. Such circuitry could, for example, be entirely hard-wired circuitry (e.g., one or more Application Specific Integrated Circuits - “ASICs”). Depicted in the exemplary embodiment of Figure 5, however, is programmable circuitry, comprising a processor 553 coupled to one or more memory devices 555 (e.g., Random Access Memory, Magnetic Disc Drives, Optical Disk Drives, Read Only Memory, etc.) and to the interface 557 that enables bidirectional communication with the XR headset 501 as discussed above. The memory device(s) 555 store program means 559 (e.g., a set of processor instructions) configured to cause the processor 553 to control other system elements so as to carry out any of the aspects described above. The memory device(s) 555 may also store data (not shown) representing various constant and variable parameters as may be needed by the processor 553 and/or as may be generated when carrying out its functions such as those specified by the program means 559.

The application/monitoring controller 551 need not be embodied as a standalone unit. It can, for example, be implemented in the processing capability of a smart phone or tablet that the XR headset 501 is connected to. In yet other alternatives, the various aspects of exemplary embodiments as described herein can reside in the edge or cloud that a smart phone paired with the XR headset 501 communicates with.

The system can store the predefined behaviors (e.g., gestures or movements) in the same secure area where passwords and fingerprints are stored, so that they cannot be easily bypassed.

Embodiments consistent with the invention provide a number of advantages over existing systems. For example, by providing information to a user who has performed a particular behavioral action in a certain context (e.g., environmental or social), that user (and in some embodiments other users within the same shared XR environment) can learn about cultural differences and how to correct their behavior. This is especially helpful when done in advance of an actual in-person meeting that will be held under the same circumstances (e.g., in the same cultural environment as the shared XR environment).

Additionally, embodiments consistent with the invention create a safe environment for anyone and everyone to use XR (e.g., AR/VR) avatars while pre-emptively blocking attempts at cyberbullying in all types of environments, and while still detecting and educating users about such behavior so that they can avoid it in the future.

The invention has been described with reference to particular embodiments. However, it will be readily apparent to those skilled in the art that it is possible to embody the invention in specific forms other than those of the embodiments described above. Thus, the described embodiments are merely illustrative and should not be considered restrictive in any way. The scope of the invention is defined by the appended claims, rather than only by the preceding description, and all variations and equivalents which fall within the scope of the claims are intended to be embraced therein.