Title:
RIGHTS MANAGEMENT IN AN EXTENDED REALITY ENVIRONMENT
Document Type and Number:
WIPO Patent Application WO/2024/077227
Kind Code:
A1
Abstract:
A user may be represented by an avatar in the extended reality environment. The user may set configuration options such as privacy restrictions, rating controls, or invisibility so that the user can explore the extended reality environment while controlling how other users are able to interact with the user's avatar. A client-side digital rights management (DRM) component can control which elements are rendered for the user. A client-side controller can control the data that is sent to an extended reality server. The user's avatar may be protected by DRM. Thus, other users will be able to see the user's avatar only if granted permission to do so.

Inventors:
SHANWARE AJIT (US)
STEVENS CHARLES (CH)
ISBILIROGLU MEHMET HAKAN (CH)
VAN BOVEN CHRISTIAN (CH)
Application Number:
PCT/US2023/076219
Publication Date:
April 11, 2024
Filing Date:
October 06, 2023
Assignee:
NAGRAVISION SARL (CH)
SHANWARE AJIT (US)
International Classes:
G06F21/60; G06F3/01; G06F3/04815; G06F21/62; G06F21/00
Foreign References:
US20220214743A12022-07-07
Attorney, Agent or Firm:
PERDOK, Monique M. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A system comprising: a memory that stores instructions; and one or more processors configured by the instructions to perform operations comprising: accessing a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; based on the configuration option, modifying the location by applying a rounding function to the location; and sending the modified location to the computing device.

2. The system of claim 1, wherein the operations further comprise: based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

3. The system of claim 1, wherein the operations further comprise: based on a second configuration option, modifying a frequency at which data is sent to the computing device.

4. The system of claim 1, wherein the sending of the indication to the computing device is further based on a location of the avatar in the extended reality environment.

5. The system of claim 1, wherein the operations further comprise: based on a second configuration option, modifying pose data of the avatar by substituting a different pose for the avatar before sending the pose data to the computing device.

6. The system of claim 1, wherein the operations further comprise: based on a second configuration option, modifying pupil data for the user before sending the pupil data to the computing device.

7. The system of claim 1, wherein the operations further comprise: based on a second configuration option, modifying eye focal point data for the user before sending the eye focal point data to the computing device.

8. The system of claim 1, wherein the operations further comprise: based on a second configuration option, modifying motion data for the user before sending the motion data to the computing device.

9. The system of claim 1, wherein the operations further comprise: based on a determination that a second avatar is in a muted gallery, refraining from rendering the second avatar.

10. A non-transitory machine-readable medium that stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: accessing a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; modifying the location by applying a rounding function to the location; and based on the configuration option, sending the modified location to the computing device.

11. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

12. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on a second configuration option, modifying a frequency at which data is sent to the computing device.

13. The non-transitory machine-readable medium of claim 10, wherein the sending of the indication to the computing device is further based on a location of the avatar in the extended reality environment.

14. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on a second configuration option, modifying pose data of the avatar by substituting a different pose for the avatar before sending the pose data to the computing device.

15. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on a second configuration option, modifying pupil data for the user before sending the pupil data to the computing device.

16. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on a second configuration option, modifying eye focal point data for the user before sending the eye focal point data to the computing device.

17. The non-transitory machine-readable medium of claim 10, wherein the operations further comprise: based on a second configuration option, modifying motion data for the user before sending the motion data to the computing device.

18. A method comprising: accessing, by one or more processors of a client device, a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; modifying the location by applying a rounding function to the location; and based on the configuration option, sending the modified location to the computing device.

19. The method of claim 18, further comprising: based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

20. The method of claim 18, further comprising: based on a second configuration option, modifying a frequency at which data is sent to the computing device.

Description:
RIGHTS MANAGEMENT IN AN EXTENDED REALITY ENVIRONMENT

PRIORITY APPLICATION

[0001] This application claims priority to U.S. Provisional Patent Application Serial Number 63/378,713, filed on October 7, 2022, the disclosure of which is incorporated by reference herein in its entirety.

FIELD

[0002] The embodiments discussed herein are related to rights management in an extended reality environment. In some embodiments, a user is enabled to control access rights related to the user’s avatar.

BACKGROUND

[0003] Virtual reality environments are three-dimensional computer-generated simulations that can be interacted with in a seemingly real way by a person using special equipment such as a helmet with a screen inside, gloves fitted with sensors, hand-held devices that can be motion-tracked, or any suitable combination thereof. Thus, the user interacts with a “virtual” world.

[0004] Augmented reality environments make use of a real-world view with superimposed computer-generated images. The computer-generated images may include information about real-world objects, thus “augmenting” the real world. The user may be enabled to interact with the computer-generated portion of the display. This may be termed “mixed reality,” as the user is enabled to interact both with real-world objects and with virtual objects. The term “extended reality” encompasses virtual reality, augmented reality, and mixed reality.

[0005] One form of extended reality is a metaverse. A metaverse is a massively scaled and interoperable network of real-time rendered three-dimensional virtual worlds that can be experienced synchronously and persistently by an effectively unlimited number of users. Each user may have an avatar in the metaverse with a definite location, appearance, identity, and history.

BRIEF DESCRIPTION OF THE DRAWINGS

[0006] Example embodiments will be described and explained with additional specificity and detail through the use of the accompanying drawings.

[0007] FIG. 1 is a network diagram illustrating a network environment suitable for implementing rights management in an extended reality environment, according to some example embodiments.

[0008] FIG. 2 is a block diagram of an extended reality server, according to some example embodiments, suitable for implementing rights management in an extended reality environment.

[0009] FIG. 3 is a block diagram of a client device, according to some example embodiments, suitable for implementing rights management in an extended reality environment.

[0010] FIG. 4 is a block diagram showing data flow in rendering an extended reality environment subject to rights management, according to some example embodiments.

[0011] FIG. 5 is a flowchart illustrating operations of a method suitable for implementing rights management in an extended reality environment, according to some example embodiments.

[0012] FIG. 6 is a flowchart illustrating operations of a method suitable for implementing rights management in an extended reality environment, according to some example embodiments.

[0013] FIG. 7 is a flowchart illustrating operations of a method suitable for implementing rights management in an extended reality environment, according to some example embodiments.

[0014] FIG. 8 illustrates a diagrammatic representation of a machine in an example form of a computing device within which a set of instructions, for causing the machine to perform any one or more of the methods discussed herein, may be executed.

DETAILED DESCRIPTION OF THE DRAWINGS

[0015] Aspects of the disclosure provide systems and methods to manage rights in an extended reality environment. A user may be represented by an avatar in the extended reality environment. The user may set configuration options such as privacy restrictions, rating controls, or invisibility so that the user can explore the extended reality environment while controlling how other users are able to interact with the user’s avatar.

[0016] A client-side digital rights management (DRM) component can control which elements of the extended reality environment are rendered for the user. As used herein, elements of an extended reality environment encompass anything that may be presented to a user, including objects, views, scenes, avatars, music, speech, and any suitable combination thereof. For example, adult content may not be rendered, even if available, based on the user’s rating controls. As another example, virtual content within the user’s avatar’s field-of-view may not be rendered if the user does not have access rights to the content.

[0017] A client-side controller can control the data that is sent to an extended reality server. For example, the server may send to the client descriptions of objects to be rendered for the user to see and the client device may render the view for the user. Thus, if the user’s avatar moves, the client device is able to immediately update the view without waiting for communication with a server over the network. Based on the user’s privacy restrictions, the controller may delay updating the server as the avatar moves. In various example embodiments, more or less additional data for objects outside of the avatar’s current field of view is provided to the client, allowing greater or lesser freedom of movement by the avatar between location updates being provided to the server.

[0018] The user’s avatar may be protected by DRM. Thus, other users will be able to see the user’s avatar only if granted permission to do so. The user may grant permission to a specific group of people (e.g., friends on a social network, colleagues registered with email addresses from the same domain, or fellow members of an organization (e.g., a guild or league) in the extended reality environment). The user may grant permission to be viewed only while the user is at a specified location (e.g., the avatar may be viewed while the user is at work but not while the user is at home), while the avatar is at a specified location (e.g., the avatar may be viewed while in a virtual bar for socializing but not while in virtual shops for shopping), or both.
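By way of illustration and not limitation, a minimal sketch of such a group- and location-based viewing check follows. The ViewPermission structure and can_view function are hypothetical names chosen for this example, not elements defined by the disclosure.

```python
# Illustrative check of whether a viewer may see a DRM-protected avatar,
# based on group membership and the avatar's current location (hypothetical names).
from dataclasses import dataclass, field


@dataclass
class ViewPermission:
    allowed_groups: set = field(default_factory=set)      # e.g., {"friends", "guild"}
    allowed_locations: set = field(default_factory=set)   # e.g., {"virtual_bar"}


def can_view(viewer_groups: set, avatar_location: str, perm: ViewPermission) -> bool:
    """The viewer may see the avatar only if both constraints pass;
    an empty constraint set is treated here as 'no restriction'."""
    group_ok = not perm.allowed_groups or bool(viewer_groups & perm.allowed_groups)
    location_ok = not perm.allowed_locations or avatar_location in perm.allowed_locations
    return group_ok and location_ok


perm = ViewPermission(allowed_groups={"guild"}, allowed_locations={"virtual_bar"})
assert can_view({"guild"}, "virtual_bar", perm)        # fellow guild member, social area
assert not can_view({"guild"}, "virtual_shop", perm)   # same viewer, shopping area
```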

[0019] By use of the systems and methods described herein, users are provided greater control over their experience in extended reality environments, protecting their privacy. Technological systems for extended reality are improved, as the resulting system is more useful to the user than were prior-art systems.

[0020] FIG. 1 is a network diagram illustrating a network environment 100 suitable for implementing rights management in an extended reality environment, according to some example embodiments. The network environment 100 includes extended reality servers 110A and 110B, a database server 120, a digital rights management (DRM) license server 150, client devices 130A, 130B, and 130C, and a network 140. The extended reality servers 110A-110B may be referred to generically as an extended reality server 110.

[0021] The extended reality servers 110A-110B send data to the client devices 130A-130C to allow users of the client devices 130A-130C to experience extended reality. For example, data may be provided to the client devices 130A-130C that allows the client devices 130A-130C to render a virtual reality environment. The extended reality servers 110A and 110B may also determine which data to provide to each of the client devices 130A-130C based on information received from that client device. For example, the client device 130A may provide a location of an avatar of a user within the extended reality environment and the extended reality server 110A may provide data regarding objects that can be viewed from the location of the avatar without providing data regarding other objects, thus reducing the consumption of network bandwidth.

[0022] The data received from one of the client devices 130A-130C may affect the data provided by the extended reality server 110A or 110B to another one of the client devices 130A-130C. For example, the location and orientation of a user’s avatar may be received from the client device 130A and, based on the received location and orientation, data to render the user’s avatar provided to the client device 130B. Thus, users in the extended reality environment are enabled to see each other’s avatars.

[0023] The database server 120 may store data used by the extended reality servers 110A and 110B. For example, data for the entire extended reality environment (e.g., a full-size model of the Earth) may be stored in the database server 120 while a smaller portion of the extended reality environment (e.g., only portions of the model in which one or more user avatars are currently present) is stored in memory of the extended reality server 110A.

[0024] The client devices 130A-130C receive data from the extended reality servers 110A-110B and present the extended reality environment to a user on a display device (e.g., a head-mounted display). The client devices 130A-130C also provide data regarding the user or the user’s avatar to the extended reality servers 110A and 110B. For example, motion data, location data, orientation data, or any suitable combination thereof may be provided. The client devices 130A-130C may determine which data to provide to the extended reality servers 110A-110B based on user privacy settings. For example, the user may choose to provide location data less frequently so that when another user views the user’s avatar, the location of the avatar is only approximate, increasing the user’s privacy.

[0025] The DRM license server 150 maintains license information for digital rights. For example, a user may be permitted to select a particular movement (e.g., a dance) for their avatar only if they have a license for the movement. When a request to perform the movement is received from a client device 130, the extended reality server 110A or 110B sends a request to the DRM license server 150 that identifies the user and the requested movement. In response, the DRM license server 150 indicates whether the movement is licensed for the user. If the movement is licensed, the extended reality server 110A or 110B allows the user to perform the requested movement. Otherwise, the movement is not performed.
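A minimal sketch of the server-side license check described above is given below. The LicenseServerClient class and its is_licensed call are assumptions made for illustration and do not reflect any particular DRM protocol.

```python
# Illustrative server-side DRM check before applying a requested avatar movement.
# LicenseServerClient and is_licensed are hypothetical stand-ins for the DRM license server 150.
class LicenseServerClient:
    def __init__(self, licensed_movements):
        # Maps user_id -> set of licensed movement identifiers.
        self._licensed = licensed_movements

    def is_licensed(self, user_id: str, movement_id: str) -> bool:
        return movement_id in self._licensed.get(user_id, set())


def handle_movement_request(user_id: str, movement_id: str, drm: LicenseServerClient) -> bool:
    """Allow the movement only if the DRM license server reports it as licensed."""
    if drm.is_licensed(user_id, movement_id):
        return True   # the extended reality server applies the movement to the avatar
    return False      # otherwise the movement is not performed


drm = LicenseServerClient({"user_a": {"dance_42"}})
assert handle_movement_request("user_a", "dance_42", drm)      # licensed: performed
assert not handle_movement_request("user_b", "dance_42", drm)  # unlicensed: refused
```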

[0026] As another example, a user may be permitted to stream audio only if licensed. When the user’s avatar enters an area in which a DRM-protected audio segment is being played, the extended reality server 110A or 110B sends a request to the DRM license server 150 that identifies the user and the requested audio. In response, the DRM license server 150 indicates whether the audio is licensed for the user. If the audio is licensed, the extended reality server 110A or 110B streams the audio to the user’s device. Otherwise, the audio is not provided. Thus, a concert, album release, or speech may be made accessible only to licensed users, even if access of the avatar to the location of the event is not restricted.

[0027] The DRM license server 150 may also maintain license information for user-related content. For example, User A may be enabled to control which other users can see User A’s avatar. The client device 130A of User A communicates with the DRM license server 150 via the network 140 to indicate which other users are permitted to view User A’s avatar. When User A’s avatar would be rendered for viewing by User B, the extended reality server 110A or 110B communicates with the DRM license server 150 to determine whether User B has a license to view the avatar. If viewing the avatar is licensed, the extended reality server 110A or 110B renders the avatar. Otherwise, the avatar is not rendered.

[0028] Instead of checking the licenses on the extended reality server 110A or 110B, the DRM-controlled content may be provided to the client device 130 in an encrypted form regardless of whether the content is licensed. The client device 130 may request a decryption key from the DRM license server 150. If the client device 130 is licensed, the DRM license server 150 provides the decryption key. Otherwise, the DRM license server 150 does not provide the decryption key. As a result, an unlicensed user is not permitted to view the DRM-controlled content.
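The following sketch illustrates this client-side variant, in which encrypted objects are delivered to all clients and only licensed clients obtain a decryption key. The request_decryption_key helper and the use of the cryptography package’s Fernet cipher are assumptions made for the example.

```python
# Illustrative client-side decryption of DRM-protected content (hypothetical helper names).
from cryptography.fernet import Fernet


def request_decryption_key(license_records: dict, user_id: str, object_id: str):
    """Stand-in for a request to the DRM license server 150.
    Returns a key only if the (user, object) pair is licensed; otherwise None."""
    return license_records.get((user_id, object_id))


def decode_object(ciphertext: bytes, key):
    """Decrypt the object if a key was granted; unlicensed content stays opaque."""
    if key is None:
        return None   # not decrypted, not provided to the display layer
    return Fernet(key).decrypt(ciphertext)


# The same ciphertext is sent to every client, regardless of license status ...
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"avatar mesh data")
licenses = {("user_a", "avatar_1"): key}   # ... but only user_a holds a license.

assert decode_object(ciphertext, request_decryption_key(licenses, "user_a", "avatar_1")) == b"avatar mesh data"
assert decode_object(ciphertext, request_decryption_key(licenses, "user_b", "avatar_1")) is None
```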

[0029] The extended reality servers 110A-110B, the database server 120, the DRM license server 150, and the client devices 130A-130C may each be implemented in a computer system, in whole or in part, as described below with respect to FIG. 8.

[0030] Any of the machines or devices shown in FIG. 1 may be implemented in a general-purpose computer modified (e.g., configured or programmed) by software to be a special-purpose computer to perform the functions described herein for that machine, database, or device. For example, a computer system able to implement any one or more of the methodologies described herein is discussed below with respect to FIG. 8. Any or all of the devices 110A-130C may include a database or be in communication with a database server that provides access to a database. As used herein, a “database” is a data storage resource and may store data structured as a text file, a table, a spreadsheet, a relational database (e.g., an object-relational database), a triple store, a hierarchical data store, a document-oriented NoSQL database, a file store, or any suitable combination thereof. The database may be an in-memory database. Moreover, any two or more of the machines, databases, or devices illustrated in FIG. 1 may be combined into a single machine, database, or device, and the functions described herein for any single machine, database, or device may be subdivided among multiple machines, databases, or devices.

[0031] The extended reality servers 110A-110B, the database server 120, and the client devices 130A-130C may be connected by the network 140. The network 140 may be any network that enables communication between or among machines, databases, and devices. Accordingly, the network 140 may be a wired network, a wireless network (e.g., a mobile or cellular network), or any suitable combination thereof. The network 140 may include one or more portions that constitute a private network, a public network (e.g., the Internet), or any suitable combination thereof.

[0032] Though two extended reality servers 110A-110B and three client devices 130A-130C are shown in FIG. 1, more devices are contemplated. Thus, a client device may be in communication with any number of extended reality servers and an extended reality server may be in communication with any number of client devices, allowing for a global network of users in multiple extended reality environments. Accordingly, the systems and methods described herein enable a platform by which extended reality providers and consumers can interface.

[0033] FIG. 2 is a block diagram 200 of the extended reality server 110A, according to some example embodiments, suitable for implementing rights management in an extended reality environment. The extended reality server 110A is shown as including a three-dimensional (3D) rendering engine 210, an encoder 220, a control module 230, data storage 240, a network interface 250, and a gallery module 260, all configured to communicate with each other (e.g., via a bus, shared memory, or a switch). Any one or more of the modules described herein may be implemented using hardware (e.g., a processor of a machine). For example, any module described herein may be implemented by a processor configured to perform the operations described herein for that module. Moreover, any two or more of these modules may be combined into a single module, and the functions described herein for a single module may be subdivided among multiple modules. Furthermore, according to various example embodiments, modules described herein as being implemented within a single machine, database, or device may be distributed across multiple machines, databases, or devices.

[0034] The network interface 250 receives data sent to the extended reality server 110A and transmits data from the extended reality server 110A. For example, the network interface 250 may receive, from the encoder 220, extended reality data for display on a display device of one of the client devices 130A-130C. Communications sent and received by the network interface 250 may be intermediated by the network 140.

[0035] The control module 230 controls the extended reality environment. For example, a day/night cycle may be implemented that changes the ambient lighting level of the 3D environment rendered by the 3D rendering engine 210. As another example, non-player characters (NPCs) in an extended reality game may move or perform actions in the extended reality environment. Based on data accessed from the data storage 240 and commands received from the control module 230, the 3D rendering engine 210 generates a description of a 3D environment. The description of the 3D environment is provided to the encoder 220, which converts the description into a network-friendly format (e.g., H.264 format, Graphics Language Transmission Format (GLTF) format, or any suitable combination thereof) and transmits the encoded data over the network 140 to the client devices 130A-130C.

[0036] Different client devices 130 may receive different data based on subscription level. For example, the client device 130A may be associated with a user account receiving a free or basic level of service and, based on the level of service, low-resolution data may be provided to the client device 130A. As another example, the client device 130B may be associated with a user account subscribed to a premium level of service and, based on the level of service, high-resolution data may be provided to the client device 130B. Similarly, premium content may not be provided to a non-premium subscriber. For example, a basic subscriber may not have permission to view a virtual concert. Accordingly, the basic subscriber would see an empty stage at the virtual concert venue. By contrast, the premium subscriber would receive the data objects that are rendered for display as the band members, musical gear, and so on.
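A short sketch of selecting scene data by subscription level follows. The tier names and asset records are invented for the example and are not part of the described system.

```python
# Illustrative selection of scene assets by subscription level (hypothetical data layout).
SCENE_ASSETS = {
    "stage": {"tier": "basic", "low_res": "stage_low.gltf", "high_res": "stage_high.gltf"},
    "band":  {"tier": "premium", "low_res": "band_low.gltf", "high_res": "band_high.gltf"},
}


def assets_for(subscription: str) -> list:
    """Basic subscribers receive low-resolution, non-premium assets only;
    premium subscribers receive high-resolution versions of everything."""
    selected = []
    for asset in SCENE_ASSETS.values():
        if asset["tier"] == "premium" and subscription != "premium":
            continue   # e.g., a basic subscriber sees an empty stage at the concert venue
        selected.append(asset["high_res"] if subscription == "premium" else asset["low_res"])
    return selected


assert assets_for("basic") == ["stage_low.gltf"]
assert assets_for("premium") == ["stage_high.gltf", "band_high.gltf"]
```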

[0037] The gallery module 260 creates a gallery of muted or anonymous participants. Participants in the gallery are able to see objects in the extended reality environment, but cannot affect the extended reality environment. For example, the avatars of the muted participants may not be rendered for other participants, voice chat originating from the muted participants may not be sent and played to other participants, votes taken in the extended reality environment may exclude the muted participants, or any suitable combination thereof.

[0038] The extended reality server 110A may store scene data (i.e., data for the extended reality environment that is separate from user avatar data), avatar data, user preferences, digital rights management (DRM) keys, or any suitable combination thereof in the data storage 240.

[0039] FIG. 3 is a block diagram 300 of a client device 130, according to some example embodiments, suitable for implementing rights management in an extended reality environment. The client device 130 includes a client DRM and cloaker module 310, a controller 320, an event reporter 330, a user interface module 340, a traffic inspector 350, a uniform display layer 360, a data storage 370, a decoder 380, and a network interface 390.

[0040] The network interface 390 enables the client device 130 to connect to the network 140. Via the network 140, the client device 130 communicates with the extended reality servers 110A-110B, the DRM license server 150, or any suitable combination thereof.

[0041] The user interface module 340 enables a user to set configuration options such as privacy restrictions, rating controls, invisibility within the extended reality environment, or any suitable combination thereof. The configuration options are stored in data storage 370. The user interface module 340 may also receive inputs to control the user’s avatar in the extended reality environment. User inputs may be received by keyboard, game controller, headset, motion controllers (e.g., accelerometers), detection of user movement (e.g., via cameras watching the user), or any suitable combination thereof.

[0042] The user interface module 340 may also be updated with information regarding active and muted users. For example, the user interface module 340 may cause display of a number that indicates how many users are active, how many users are muted, or both.

[0043] The event reporter 330 collates information for the extended reality server 110A and, together with the controller 320, manages status and event information that is returned to the extended reality server 110A. Example events include interactions of the user’s avatar with the environment (e.g., a change in the avatar’s location or orientation in the extended reality environment), interactions of the user’s avatar with avatars of other users, and status changes of the user or the user’s avatar (e.g., a change in visibility/invisibility status of the user’s avatar or a change in pose of the user’s avatar).

[0044] The client DRM and cloaker module 310 determines, based on the user configuration options, which object keys to provide to the DRM license server 150. For example, if the user configuration options provide that the user’s avatar should not be visible to any other users, the client DRM and cloaker module 310 may refrain from providing a decryption key for the avatar to the DRM license server 150. As another example, the client DRM and cloaker may send an indication to the DRM license server 150 that the decryption key for the user’s avatar is not to be provided to any other users.

[0045] In some example embodiments, the DRM license server 150 may override the instruction sent by the DRM and cloaker module 310. For example, in an area intended for children (e.g., a virtual park), invisibility requests by adults (e.g., users over 18 years of age) may be rejected, ensuring that a user’s presence cannot be hidden in that area.

[0046] The decoder 380 receives data from the extended reality server 110A via the network interface 390. The received data may be encrypted or otherwise encoded. The decoder 380 decodes at least some of the data and provides the decoded data to the uniform display layer 360. The decoder 380 may receive decryption keys from the DRM license server 150. Data objects received from the extended reality server 110A for which the decoder 380 does not have decryption keys are not decrypted and are not provided to the uniform display layer 360.

[0047] Data generated based on user actions is provided from the controller 320 to the traffic inspector 350. Based on the user configuration options, the traffic inspector 350 will choose whether or not to provide data regarding the user actions to the extended reality servers 110A-110B. In some example embodiments, all data regarding the user actions is provided to the uniform display layer 360 even though some of the data regarding the user actions is not provided to the extended reality servers 110A-110B. In these embodiments, the user actions affect the display of the extended reality environment by the uniform display layer 360 without affecting the display for other users. For example, the user may change the location of an avatar, causing the uniform display layer 360 to render the extended reality environment from an updated viewpoint. Meanwhile, the traffic inspector 350 may choose not to send the updated location data to the extended reality servers 110A-110B. As a result, the extended reality servers 110A-110B are unable to inform other client devices 130 as to the new location of the user’s avatar. Thus, the uniform display layer 360 of another client device 130 that is rendering the user’s avatar in a particular location will not show the user’s avatar as having changed locations.
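A minimal sketch of this gating behavior is shown below. The option name report_location and the Recorder stand-ins for the uniform display layer 360 and the server connection are assumptions made for illustration.

```python
# Illustrative traffic-inspector gate: local rendering always happens,
# server reporting only when the privacy options allow it (hypothetical names).
class Recorder:
    """Stand-in for the uniform display layer 360 or the network interface 390."""
    def __init__(self):
        self.received = []

    def apply(self, action):   # display-layer side
        self.received.append(action)

    def send(self, action):    # server side
        self.received.append(action)


def handle_user_action(action: dict, privacy_options: dict, display_layer, server) -> None:
    """Always update the local view; forward the action to the server only when allowed."""
    display_layer.apply(action)   # local rendering is never blocked
    if action["type"] == "move" and not privacy_options.get("report_location", True):
        return   # other clients keep seeing the avatar at its last reported location
    server.send(action)


display, server = Recorder(), Recorder()
handle_user_action({"type": "move", "to": (3, 0, 7)}, {"report_location": False}, display, server)
assert display.received and not server.received   # view updated locally, server not told
```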

[0048] The uniform display layer 360 may incorporate changes in user status from private to public or vice versa. When a user (or the user’s avatar) is in a private status, the uniform display layer 360 may choose not to render the avatar, even if data for the avatar is received via the network interface 390 and decoded by the decoder 380. When a user (or the user’s avatar) is in a public status, the uniform display layer 360 renders the avatar if data is available.

[0049] The traffic inspector 350 enforces privacy by managing all traffic flow from the client to the extended reality servers 110A-110B by inspection of the traffic, ensuring that the underlying reporting adheres to the user’s privacy settings. In some example embodiments, the traffic inspector 350 is a hypertext transport protocol (HTTP) proxy. Alternatively, application programming interfaces (APIs) may be used for communication with the traffic inspector 350.

[0050] FIG. 4 is a block diagram showing data flow 400 in rendering an extended reality environment subject to rights management, according to some example embodiments. The extended reality server 110 of FIG. 1 composes the background 405 and one or more objects (e.g., the objects 410A and 410B) to generate an extended reality environment.

[0051] The objects may be encrypted. The DRM license server 150 provides the keys 415A, 415B, and 415C to the extended reality servers 110A-110B to allow the extended reality servers 110A-110B to decrypt the scene and the objects and perform the composition. For example, the object 410A may comprise a user avatar. Without the key 415B, the extended reality servers 110A-110B may be unable to access location data of the avatar, and thus be unable to compose an extended reality environment that comprises the avatar in its location.

[0052] The multiplexer 425 directs data comprising the background 405, the objects 410A- 410B, and the keys 415A-415C to the client devices 130 of FIG. 3. By use of the multiplexer 425, the data is selectively directed to the client devices 130 such that a first one of the client devices 130 receives different data from a second one of the client devices 130. For example, a large number of objects may exist in the extended reality environment but, based on a location of an avatar associated with a client device 130, only data for a visible subset of the objects are transmitted via the network 140 to the client device 130. As another example, the extended reality servers 110A-110B may determine, via communication with the DRM license server 150, that a particular client device 130 does not have a license to view an object and, in response to this determination, the multiplexer 425 may refrain from transmitting data for the object to the particular client device 130.

[0053] A demultiplexer 430 of the client device 130 receives the data sent from the extended reality servers 110A-110B. Keys 435A, 435B, and 435C for the received objects are received by the client device 130 from the DRM license server 150. The keys 435A-435C are used to decrypt the encrypted objects received from the extended reality servers 110A-110B. Any received objects for which keys are not received are not decrypted. The decrypted scene and objects, along with any unencrypted scene and objects, are provided to the display layer 445 for rendering on a display device of the client device 130. Based on the user’s rating control, some objects may not be rendered. For example, adult content may not be rendered for a user that has set a rating control to avoid such content.

[0054] Thus, the presentation of the background and objects by the display layer 445 may include, for each encrypted object: determining if a decryption key is available and, if the decryption key is available, decrypting and rendering the encrypted object. In some example embodiments, the background 405 is sent over the network 140 without being encrypted. Accordingly, the display layer 445 may present a mix of decrypted and unencrypted objects.

[0055] FIG. 5 is a flowchart illustrating operations of a method 500 suitable for implementing rights management in an extended reality environment, according to some example embodiments. The method 500 includes operations 510, 520, 530, and 540. By way of example and not limitation, the method 500 is described below as being performed by the client device 130A of FIG. 1 using the modules of FIG. 3.

[0056] In operation 510, the traffic inspector 350 accesses a configuration option that controls a privacy setting for a user in an extended reality environment. For example, the configuration option may indicate that the user’s avatar should not be visible to other users.

[0057] The traffic inspector 350, in operation 520, accesses data for the user. For example, data for the user’s avatar that indicates the avatar’s location and appearance may be accessed.

[0058] Based on the configuration option, in operation 530 the traffic inspector 350 modifies the data for the user. For example, the data for the avatar may be modified to include a flag that indicates that the avatar should not be rendered for other users. As another example, the data for the avatar may be modified by modifying the location of the avatar (e.g., by applying a rounding function to the location to report the location at a granularity of 1, 10, or 100 meters instead of at a finer granularity). As a further example, the data for the avatar may be modified by modifying pose data of the avatar (e.g., by substituting a different pose for the avatar). As still another example, the frequency with which the data is sent to the server may be modified (e.g., decreased). For example, instead of sending updated location data for the avatar to a server every 100 ms, the location data may be sent every 5 s.
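A sketch of two of these modifications (coarsening the location with a rounding function and reducing the reporting frequency) is shown below. The granularity and interval values, as well as the ThrottledReporter name, are illustrative assumptions rather than required parameters.

```python
# Illustrative location coarsening and rate limiting before reporting to the server.
import time


def round_location(location, granularity_m: float = 10.0):
    """Snap each coordinate to a coarser grid (e.g., 10 m) before it is reported."""
    return tuple(round(c / granularity_m) * granularity_m for c in location)


class ThrottledReporter:
    """Send location updates no more often than the configured interval."""

    def __init__(self, server, interval_s: float = 5.0, granularity_m: float = 10.0):
        self.server = server
        self.interval_s = interval_s
        self.granularity_m = granularity_m
        self._last_sent = float("-inf")

    def report(self, location, now: float = None):
        now = time.monotonic() if now is None else now
        if now - self._last_sent < self.interval_s:
            return   # skipped: e.g., updates reduced from every 100 ms to every 5 s
        self.server.send(round_location(location, self.granularity_m))
        self._last_sent = now


assert round_location((123.4, 0.0, 67.8)) == (120.0, 0.0, 70.0)
```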

[0059] In addition to (or instead of) modifying data relating to the user’s avatar, data that relates to the user himself or herself may be modified. For example, eye tracking data for the user may be gathered and sent to the server. Based on the configuration option, eye tracking data such as pupil data for the user and eye focal point data for the user may be modified before sending the eye focal point data to the server. As another example, motion data (e.g., head motion, arm motion, hand motion, or any suitable combination thereof) of the user may be modified.

[0060] The modification of the data for the user may be based on the location of the user’s avatar in the extended reality environment. For example, the configuration option may indicate that the user’s avatar is to be visible while in a location where the user wishes to interact with other users (e.g., a city, a central hub, or a player-vs-player arena) but be invisible in locations where the user does not wish to interact with other users (e.g., a player-versus-environment area, a shopping area, or a home area).

[0061] In operation 540, the traffic inspector 350 sends the modified data to a server (e.g., the extended reality server 110A via the network 140). Thus, the data sent to the server is in compliance with the user’s preferences as indicated by the configuration options. The modified data may allow the user to accomplish the user’s goals within the extended reality environment while protecting the user’s privacy. For example, sending avatar orientation or eye focal point information that is within 30 degrees of the avatar’s actual orientation or the user’s actual focus point may cause the server to provide scene data that is sufficient for the user to view an item of interest without informing the server precisely which of several items being rendered is the item of interest to the user.
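The following sketch illustrates coarsening an orientation angle so that the reported value stays within 30 degrees of the true value, as in the example above. The bucket size and function name are assumptions made for illustration.

```python
# Illustrative coarsening of a yaw angle before it is reported (hypothetical helper).
def coarsen_yaw(yaw_degrees: float, bucket_degrees: float = 30.0) -> float:
    """Snap a yaw angle to the centre of a 30-degree bucket, hiding exactly which of
    several nearby items the user is looking at while staying within 30 degrees."""
    bucket = int(yaw_degrees % 360.0 // bucket_degrees)
    return bucket * bucket_degrees + bucket_degrees / 2.0


assert coarsen_yaw(17.0) == 15.0     # true yaw 17 degrees, reported 15 degrees
assert coarsen_yaw(359.0) == 345.0   # wraps correctly near 360 degrees
```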

[0062] FIG. 6 is a flowchart illustrating operations of a method 600 suitable for implementing rights management in an extended reality environment, according to some example embodiments. The method 600 includes operations 610, 620, 630, and 640. By way of example and not limitation, the method 600 is described below as being performed by the extended reality server 110A of FIG. 1, using the modules of FIG. 2.

[0063] In operation 610, the extended reality server 110A receives, from a first client device (e.g., the client device 130A), avatar data and a privacy setting. For example, the avatar data may indicate the location and appearance of an avatar of a user of the client device 130A. The privacy setting may indicate a set of users that are allowed to view the avatar, a set of users that are not allowed to view the avatar, or both. The sets of users may be identified by unique identifiers (e.g., user names or ID numbers), by other criteria (e.g., location within the extended reality environment, type of account with the extended reality server 110A (e.g., free, paid, or premium)), or any suitable combination thereof.

[0064] The encoder 220 of the extended reality server 110A encrypts the avatar data in operation 620. The encrypted avatar data is provided along with unencrypted extended reality data to a plurality of client devices (e.g., the client devices 130B and 130C). The encoder 220 may prepare the data for transmission using GLTF or Universal Scene Description (USD) over HTTP.

[0065] In operation 630, based on the privacy setting, the extended reality server 110A provides a decryption key for the encrypted avatar data to a subset of the plurality of client devices. For example, the privacy setting may indicate that a user of the client device 130B is permitted to view the avatar and a user of the client device 130C is not. Accordingly, the decryption key is transmitted via the network 140 to the client device 130B and not transmitted to the client device 130C.
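A possible form of operation 630 is sketched below. The encoding of the privacy setting as an allow-list of user identifiers and the send_key helper are assumptions made for illustration.

```python
# Illustrative key distribution for operation 630: the decryption key for the
# avatar data is sent only to clients whose users are allowed by the privacy setting.
def send_key(client_id: str, key: bytes) -> None:
    """Stand-in for transmitting a key to a specific client device over the network 140."""
    print(f"decryption key sent to client {client_id}")


def distribute_avatar_key(key: bytes, privacy_setting: dict, connected_clients: dict) -> list:
    """`connected_clients` maps client_id -> user_id; returns the clients that received the key."""
    allowed_users = set(privacy_setting.get("viewers_allowed", []))
    granted = []
    for client_id, user_id in connected_clients.items():
        if user_id in allowed_users:
            send_key(client_id, key)
            granted.append(client_id)
    return granted


clients = {"client_130B": "user_b", "client_130C": "user_c"}
setting = {"viewers_allowed": ["user_b"]}
assert distribute_avatar_key(b"k", setting, clients) == ["client_130B"]
```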

[0066] FIG. 7 is a flowchart illustrating operations of a method 700 suitable for implementing rights management in an extended reality environment, according to some example embodiments. The method 700 includes operations 710, 720, 730, and 740. By way of example and not limitation, the method 700 is described below as being performed by the extended reality server 110A of FIG. 1, using the modules of FIG. 2.

[0067] In operation 710, the extended reality server 110A receives, from a first client device (e.g., the client device 130A), avatar data and a privacy setting. For example, the avatar data may indicate the location and appearance of an avatar of a user of the client device 130A. The privacy setting may indicate a set of users that are allowed to view the avatar, a set of users that are not allowed to view the avatar, or both. The sets of users may be identified by unique identifiers (e.g., usernames or ID numbers), by other criteria (e.g., location within the extended reality environment, type of account with the extended reality server 110A (e.g., free, paid, or premium)), or any suitable combination thereof.

[0068] The gallery module 260 of the extended reality server 110A changes settings for inclusion in extended reality data according to the privacy setting in operation 720. For example, if the privacy setting indicates that the avatar should be muted and invisible to other users, the inclusion settings are changed to indicate that the avatar is in a gallery of muted users and cannot be seen or otherwise affect the extended reality environment. As another example, if the privacy setting indicates that a currently muted avatar is no longer muted, the avatar is removed from the gallery of muted users, becomes visible to other users, and is enabled to affect the extended reality environment. Settings that may be affected by the privacy setting include the ability to share, vote, speak, be seen, or any suitable combination thereof.

[0069] In operation 730, based on inclusion settings for a plurality of client devices, the extended reality server 110A selects avatar data to provide to a second client device. For example, the extended reality environment may include one hundred users, twenty of which have provided, in repetitions of operation 710, privacy settings indicating their avatars are muted. As a result, in operation 730, the extended reality server 110 selects the avatar data for the eighty non-muted users and excludes the avatar data for the twenty muted users.

[0070] The selected avatar data is provided, in operation 740, to a plurality of client devices (e.g., the client devices 130B and 130C). For example, the encoder 220 may prepare the data for transmission using GLTF or Universal Scene Description (USD) over HTTP.
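A minimal sketch of the selection step in operation 730 follows. The avatar record layout and the muted-gallery set are assumptions chosen to mirror the numeric example above.

```python
# Illustrative selection of avatar data, excluding avatars in the gallery of muted users.
def select_visible_avatars(avatars: list, muted_gallery: set) -> list:
    """Return only the avatars whose users are not in the muted gallery."""
    return [a for a in avatars if a["user_id"] not in muted_gallery]


avatars = [{"user_id": f"u{i}"} for i in range(100)]   # one hundred users
muted = {f"u{i}" for i in range(20)}                   # twenty muted via repetitions of operation 710
visible = select_visible_avatars(avatars, muted)
assert len(visible) == 80                              # data for the eighty non-muted users
```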

EXAMPLE MACHINE ARCHITECTURE AND MACHINE-READABLE MEDIUM

[0071] FIG. 8 is a block diagram of a machine in the example form of a computer system 800 within which instructions 824 may be executed for causing the machine to perform any one or more of the methodologies discussed herein. In alternative embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a personal digital assistant (PDA), a cellular telephone, a web appliance, a network router, switch, or bridge, or any machine capable of executing instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

[0072] The example computer system 800 includes a processor 802 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 804, and a static memory 806, which communicate with each other via a bus 808. The computer system 800 may further include a video display unit 810 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 800 also includes an alphanumeric input device 812 (e.g., a keyboard or a touch-sensitive display screen), a user interface (UI) navigation (or cursor control) device 814 (e.g., a mouse), a storage unit 816, a signal generation device 818 (e.g., a speaker), and a network interface device 820.

MACHINE-READABLE MEDIUM

[0073] The storage unit 816 includes a machine-readable medium 822 on which is stored one or more sets of data structures and instructions 824 (e.g., software) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 824 may also reside, completely or at least partially, within the main memory 804 and/or within the processor 802 during execution thereof by the computer system 800, with the main memory 804 and the processor 802 also constituting machine-readable media 822.

[0074] While the machine-readable medium 822 is shown in an example embodiment to be a single medium, the term “machine-readable medium” may include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more instructions 824 or data structures. The term “machine-readable medium” shall also be taken to include any tangible medium that is capable of storing, encoding, or carrying instructions 824 for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such instructions 824. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, and optical and magnetic media. Specific examples of machine-readable media 822 include non-volatile memory, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and compact disc read-only memory (CD-ROM) and digital versatile disc read-only memory (DVD-ROM) disks. A machine-readable medium is not a transmission medium.

TRANSMISSION MEDIUM

[0075] The instructions 824 may further be transmitted or received over a communications network 826 using a transmission medium. The instructions 824 may be transmitted using the network interface device 820 and any one of a number of well-known transfer protocols (e.g., hypertext transport protocol (HTTP)). Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, mobile telephone networks, plain old telephone service (POTS) networks, and wireless data networks (e.g., WiFi and WiMax networks). The term “transmission medium” shall be taken to include any intangible medium that is capable of storing, encoding, or carrying instructions 824 for execution by the machine, and includes digital or analog communications signals or other intangible media to facilitate communication of such software.

[0076] Terms used herein and especially in the appended claims (e.g., bodies of the appended claims) are generally intended as “open” terms (e.g., the term “including” may be interpreted as “including, but not limited to,” the term “having” may be interpreted as “having at least,” the term “includes” may be interpreted as “includes, but is not limited to,” etc.).

[0077] Additionally, if a specific number of an introduced claim recitation is intended, such an intent will be explicitly recited in the claim, and in the absence of such recitation no such intent is present. For example, as an aid to understanding, the following appended claims may contain usage of the introductory phrases “at least one” and “one or more” to introduce claim recitations. However, the use of such phrases may not be construed to imply that the introduction of a claim recitation by the indefinite articles “a” or “an” limits any particular claim containing such an introduced claim recitation to embodiments containing only one such recitation, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an” (e.g., “a” and/or “an” may be interpreted to mean “at least one” or “one or more”); the same holds true for the use of definite articles used to introduce claim recitations.

[0078] In addition, even if a specific number of an introduced claim recitation is explicitly recited, those skilled in the art will recognize that such a recitation may be interpreted to mean at least the recited number (e.g., the bare recitation of “two recitations,” without other modifiers, means at least two recitations, or two or more recitations). Further, in those instances where a convention analogous to “at least one of A, B, and C, etc.” or “one or more of A, B, and C, etc.” is used, in general such a construction is intended to include A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B, and C together, etc. For example, the use of the term “and/or” is intended to be construed in this manner.

[0079] Further, any disjunctive word or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, may be understood to contemplate the possibilities of including one of the terms, some of the terms, or all of the terms. For example, the phrase “A or B” may be understood to include the possibilities of “A” or “B” or “A and B.”

[0080] Embodiments described herein may be implemented using computer-readable media for carrying or having stored thereon computer-executable instructions or data structures. Such computer-readable media may be any available media that may be accessed by a general-purpose or special-purpose computer. By way of example, and not limitation, such computer-readable media may include non-transitory computer-readable storage media including Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Compact Disc Read-Only Memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, flash memory devices (e.g., solid-state memory devices), or any other storage medium which may be used to carry or store desired program code in the form of computer-executable instructions or data structures and which may be accessed by a general-purpose or special-purpose computer. Combinations of the above may also be included within the scope of computer-readable media.

[0081] Computer-executable instructions may include, for example, instructions and data which cause a general-purpose computer, special-purpose computer, or special-purpose processing device (e.g., one or more processors) to perform a certain function or group of functions. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

[0082] As used herein, the terms “module” or “component” may refer to specific hardware implementations configured to perform the operations of the module or component and/or software objects or software routines that may be stored on and/or executed by general-purpose hardware (e.g., computer-readable media, processing devices, etc.) of the computing system. In some embodiments, the different components, modules, engines, and services described herein may be implemented as objects or processes that execute on the computing system (e.g., as separate threads). While some of the system and methods described herein are generally described as being implemented in software (stored on and/or executed by general-purpose hardware), specific hardware implementations or a combination of software and specific hardware implementations are also possible and contemplated.

[0083] All examples and conditional language recited herein are intended as pedagogical objects to aid the reader in understanding the inventive subject matter and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Although embodiments of the present disclosure have been described in detail, it may be understood that various changes, substitutions, and alterations may be made thereto without departing from the scope of the present disclosure.

[0084] Example 1 is a system comprising: a memory that stores instructions; and one or more processors configured by the instructions to perform operations comprising: accessing a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; based on the configuration option, modifying the location by applying a rounding function to the location; and sending the modified location to the computing device.

[0085] In Example 2, the subject matter of Example 1, wherein the operations further comprise: based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

[0086] In Example 3, the subject matter of Examples 1-2, wherein the operations further comprise: based on a second configuration option, modifying a frequency at which data is sent to the computing device.

[0087] In Example 4, the subject matter of Examples 1-3, wherein the sending of the indication to the computing device is further based on a location of the avatar in the extended reality environment.

[0088] In Example 5, the subject matter of Examples 1-4, wherein the operations further comprise: based on a second configuration option, modifying pose data of the avatar by substituting a different pose for the avatar before sending the pose data to the computing device.

[0089] In Example 6, the subject matter of Examples 1-5, wherein the operations further comprise: based on a second configuration option, modifying pupil data for the user before sending the pupil data to the computing device.

[0090] In Example 7, the subject matter of Examples 1-6, wherein the operations further comprise: based on a second configuration option, modifying eye focal point data for the user before sending the eye focal point data to the computing device.

[0091] In Example 8, the subject matter of Examples 1-7, wherein the operations further comprise: based on a second configuration option, modifying motion data for the user before sending the motion data to the computing device.

[0092] In Example 9, the subject matter of Examples 1-8, wherein the operations further comprise: based on a determination that a second avatar is in a muted gallery, refraining from rendering the second avatar.

[0093] Example 10 is a non-transitory machine-readable medium that stores instructions that, when executed by one or more processors, cause the one or more processors to perform operations comprising: accessing a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; modifying the location by applying a rounding function to the location; and based on the configuration option, sending the modified location to the computing device.

[0094] In Example 11, the subject matter of Example 10, wherein the operations further comprise: based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

[0095] In Example 12, the subject matter of Examples 10-11, wherein the operations further comprise: based on a second configuration option, modifying a frequency at which data is sent to the computing device.

[0096] In Example 13, the subject matter of Examples 10-12, wherein the sending of the indication to the computing device is further based on a location of the avatar in the extended reality environment.

[0097] In Example 14, the subject matter of Examples 10-13, wherein the operations further comprise: based on a second configuration option, modifying pose data of the avatar by substituting a different pose for the avatar before sending the pose data to the computing device.

[0098] In Example 15, the subject matter of Examples 10-14, wherein the operations further comprise: based on a second configuration option, modifying pupil data for the user before sending the pupil data to the computing device.

[0099] In Example 16, the subject matter of Examples 10-15, wherein the operations further comprise: based on a second configuration option, modifying eye focal point data for the user before sending the eye focal point data to the computing device.

[0100] In Example 17, the subject matter of Examples 10-16, wherein the operations further comprise: based on a second configuration option, modifying motion data for the user before sending the motion data to the computing device.

[0101] Example 18 is a method comprising: accessing, by one or more processors of a client device, a configuration option that controls a privacy setting for a user represented by an avatar in an extended reality environment; accessing data for the user that indicates a location of the avatar in the extended reality environment; modifying the location by applying a rounding function to the location; and based on the configuration option, sending the modified location to the computing device.

[0102] In Example 19, the subject matter of Example 18 includes, based on the configuration option, sending an indication to a computing device that the avatar for the user should not be visible to other users.

[0103] In Example 20, the subject matter of Examples 18-19 includes, based on a second configuration option, modifying a frequency at which data is sent to the computing device.

[0104] Example 21 is at least one machine-readable medium including instructions that, when executed by processing circuitry, cause the processing circuitry to perform operations to implement any of Examples 1-20.

[0105] Example 22 is an apparatus comprising means to implement any of Examples 1-20.

[0106] Example 23 is a system to implement any of Examples 1-20.

[0107] Example 24 is a method to implement any of Examples 1-20.