


Title:
A SYSTEM AND METHOD FOR GENERATING A 3D AVATAR
Document Type and Number:
WIPO Patent Application WO/2022/159038
Kind Code:
A1
Abstract:
The present invention provides a system and method for generating a 3D avatar, substantially in real-time. The system and method can be used for a variety of applications, for example, engagement sessions at pre-defined venues, virtual apparel/wearable device fittings, and the like. It should be noted that the pre-defined venues can be imaginary environments, digitally rendered real environments or actual environments. In some aspects, the 3D avatars are able to provide a representation of users in a particular environment.

Inventors:
BEH EE (SG)
LIM KEAN (SG)
Application Number:
PCT/SG2022/050034
Publication Date:
July 28, 2022
Filing Date:
January 25, 2022
Assignee:
BUZZ ARVR PTE LTD (SG)
International Classes:
G06T13/40; G06Q30/06; G06T19/00
Foreign References:
US20140033044A1 (2014-01-30)
US20130080287A1 (2013-03-28)
US20150220854A1 (2015-08-06)
US20150123967A1 (2015-05-07)
KR20130032620A (2013-04-02)
Attorney, Agent or Firm:
TAN, Wen, Min, Desmond (SG)
Claims:
THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1. A system for generating a 3D avatar, the system including one or more data processors configured to: capture, at a device, images of a user and a surrounding environment of the user; transmit, from the device, data of the images; receive, at a central server, the data; process, at the central server, the data; initiate, at the device, a background on which the 3D avatar is overlaid; display, at the device, the 3D avatar and the background; and control, at the device, the 3D avatar to enable interaction with the background, wherein the device is selected from either a user device or a display device.

2. The system of claim 1, the one or more data processors further configured to: transmit, from the device, user credentials to access a third party portal; control, at the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and record, at the device, the 3D avatar interacting with the background.

3. The system of either claim 1 or 2, wherein the images comprise frontal and side views of the user.

4. The system of claim 3, wherein physical attributes, clothing and accessories of the user are obtained from the images.

5. The system of claim 4, wherein processing of the data at the central server enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.

6. The system of claim 5, wherein the processing of the data includes use of machine learning.

7. The system of any of claims 1 to 6, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.

8. The system of any of claims 1 to 7, wherein interaction enhances the user’s perception of immersion in the background.

9. A data processor implemented method for generating a 3D avatar, the method comprising: capturing, at a device, images of a user and a surrounding environment of the user; transmitting, from the device, data of the images; receiving, at a central server, the data; processing, at the central server, the data; initiating, at the device, a background on which the 3D avatar is overlaid; displaying, at the device, the 3D avatar and the background; and controlling, at the device, the 3D avatar to enable interaction with the background, wherein the device is selected from either a user device or a display device.

10. The method of claim 9, further comprising: transmitting, from the device, user credentials to access a third party portal; controlling, at the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and recording, at the device, the 3D avatar interacting with the background.

11. The method of either claim 9 or 10, wherein the images comprise frontal and side views of the user.

12. The method of claim 11, wherein physical attributes, clothing and accessories of the user are obtained from the images.

13. The method of claim 12, wherein processing of the data at the central server enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.

14. The method of claim 13, wherein the processing of the data includes use of machine learning.

15. The method of any of claims 9 to 14, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.

16. The method of any of claims 9 to 15, wherein interaction enhances the user’s perception of immersion in the background.

17. A user device configured for generating a 3D avatar, the user device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.

18. The user device of claim 17, the one or more data processors further configured to: transmit user credentials to access a third party portal; control at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and record the 3D avatar interacting with the background.

19. The user device of either claim 17 or 18, wherein the images comprise frontal and side views of the user.

20. The user device of claim 19, wherein physical attributes, clothing and accessories of the user are obtained from the images.

21. The user device of any of claims 17 to 20, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.

22. The user device of any of claims 17 to 21, wherein interaction enhances the user’s perception of immersion in the background.

23. A display device configured for generating a 3D avatar, the display device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.

24. The display device of claim 23, the one or more data processors further configured to: transmit user credentials to access a third party portal; control at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal; and record the 3D avatar interacting with the background.

25. The display device of either claim 23 or 24, wherein the images comprise frontal and side views of the user.

26. The display device of claim 25, wherein physical attributes, clothing and accessories of the user are obtained from the images.

27. The display device of any of claims 23 to 26, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, and a hybrid real-and-virtual environment.

28. The display device of any of claims 23 to 27, wherein interaction enhances the user’s perception of immersion in the background.

29. A central server for generating a 3D avatar, the central server including one or more data processors configured to: receive, from a device, data of images of a user and a surrounding environment of the user; process the data; and transmit, to the device, processed data to enable display of the generated 3D avatar to be overlaid on a background, wherein the device is selected from either a user device or a display device.

30. The central server of claim 29, wherein the images comprise frontal and side views of the user.

31. The central server of claim 30, wherein physical attributes, clothing and accessories of the user are obtained from the images.


32. The central server of claim 31, wherein processing of the data enables generation of the 3D avatar of the user using the physical attributes, clothing and accessories of the user.

33. The central server of claim 32, wherein the processing of the data includes use of machine learning.

34. The central server of any of claims 29 to 33, wherein the background is selected from a group consisting of: an actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, and a simulated world.

35. The central server of any of claims 29 to 34, the central server including one or more data processors further configured to: receive, from the device, user credentials to access a third party portal; and transmit, to the device, at least one selection resulting from a purchase history of the user, the purchase history being at the third party portal.


Description:
A SYSTEM AND METHOD FOR GENERATING A 3D AVATAR

Field of the Invention

The present invention relates to a system and method for generating a 3D avatar.

Background

The increasing gamification of user interfaces across digital platforms has led to a prevalence of avatar-based interactions for users on digital platforms, and correspondingly, a widespread acceptance of avatars amongst the users. Currently, many platforms have relied on avatars substantially based on pre-defined template forms, such as Facebook Avatar, Bitmoji, Apple Memoji, and so forth.

It should be appreciated that currently, the avatars generated are only in 2D, and are not used in an environment which enables the avatars to be used in an augmented reality manner. This is due to data processing constraints which prevent the generation of enhanced avatars. In addition, as mentioned previously, the use of pre-defined template forms limits the extent to which the avatars can be customised, and the extent of interaction between avatars and real-life aspects/features is also limited.

There is also an increasing emphasis on digital environments, for example, a metaverse, a game universe, a simulated world, and the like, whereby avatars are typically used when navigating the digital environments.

Moreover, the increasing acceptance of non-fungible tokens (NFTs) is leading to a practice of valuing avatars using NFTs. This is prompting substantial creative effort to be expended on creating avatars with appeal to third parties, and consequently providing a way to derive financial gain from the creation of avatars, akin to the creation of an avatar creative industry.

Summary

In a first aspect, there is provided a system for generating a 3D avatar, the system including one or more data processors configured to: capture, at a device, images of a user and a surrounding environment of the user; transmit, from the device, data of the images; receive, at a central server, the data; process, at the central server, the data; initiate, at the device, a background on which the 3D avatar is overlaid; display, at the device, the 3D avatar and the background; and control, at the device, the 3D avatar to enable interaction with the background.

It is preferable that the device is selected from either a user device or a display device.

In a second aspect, there is provided a data processor implemented method for generating a 3D avatar, the method comprising: capturing, at a device, images of a user and a surrounding environment of the user; transmitting, from the device, data of the images; receiving, at a central server, the data; processing, at the central server, the data; initiating, at the device, a background on which the 3D avatar is overlaid; displaying, at the device, the 3D avatar and the background; and controlling, at the device, the 3D avatar to enable interaction with the background.

It is preferable that the device is selected from either a user device or a display device.

In a third aspect, there is provided a user device configured for generating a 3D avatar, the user device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.

There is also provided a display device configured for generating a 3D avatar, the display device including one or more data processors configured to: capture images of a user and a surrounding environment of the user; transmit data of the images; initiate a background on which the 3D avatar is overlaid; display the 3D avatar and the background; and control the 3D avatar to enable interaction with the background.

Finally, there is provided a central server for generating a 3D avatar, the central server including one or more data processors configured to: receive, from a device, data of images of a user and a surrounding environment of the user; process the data; and transmit, to the device, processed data to enable display of the generated 3D avatar to be overlaid on a background.

It is preferable that the device is selected from either a user device or a display device.

It will be appreciated that the broad forms of the invention and their respective features can be used in conjunction, interchangeably and/or independently, and reference to separate broad forms is not intended to be limiting.

Brief Description of the Drawings

A non-limiting example of the present invention will now be described with reference to the accompanying drawings, in which:

FIG 1 is a flow chart of an example of a method for generating a 3D avatar;

FIG 2 is a schematic diagram of an example of a system for generating a 3D avatar;

FIG 3 is a schematic diagram showing components of an example user device of the system shown in FIG 2;

FIG 4 is a schematic diagram showing components of an example mass display device of the system shown in FIG 2;

FIG 5 is a schematic diagram showing components of an example central server shown in FIG 2;

FIGs 6A and 6B are an example of a 3D avatar generated using the method of FIG 1;

FIG 7 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a first example background;

FIG 8 shows an example of a 3D avatar generated using the method of FIG 1 when placed in a second example background; and

FIG 9 shows a flow chart of an example of tasks carried out by a user device/display device during the method of FIG 1.

Detailed Description

The present invention provides a system and method for generating a 3D avatar, substantially in real-time. The system and method can be used for a variety of applications, for example, engagement sessions at pre-defined venues, virtual apparel/wearable device fittings, and the like. It should be noted that the pre-defined venues can be imaginary environments, digitally rendered real environments or actual environments. The 3D avatars are modelled substantially on physical attributes and wearables of users, such as, for example, facial features, physique, clothing, accessories and so forth. In some aspects, the 3D avatars are able to provide a representation of users in a particular environment. For the purpose of illustration, it is assumed that the method can be performed at least in part amongst one or more data processing devices such as, for example, a mobile phone, a display device, a central server, or the like. Typically, the central server will be configured to carry out a majority of the processing tasks, with the mobile phone and the display device being configured to display outputs from the central server.

An example of a broad overview of a method 100 for generating a 3D avatar will now be described with reference to FIG 1 .

At step 105, at least one image of a user and a surrounding environment of the user is captured. The more images that are captured, the more physical attributes of the user can be determined for use when generating a 3D avatar for the user. It is desirable for images containing frontal and side views of the user to be captured to aid in improving the likeness of the 3D avatar to the user. For example, the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth. In addition, other than physical attributes of the user, the clothing and/or accessories being worn by the user can also be determined from the captured images so that the generated 3D avatar appears outfitted with similar clothing and/or accessories as the user. Typically, the at least one image is captured with a user device, such as a camera on a mobile phone, or a camera coupled to a display device.
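By way of illustration only, the capture step can be sketched in Python; the class and field names below are hypothetical, since the method does not prescribe any particular data format:

```python
from dataclasses import dataclass, field

@dataclass
class CapturedImage:
    view: str      # e.g. "frontal" or "side"
    pixels: bytes  # raw image payload from the camera

@dataclass
class UserCapture:
    """Accumulates images of the user and the surrounding environment."""
    images: list = field(default_factory=list)

    def add(self, view: str, pixels: bytes) -> None:
        self.images.append(CapturedImage(view, pixels))

    def has_required_views(self) -> bool:
        # frontal and side views aid the likeness of the 3D avatar
        return {"frontal", "side"} <= {img.view for img in self.images}
```

Additional views simply add further entries, mirroring the point that more captured images yield more determinable physical attributes.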

At step 110, data of the at least one image of the user and surrounding environment is transmitted to a central server. In some embodiments, user credentials to access a third party portal are also provided to the central server from the user device. It should be appreciated that the user credentials are typically usable in the aforementioned manner with the consent of the user. It should also be appreciated that the central server can comprise more than one data processing device. An example embodiment of the central server will be provided in a subsequent paragraph.

At step 115, the data of the at least one image of the user and surrounding environment is processed at the central server to generate a 3D avatar. The physical attributes, clothing and/or accessories of the user that are obtained from the data are used to generate the 3D avatar. It should be noted that the 3D avatar is typically a representation of the user which provides amusement and/or entertainment and/or virtual sampling of goods. Substantial processing is carried out at the central server, broadly comprising determining the physical attributes, clothing and/or accessories of the user from the at least one image, and using that information to generate the 3D avatar with at least some likeness to the user while wearing similar clothing and/or accessories. It should be appreciated that this substantial processing relies on both hardware and software of the central server to ensure that the 3D avatar is generated within a short period of time, typically less than five seconds. Most of the data processing to generate the 3D avatar is carried out at the central server, and not at devices configured for showing the 3D avatar.
For example, the substantial processing can include machine learning over all the images processed at the central server, such that the 3D avatar can be generated in a predictive manner based on past images that have been processed at the central server for the user, for example, whenever there are insufficient images of the user in particular clothing. The machine learning can also enable an enhanced likeness of the user to be generated in avatar form. Furthermore, the machine learning is also able to aid in shortening the time taken to generate the 3D avatar.
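A minimal sketch of this server-side fallback logic follows; the `generate_avatar` and `extract_attributes` helpers are hypothetical, as the description names no specific model or API:

```python
def extract_attributes(images):
    # placeholder: a real system would run face/pose models here
    return {"views": len(images)}

def generate_avatar(images, past_images, min_images=2):
    """Return a hypothetical avatar descriptor.

    Falls back to predicting from previously processed images when the
    current capture is insufficient, mirroring the described use of
    machine learning at the central server."""
    predictive = len(images) < min_images
    source = past_images if predictive else images
    return {"attributes": extract_attributes(source), "predicted": predictive}
```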

In some embodiments, the central server is able to use the user credentials of the user to obtain a purchase history from third party platforms (e.g., e-commerce platforms) at step 117, whereby the purchase history can be from a pre-defined category of goods/services such as clothing and/or accessories. The purchase history can be desirable as it can be employed in a product selection to enhance, for example, purchase intent, sales, user engagement, and so forth. This will be evident in a later portion of the description.

At step 120, a background on which the 3D avatar is overlaid is selected. For example, the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth.
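Step 117 might be sketched as follows; `FakePortal`, its `authenticate` method and the category filter are hypothetical stand-ins for a real third party e-commerce API:

```python
class FakePortal:
    """Stand-in for a third party portal holding a purchase history."""
    def __init__(self, purchases):
        self.purchases = purchases

    def authenticate(self, credentials):
        # a real portal would validate an OAuth token or similar
        return credentials == "valid-token"

def fetch_purchase_history(portal, credentials,
                           categories=("clothing", "accessories")):
    """Fetch purchases restricted to a pre-defined category of goods."""
    if not portal.authenticate(credentials):
        raise PermissionError("user consent and valid credentials required")
    return [p for p in portal.purchases if p["category"] in categories]
```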

At step 125, the 3D avatar is able to interact with the selected background on the device configured to show the 3D avatar. It should be appreciated that the 3D avatar interacts with the selected background in accordance with actions/gestures carried out by the user. This enhances the user’s perception of immersion in the selected background. For example, the user is able to be clothed/accessorised virtually via the user’s 3D avatar, and the user may correspondingly make purchase decisions based on this virtual trying-on of clothes/accessories. In addition, the purchase history of the user may be deployed in a product selection, for example, to display related past purchases or designs/prints similar to their 3D avatar’s appearance, so as to enhance, for example, purchase intent, sales, user engagement, and so forth. Therefore, behavioural data of the user can also be shown.
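The overlay-and-interact flow of steps 120 and 125 can be sketched as a pure function over display frames; the background identifiers and frame dictionary below are illustrative only:

```python
# illustrative background identifiers drawn from the description
BACKGROUNDS = {"actual", "virtual", "hybrid", "metaverse",
               "game universe", "simulated world"}

def compose_frame(avatar, background, gesture=None):
    """Overlay the avatar on a selected background, mirroring the
    user's latest action/gesture in the avatar's pose."""
    if background not in BACKGROUNDS:
        raise ValueError(f"unsupported background: {background}")
    frame = {"background": background, "avatar": dict(avatar)}
    if gesture is not None:
        frame["avatar"]["pose"] = gesture
    return frame
```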

Finally, at step 130, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback. The recording can be stored at the user device or at the central server.

It should be appreciated that the method 100 enables benefits for both users and providers of the method 100. In some embodiments, the providers can be entities that provide a good and/or service to the users.

In relation to the user, the method 100 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories. The level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds. In addition, the users can also choose to monetize the 3D avatars that are generated, for example, as a digital asset with ownership rights being transferrable via NFT/cryptocurrency transactions.

In relation to the provider, the method 100 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that only a minimal on-site investment in hardware is needed for the method 100 to be carried out at any location with connectivity to a data network, the provider does not need to make a large financial investment to enable the carrying out of the method 100.

An example of a system 200 for generating a 3D avatar will now be described with reference to FIG 2.

In this example, the system 200 includes one or more user devices 220, one or more display devices 230, a communications network 250, a third party platform 280 (e.g., an e-commerce platform), and a central server 260. The one or more user devices 220 and the one or more display devices 230 communicate with the central server 260 via the communications network 250. The communications network 250 can be of any appropriate form, such as the Internet and/or a number of local area networks (LANs). Further details of the respective components of the system 200 will be provided in a following portion of the description. It will be appreciated that the configuration shown in FIG 2 is not limiting and for the purpose of illustration only.
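The division of labour between the devices and the central server can be sketched as below; the class names and avatar descriptor format are hypothetical, and the in-process call stands in for a round trip over the communications network 250:

```python
class CentralServer:
    """Carries out the bulk of the processing, per the described design."""
    def process(self, image_data: bytes) -> dict:
        # heavy avatar generation would happen here, server-side
        return {"avatar": f"avatar-from-{len(image_data)}-bytes"}

class Device:
    """A user device 220 or display device 230: captures, then displays."""
    def __init__(self, server: CentralServer):
        self.server = server

    def run(self, image_data: bytes, background: str) -> dict:
        processed = self.server.process(image_data)  # network hop in practice
        return {"display": (processed["avatar"], background)}
```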

User Device 220

The user device 220 of any of the examples herein may be a handheld computer device such as a smart phone with a capability to download and operate mobile applications, and be connectable to the communications network 250. The user device 220 can also be a VR headset. An exemplary embodiment of the user device 220 is shown in FIG 3. As shown, the user device 220 includes the following components in electronic communication via a bus 311:

1. a display 302;

2. non-volatile memory 303;

3. random access memory ("RAM") 304;

4. data processor(s) 301;

5. a transceiver component 305 that includes a transceiver(s);

6. an image capture module 310; and

7. input controls 307.

In some embodiments, an app 309 stored in the non-volatile memory 303, is required to enable the user device 220 to operate in a desired manner. For example, the app 309 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar. In some instances, the app 309 can be a web browser.

Although the components depicted in FIG 3 represent physical components, FIG 3 is not intended to be a hardware diagram; thus many of the components depicted in FIG 3 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 3.

Display Device 230

The display device 230 of any of the examples herein may be a television with a capability to download and operate mobile applications, and be connectable to the communications network 250. An exemplary embodiment of the display device 230 is shown in FIG 4. As shown, the display device 230 includes the following components in electronic communication via a bus 411:

1. a display 402;

2. non-volatile memory 403;

3. random access memory ("RAM") 404;

4. data processor(s) 401;

5. a transceiver component 405 that includes a transceiver(s);

6. an image capture module 410; and

7. input controls 407.

In some embodiments, an app 409 stored in the non-volatile memory 403, is required to enable the display device 230 to operate in a desired manner. For example, the app 409 can provide a user interface for generating a 3D avatar, and subsequently enabling user interaction with the generated 3D avatar. In some instances, the app 409 can be a web browser. In some instances, the user is able to control the display device 230 using another device wirelessly communicating with the display device 230, for example, the user’s mobile phone. The user’s mobile phone may be running the app 409 to provide access to an interface with the display device 230, or a web browser on the mobile phone may provide access to an interface with the display device 230.

Although the components depicted in FIG 4 represent physical components, FIG 4 is not intended to be a hardware diagram; thus many of the components depicted in FIG 4 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 4.

Central Server 260

The central server 260 is a hardware and software suite comprising pre-programmed logic, algorithms and other means of processing incoming information, in order to send out information which serves the objective of the system 200 in which the central server 260 resides. For the sake of illustration, hardware which can be used by the central server 260 will be described briefly herein.

The central server 260 can broadly comprise a database which stores pertinent information, and processes information packets from the user devices 220 and the display devices 230. In some embodiments, the central server 260 can be operated from a commercial hosted service such as Amazon Web Services (TM).

In one possible embodiment, the central server 260 can be represented in a form as shown in FIG 5.

The central server 260 is in communication with a communications network 250, as shown in FIG 2. The central server 260 is able to communicate with the user devices 220, the display devices 230, and/or other processing devices, as required, over the communications network 250. In some instances, the user devices 220 and the display devices 230 communicate via a direct communication channel (LAN or WiFi) with the central server 260.

The components of the central server 260 can be configured in a variety of ways. The components can be implemented entirely by software to be executed on standard computer server hardware, which may comprise one hardware unit or different computer hardware units distributed over various locations, some of which may require the communications network 250 for communication.

In the example shown in FIG 5, the central server 260 is a commercially available computer system based on a 32-bit or 64-bit Intel architecture, and the processes and/or methods executed or performed by the central server 260 are implemented in the form of programming instructions of one or more software components or modules 502 stored on non-volatile computer-readable storage 503 associated with the central server 260. The central server 260 includes at least one or more of the following standard, commercially available, computer components, all interconnected by a bus 505:

1. random access memory (RAM) 506; and

2. at least one central processing unit (CPU) 507.

Although the components depicted in FIG 5 represent physical components, FIG 5 is not intended to be a hardware diagram; thus many of the components depicted in FIG 5 may be realized by common constructs or distributed among additional physical components. Moreover, it is contemplated that other existing and yet-to-be developed physical components and architectures may be utilized to implement the functional components described with reference to FIG 5.

It should be appreciated that the system 200 enables benefits for both users and providers of the system 200, when the system 200 is used to carry out the method 100. In some embodiments, the providers can be entities that provide a good and/or service to the users.

In relation to the user, the system 200 provides a level of engagement/fun which maintains their attention level, and can provide virtual visualisation of clothes/accessories. The level of engagement/fun is enhanced as the 3D avatar is generated with minimal time lag, typically less than five seconds.

In relation to the provider, the system 200 provides a channel to maintain engagement with users, and provides a virtual storefront for the goods and/or services being offered to the users. Furthermore, given that any on-site investment in hardware is minimal for the system 200, the provider also does not need to make a large financial investment to enable the carrying out of the method 100.

Referring to FIGs 6A and 6B, there are shown examples of what a user sees on the user device 220 or the display device 230. A main portion 600 shows the 3D avatar, generated by the method 100 and/or the system 200, dressed in similar clothing as the user, while a sub-portion 610 shows the user with a short time lag so that the user’s action can be matched with the 3D avatar shown in the main portion 600. It should be noted that FIGs 6A and 6B show a “no-background” situation.

Referring to FIG 7, there is shown another example of what a user sees on the user device 220 or the display device 230. A main portion 700 shows the 3D avatar, generated by the method 100 and/or the system 200, dressed in similar clothing as the user, while a sub-portion 710 shows the user interface for interacting with the 3D avatar. In this example, the sub-portion 710 shows a user interface for a user to change the attire of the 3D avatar. A main menu 715 shows various types of clothing/accessories that can be changed on the 3D avatar, while a sub-menu 720 shows various options available when an item from the main menu 715 is selected. It should be noted that the 3D avatar in the main portion 700 moves around in sync with movements of the user while the user is using the main menu 715 and the sub-menu 720. FIG 7 shows a virtual background.

Referring to FIG 8, there is shown another example of what a user sees on the display device 230. A main portion 800 shows the 3D avatar generated by the method 100 and/or the system 200. It should be noted that FIG 8 shows an actual background that can be the location where the user is located. In addition, FIG 8 shows an instance when the user uses a mobile phone 810 to access an interface to control the display device 230.

Further details will now be provided for various aspects of the method 100 and the system 200.

Referring to FIG 9, there is shown an example of a method 900 for generating a 3D avatar, particularly in relation to a process at the user device 220 or display device 230. At step 905, at least one image of a user and a surrounding environment of the user is captured by the user device 220 or display device 230. The more images that are captured, the more physical attributes of the user can be determined for use when generating the 3D avatar. It is desirable for images containing frontal and side views of the user to be captured, to aid in improving the likeness of the 3D avatar to the user. For example, the physical attributes include facial muscles, facial points, eyes, nose, mouth, eyebrows, facial jawline, body frame and so forth. In addition to the physical attributes of the user, the clothing and/or accessories being worn by the user can also be determined from the captured images, so that the 3D avatar being generated appears outfitted with similar clothing and/or accessories. Typically, the at least one image is captured with a user device 220 such as a camera on a mobile phone, or a camera coupled to a display device 230.
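By way of illustration only, the capture logic of step 905 may be sketched as follows in Python. The class and method names (CapturedImage, CaptureSession, has_required_views) are illustrative assumptions for this sketch and do not form part of the claimed system:

```python
from dataclasses import dataclass, field


@dataclass
class CapturedImage:
    view: str       # illustrative tag, e.g. "frontal" or "side"
    pixels: bytes   # raw image data from the device camera


@dataclass
class CaptureSession:
    images: list = field(default_factory=list)

    def add(self, view, pixels):
        # Each captured frame is tagged with its view so that the
        # server can later determine physical attributes from it
        self.images.append(CapturedImage(view, pixels))

    def has_required_views(self):
        # Frontal and side views are desirable because they aid in
        # improving the likeness of the 3D avatar to the user
        views = {img.view for img in self.images}
        return {"frontal", "side"} <= views
```

In such a sketch, the device would continue capturing frames until both frontal and side views are present, since more images allow more physical attributes (and worn clothing/accessories) to be determined.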

At step 910, data of the at least one image of the user and the surrounding environment is transmitted to the central server 260. For example, the central server 260 can carry out substantial processing of the at least one image using machine learning, such that the 3D avatar can be generated in a predictive manner based on past images processed at the central server 260 for the user, for example, whenever there are insufficient images of the user in a particular item of clothing. The machine learning can also enable an enhanced likeness of the user to be generated in avatar form, and can aid in shortening the time taken to generate the 3D avatar. In some embodiments, user credentials for a third party portal are also provided to the central server 260 from the user device 220. It should be appreciated that the user credentials are typically used in the aforementioned manner only with the consent of the user.
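By way of illustration only, the transmission of step 910 may be sketched as the construction of a serialised payload. The payload fields shown here (user_id, images, third_party_credentials) are illustrative assumptions, not a defined protocol of the system 200:

```python
import base64
import json


def build_upload_payload(images, user_id, credentials=None):
    """Package captured images (and, with the user's consent,
    third-party portal credentials) for transmission to the
    central server. Field names are illustrative only."""
    payload = {
        "user_id": user_id,
        # Binary image data is base64-encoded for JSON transport
        "images": [base64.b64encode(img).decode("ascii") for img in images],
    }
    if credentials is not None:
        # Credentials are included only when the user consents
        payload["third_party_credentials"] = credentials
    return json.dumps(payload)
```

A device-side client could post such a payload to the central server 260, which would then perform the machine-learning processing described above.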

At step 915, a background on which the 3D avatar is overlaid is selected at the user device 220 or display device 230. For example, the background can be the actual environment the user is in, any virtual environment, a hybrid real-and-virtual environment, a metaverse, a game universe, a simulated world and so forth. At step 920, the generated 3D avatar is received from the central server 260 at the user device 220 or display device 230. It should be noted that the 3D avatar is typically a representation of the user which provides amusement and/or entertainment and/or virtual sampling of goods. Most of the data processing to generate the 3D avatar is carried out at the central server 260, and not at the devices configured to show the 3D avatar.
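By way of illustration only, steps 915 and 920 may be sketched as a background selection followed by a device-side compositing step. The background type names and the layer ordering below are illustrative assumptions:

```python
# Illustrative background categories drawn from the examples above
BACKGROUND_TYPES = {"actual", "virtual", "hybrid", "metaverse", "game", "simulated"}


def select_background(kind, asset_id=None):
    """Validate and describe the background chosen at the device."""
    if kind not in BACKGROUND_TYPES:
        raise ValueError(f"unsupported background type: {kind}")
    return {"kind": kind, "asset_id": asset_id}


def overlay(avatar, background):
    # The heavy processing to generate the avatar happens at the
    # central server; the device merely composites the received
    # avatar on top of the selected background
    return {
        "avatar": avatar,
        "background": background,
        "layers": ["background", "avatar"],
    }
```

This reflects the division of labour described above: generation at the central server 260, lightweight selection and overlay at the user device 220 or display device 230.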

At step 925, the 3D avatar is able to interact with the selected background on the user device 220 or display device 230. It should be appreciated that the 3D avatar interacts with the selected background in accordance with actions carried out by the user, which enhances the user's perception of immersion in the selected background. For example, the user is able to be clothed/accessorised virtually via the user's 3D avatar, and the user may correspondingly make purchase decisions based on the virtual trying-on of clothes/accessories. In addition, the purchase history of the user may be used to drive product selection, for example, to display related past purchases, or designs/prints similar to the current appearance of the 3D avatar, so as to enhance purchase intent, sales, user engagement, and so forth. In this manner, behavioural data of the user can also be utilised.
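By way of illustration only, the purchase-history-driven product selection of step 925 may be sketched as a simple ranking: items in categories the user has bought from before are surfaced first. This scoring rule is an illustrative stand-in, not the actual selection logic of the system 200:

```python
def recommend_items(catalog, purchase_history):
    """Rank catalogue items, favouring categories the user has
    purchased from before (an illustrative stand-in for the
    product-selection logic driven by purchase history)."""
    bought_categories = {item["category"] for item in purchase_history}

    def score(item):
        # Prefer previously-bought categories, then popularity
        return (item["category"] in bought_categories,
                item.get("popularity", 0))

    return sorted(catalog, key=score, reverse=True)
```

The ranked list could then populate a menu such as the main menu 715 and sub-menu 720 of FIG 7, so that items related to past purchases appear first.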

Finally, at step 930, the interaction of the 3D avatar in the selected background is recorded for storage and/or future playback.
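By way of illustration only, the recording of step 930 may be sketched as an append-only log of timestamped interaction events, serialised for storage and/or future playback. The event schema below is an illustrative assumption:

```python
import json
import time


class InteractionRecorder:
    """Accumulates the 3D avatar's interactions with the selected
    background as timestamped events (illustrative sketch only)."""

    def __init__(self):
        self.events = []

    def record(self, action, t=None):
        # Each event carries a timestamp so playback can reproduce
        # the original timing of the interaction
        self.events.append({"t": t if t is not None else time.time(),
                            "action": action})

    def export(self):
        # Serialised form suitable for storage and future playback
        return json.dumps(self.events)
```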

Throughout this specification and claims which follow, unless the context requires otherwise, the word “comprise”, and variations such as “comprises” or “comprising”, will be understood to imply the inclusion of a stated integer or group of integers or steps but not the exclusion of any other integer or group of integers.

Persons skilled in the art will appreciate that numerous variations and modifications will become apparent. All such variations and modifications which become apparent to persons skilled in the art should be considered to fall within the spirit and scope of the invention as broadly described above.