

Title:
METHOD AND SYSTEM FOR DISPLAYING THREE-DIMENSIONAL VIRTUAL APPAREL ON THREE-DIMENSIONAL AVATAR FOR REAL-TIME FITTING
Document Type and Number:
WIPO Patent Application WO/2024/033943
Kind Code:
A1
Abstract:
The present disclosure generally relates to a method and a system for displaying a three-dimensional apparel on a three-dimensional avatar of a user. The method includes receiving a set of data associated with the user and analyzing, using a trained model, the set of data associated with the user. Next, the method includes generating the three-dimensional avatar of the user based on the analysis of the set of data. Next, the method includes receiving at least one input from the user for selection of at least one apparel from a plurality of apparels. Next, the method includes virtually augmenting the selected at least one apparel on the generated three-dimensional avatar of the user. Thereafter, the method includes displaying the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters.

Inventors:
SINHA TARUNNA K (IN)
AGRAWAL SHRUTI (GB)
SINGH SHRUTI (IN)
GARG PARVESH (IN)
GOEL JATIN (IN)
YADAV SANJEET (IN)
Application Number:
PCT/IN2023/050765
Publication Date:
February 15, 2024
Filing Date:
August 10, 2023
Assignee:
VIVIROOMS ECOMM PRIVATE LTD (IN)
BIOCUBE TECH INC (US)
International Classes:
G06Q30/0601; G06T15/00; G06T19/00
Foreign References:
US20200364533A12020-11-19
US20090144173A12009-06-04
US20110298897A12011-12-08
KR20090004326A2009-01-12
Attorney, Agent or Firm:
SAHNEY, Garima (IN)
Claims:
We Claim

1. A method [200] for displaying a three-dimensional apparel on a three-dimensional avatar of a user, the method comprising:
receiving, at a Processing Unit [104], a set of data associated with the user;
analyzing, by the Processing Unit [104] using a trained model [104a], the set of data associated with the user;
generating, by the Processing Unit [104], the three-dimensional avatar of the user based on the analysis of the set of data;
receiving, by the Processing Unit [104], at least one input from the user for selection of at least one apparel from a plurality of apparels;
virtually augmenting, by the Processing Unit [104], the selected at least one apparel on the generated three-dimensional avatar of the user; and
displaying, by the Processing Unit [104] via a Display Unit [106], the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters.

2. The method as claimed in claim 1, wherein the set of data associated with the user comprises image data of the user, body dimensions and body parameters of the user, and apparel preferences of the user.

3. The method as claimed in claim 1, wherein the plurality of apparel fitting parameters comprises a degree of looseness and tightness of the apparel at one or more body points of the three-dimensional avatar of the user.

4. The method as claimed in claim 1, wherein the three-dimensional avatar of the user is generated as a virtual twin of the user with real-alike look, body shape and structure of the user.

5. The method as claimed in claim 1, wherein the plurality of apparels is available on an apparel platform or a virtual wardrobe of the user.

6. The method as claimed in claim 1, wherein the set of data associated with the user is analyzed using at least one from among machine learning-based techniques, augmented reality-based techniques, image processing techniques, and object detection and recognition techniques.

7. The method as claimed in claim 1, wherein augmenting the selected at least one apparel on the generated three-dimensional avatar of the user comprises:
analysing, by the Processing Unit [104] using the trained model [104a], one or more parameters associated with the selected at least one apparel;
creating, by the Processing Unit [104], the at least one three-dimensional apparel for the three-dimensional avatar of the user based on the analysis of the set of data and the one or more parameters, wherein the at least one three-dimensional apparel is created as a real-alike three-dimensional replica of the apparel; and
augmenting, by the Processing Unit [104], the created at least one three-dimensional apparel on the generated three-dimensional avatar of the user.

8. The method as claimed in claim 7, wherein the one or more parameters associated with the selected at least one apparel comprise size, fabric, pattern, type, color, and measurement of the apparel.

9. The method as claimed in claim 1, wherein the three-dimensional avatar of the user with the selected at least one apparel is displayed to the user in a plurality of orientations based on user input gestures.
10. A system for displaying a three-dimensional apparel on a three-dimensional avatar of a user, the system comprising a Processing Unit [104], wherein the Processing Unit [104] is configured to:
receive a set of data associated with the user;
analyse, using the trained model [104a], the set of data associated with the user;
generate the three-dimensional avatar of the user based on the analysis of the set of data;
receive at least one input from the user for selection of at least one apparel from a plurality of apparels;
virtually augment the selected at least one apparel on the generated three-dimensional avatar of the user; and
display, via a Display Unit [106], the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters.

11. The system as claimed in claim 10, wherein the set of data associated with the user comprises image data of the user, body dimensions and body parameters of the user, and apparel preferences of the user.

12. The system as claimed in claim 10, wherein the plurality of apparel fitting parameters comprises a degree of looseness and tightness of the apparel at one or more body points of the three-dimensional avatar of the user.

13. The system as claimed in claim 10, wherein the three-dimensional avatar of the user is generated as a virtual twin of the user with real-alike look, body shape and structure of the user.

14. The system as claimed in claim 10, wherein the plurality of apparels is available on an apparel platform or a virtual wardrobe of the user.

15. The system as claimed in claim 10, wherein the Processing Unit [104] analyzes the set of data associated with the user using at least one from among machine learning-based techniques, augmented reality-based techniques, image processing techniques, and object detection and recognition techniques.
16. The system as claimed in claim 10, wherein to augment the selected at least one apparel on the generated three-dimensional avatar of the user, the Processing Unit [104] is configured to:
analyse, using the trained model [104a], one or more parameters associated with the selected at least one apparel;
create at least one three-dimensional apparel for the three-dimensional avatar of the user based on the analysis of the set of data and the one or more parameters, wherein the at least one three-dimensional apparel is created as a real-alike three-dimensional replica of the apparel; and
augment the at least one three-dimensional apparel on the generated three-dimensional avatar of the user.

17. The system as claimed in claim 15, wherein the one or more parameters associated with the selected at least one apparel comprise size, fabric, pattern, type, color, and measurement of the apparel.

18. The system as claimed in claim 10, wherein the three-dimensional avatar of the user with the selected at least one apparel is displayed to the user in a plurality of orientations based on user input gestures.

Description:
METHOD AND SYSTEM FOR DISPLAYING THREE-DIMENSIONAL VIRTUAL APPAREL ON THREE-DIMENSIONAL AVATAR FOR REAL-TIME FITTING

TECHNICAL FIELD

The present disclosure generally relates to the field of apparel shopping, and more particularly to a method and a system for displaying three-dimensional apparel, with the look, feel, and behavior of real cloth, on a three-dimensional avatar of a user that has the real-alike look, body shape, and structure of the user, and for generating virtual fitting conditions of the apparel on the actual body shape of the user.

BACKGROUND

This section is intended to provide information relating to the technical field and thus, any approach or functionality described below should not be assumed to be qualified as prior art merely by its inclusion in this section.

A few years ago, the "Try before you buy" strategy was an efficient customer engagement method in outfit stores. Currently, when consumers engage in online shopping, there is no method available to check or visualize how a particular piece of clothing would look on them with their exact body measurements. Sizes of clothes are not standardized across brands, shops, or vendors, which adds even more uncertainty: a size Medium in Shop A will not always have the same fitting as a size Medium from Shop B. Hence, shopping using just an empirical size chart creates problems in terms of actual fitting, resulting in a bad user experience and increased returns.

Additionally, everyone has different body measurements. For example, if Person A and Person B both wear a dress size Medium, they may still have very different chest, waist, and hip proportions, and thus a different fit or look while wearing a particular dress. This is because each dress size has a range. For instance, the chest of a medium dress is made to fit anyone with a chest size between 28 and 30 inches. Therefore, if Person A has a chest size of 28 inches, the dress will fit them loosely; however, if Person B has a chest size of 30 inches, they still wear a Medium, but it will be a body-fit dress for them. Currently, this problem can only be solved by physical trials, i.e., by trying clothes on the body at shops or in trial rooms, which cannot be done efficiently, accurately, and in a fully interactive medium on a digital remote platform.
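The size-range arithmetic above can be illustrated with a small sketch. The Medium chest range of 28-30 inches comes from the example in the text; the one-third band thresholds used to grade fit within the range are an assumption for illustration only, not part of the disclosure.

```python
# Illustrative sketch of the size-range fit example. The Medium chest range
# (28-30 inches) is taken from the text; the band thresholds are assumptions.
MEDIUM_CHEST_RANGE = (28.0, 30.0)  # inches

def classify_fit(body_inches, size_range):
    """Label how a garment size range fits one body measurement."""
    low, high = size_range
    if body_inches < low:
        return "loose"        # body smaller than the whole range
    if body_inches > high:
        return "too small"    # body exceeds the range
    position = (body_inches - low) / (high - low)
    if position < 1 / 3:
        return "loose"        # near the bottom of the range, e.g. Person A at 28"
    if position > 2 / 3:
        return "body fit"     # near the top, e.g. Person B at 30"
    return "comfortable"

print(classify_fit(28.0, MEDIUM_CHEST_RANGE))  # loose
print(classify_fit(30.0, MEDIUM_CHEST_RANGE))  # body fit
```

The same garment size thus yields opposite fit labels for the two hypothetical wearers, which is the gap the disclosure's virtual fitting aims to surface before purchase.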

Real-time fitting using technology is available in various forms currently to improve the in-store shopping experience. Real-Time mirrors are installed in the fitting rooms of the store. Cameras embedded in the mirrors determine which product has been brought by the shopper into the fitting room. This data is combined with product information such as available sizes and colors to offer additional product recommendations. However, the problem that remains is that consumers must physically go to the store to try this. Effectively, this does not solve much of an additional purpose because when consumers go to physical stores, they may try the clothes directly instead of looking at the real time mirrors.

A few online e-commerce platforms now provide real-time look options, but these do not provide a satisfactory customer experience. They use only a very simplistic superimposition technique of a cloth photo on a 2-dimensional body image of the consumer. This does not provide a truly accurate sense of fitting. Also, the customer never gets to know how the cloth would look based on skin tone, height, body fat, and the like. The real body shape may vary considerably from the generic dummy shapes. The process includes choosing a pre-defined body shape similar to the actual one and choosing clothes from the catalogue. It superimposes the cloth picture on the body picture and creates neither an actual replica of the body profile nor an actual three-dimensional replica of the cloth. Superimposition of an image is just as good as photoshopping an image from the looks perspective, but it does not solve any problem with the fitment of the cloth or the behavior of the cloth on the human body. No indication of fit, nor any size recommendation, is provided. More precisely, the following points make the existing solutions cumbersome or irrelevant to the desired field. The technique used by some e-commerce platforms involves choosing a particular piece of furniture and taking a photo of the space, i.e., the use of open-source technology to create a simple 3-dimensional space from the given image with X, Y, and Z coordinates. The X, Y, and Z coordinates of the furniture are compared against the X, Y, and Z coordinates of the captured space, and feedback is given. No cutting-edge technology is involved in this case; only simple logic-based X, Y, Z dimension computation and comparison is involved. The process used by other e-commerce platforms includes taking the face image of the user and rotating the face image left and right.
It analyses the face and visualizes spectacle frames on it, but the end outputs are simple 2-dimensional face images with spectacle frames, with no opportunity to change the viewing angle, no 3-dimensional presentation, and no effective interaction. It does not provide an interactive 3-dimensional replica of the human face or body.

Hence, in view of these and other existing limitations, there arises an imperative need for an efficient solution that overcomes the above-mentioned limitations and provides a method and system to interact with clothes or apparel virtually on the body and check the fitting of clothes in real time, to have a near-real trial experience.

SUMMARY

This section is provided to introduce certain aspects of the present disclosure in a simplified form that are further described below in the detailed description. This summary is not intended to identify the key features or the scope of the claimed subject matter.

The present disclosure provides a method and a system for displaying a three-dimensional apparel on a three-dimensional avatar of a user. One aspect of the present disclosure relates to a method for displaying the three-dimensional apparel on the three-dimensional avatar of the user. The method includes receiving, at a Processing Unit, a set of data associated with the user. Next, the method includes analyzing, by the Processing Unit using a trained model, the set of data associated with the user. Next, the method includes generating, by the Processing Unit, the three-dimensional avatar of the user based on the analysis of the set of data. Next, the method includes receiving, by the Processing Unit, at least one input from the user for selection of at least one apparel from a plurality of apparels. Next, the method includes virtually augmenting, by the Processing Unit, the selected at least one apparel on the generated three-dimensional avatar of the user. Thereafter, the method includes displaying, by the Processing Unit via a display unit, the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters.

Another aspect of the present disclosure relates to a system for displaying a three-dimensional apparel on a three-dimensional avatar of a user. The system includes a Processing Unit, a Trained Model, and a Display Unit. The Processing Unit is configured to receive a set of data associated with the user. Next, the Processing Unit is configured to analyze, using a trained model, the received set of data associated with the user. Next, the Processing Unit is configured to generate the three-dimensional avatar of the user based on the analysis of the set of data. Next, the Processing Unit is configured to receive at least one input from the user for selection of at least one apparel from a plurality of apparels. Next, the Processing Unit is configured to virtually augment the selected at least one apparel on the generated three-dimensional avatar of the user. Thereafter, the Processing Unit is configured to display, via a Display Unit, the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated herein and constitute a part of this disclosure, illustrate exemplary embodiments of the disclosed method. Components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure.

FIG. 1 illustrates a block diagram of the system [100] for displaying apparel on a three-dimensional avatar of the user, in accordance with the exemplary embodiment of the present disclosure.

FIG. 2 illustrates a method flow diagram [200] for displaying apparel on a three-dimensional avatar of the user, in accordance with the exemplary embodiment of the present disclosure.

FIG. 3 illustrates a method flow diagram [300] for creating a three-dimensional apparel for a three-dimensional avatar of the user, in accordance with the exemplary embodiment of the present disclosure.

FIG. 4A is a diagram showing an example [400] of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, in accordance with the exemplary embodiment of the present disclosure.

FIG. 4B is a diagram showing another example [400] of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, in accordance with the exemplary embodiment of the present disclosure.

FIG. 4C is a diagram showing yet another example [400] of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, in accordance with the exemplary embodiment of the present disclosure.

The foregoing shall be more apparent from the following more detailed description of the disclosure.

DETAILED DESCRIPTION

In the following description, for the purposes of explanation, various specific details are set forth to provide a thorough understanding of embodiments of the present disclosure. It will be apparent, however, that embodiments of the present disclosure may be practiced without these specific details. Several features described hereafter can each be used independently of one another or with any combination of other features. An individual feature may not address any of the problems discussed above or might address only some of the problems discussed above. Some of the problems discussed above might not be fully addressed by any of the features described herein.

As discussed in the background section, consumers must physically go to the store to try available sizes and colours of clothes; this facility is not available in the case of remote shopping. A few online e-commerce platforms now provide real-time look options, but these do not provide a good customer experience. They use only a very simplistic superimposition technique of cloth photos on pre-defined 2-dimensional body dummies. This does not provide any sense of fitting. Also, customers never come to know how the cloth looks based on skin tone, height, body fat, and the like.

To overcome the problems of the related art, the present disclosure provides a solution by bringing the physical format of 'Try before buy', in the form of real-time virtual fitting rooms through a complete digital experience on the personal device of the consumer. The present disclosure provides a method and a system to implement real-time fitting, where users may try the clothes and check the fitting of clothes in real-time on a mobile or web application. One aspect of the present disclosure relates to a method for clothes or apparel fitting in augmented reality giving a sense of the real experience of correct fitting of apparel worn on the body and helping consumers to select clothes and choose the correct size more easily and conveniently.

The present disclosure provides an interactive 3-dimensional Virtual Avatar or Virtual Twin, i.e., a multi-polygon mesh-based model created from basic inputs like a face image and body dimensions using a 3-dimensional mesh; an exact lookalike of the consumer's full body is generated on the mobile/web application with the help of Artificial Intelligence based techniques. In another aspect of the present disclosure, on the personalized 3-dimensional Virtual Twin of the consumer, different types, combinations, and sizes of clothes may be tried very easily, and consumers may easily check and recheck the fitting of clothes instead of performing any physical trials, through an interactive heat-map based visualization.
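A rough sketch of this avatar-generation step is given below. All names, the dataclass, and the placeholder mesh-resolution figure are assumptions for illustration; the disclosure does not specify an API, and the real system would run a deep neural network where the stub below merely records its inputs.

```python
from dataclasses import dataclass

@dataclass
class BodyDimensions:
    """Hypothetical container for the basic body-dimension inputs."""
    height_cm: float
    chest_cm: float
    waist_cm: float
    hip_cm: float

def generate_avatar(face_image, dims):
    """Stand-in for the AI-based virtual-twin generation step.

    A production system would infer a multi-polygon body mesh from the face
    image and dimensions; this stub only bundles the inputs together with a
    placeholder mesh-resolution figure scaled by height.
    """
    return {
        "face_texture": face_image,
        "dimensions": dims,
        "mesh_vertices": int(dims.height_cm * 100),  # placeholder resolution
    }

avatar = generate_avatar(b"selfie-bytes", BodyDimensions(170.0, 96.0, 80.0, 102.0))
print(avatar["mesh_vertices"])  # 17000
```

The point of the sketch is the data flow: one face image plus one set of body dimensions in, one mesh-based avatar record out.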

Also, not only will the user have access to a virtual try room, available at any time at their fingertips, but they will also own all the virtual assets (the exact virtual replicas of the clothes they are purchasing), which create a virtual wardrobe. From the virtual wardrobe, the user may select any clothes, wear them virtually at any time, mix and match, add fascinating near-real backgrounds, perform interactive gestures, and share the looks within their connected groups.

The present disclosure first allows the user to take a face photo or a selfie on the mobile or web app, or to upload a picture from the gallery. The system thereafter creates a profile of the user. For instance, the user may upload any picture from the gallery. Next, the user may provide body measurements. The data includes the body dimensions of the user, such as the sizes of chest, waist, hip, height, etc. Next, the present system processes the input parameters associated with the selected photograph and the provided body dimensions to create a three-dimensional virtual twin, i.e., a multi-polygon mesh-based model of the user. The three-dimensional model is created using Augmented Reality and Artificial Intelligence (a deep neural network). A true lookalike, i.e., the 3-dimensional virtual twin of the user, is created so that the user may visualize themselves in the application with the same face and body shape or dimensions. Thereafter, the present disclosure displays the 3D clothes/assets available on the platform, which share the same fabric properties, measurements, and clothing physics as the real-world clothes they represent. Further, the customer browses clothes from the website or app product catalogue and selects the cloth and the size of the cloth. The customer may select and call any clothes or apparel into the "Fitting Room" (a virtual, 3-dimensional trial room environment) where the customer's virtual twin is available. In the "Fitting Room", the customer's 3-dimensional virtual twin "tries on" the chosen clothes. The clothes tried on are viewed in three dimensions, but said clothes are not put on the virtual twin like an image; rather, the clothes have physics and dynamics specified for them, and they are morphed onto the "3D virtual twin" body. This enables the clothes to show the correct fitting, i.e., stretching on the "3D virtual twin" body when the size of the clothes is small or the fitting of the clothes is tight.
Similarly, the 3-dimensional clothes look loose and dynamic when the cloth fits loosely on the "3D virtual twin" body. Additionally, the clothes appear loose or tight on different parts of the body, as per the customer's body measurement input. This means that the clothes may look tight-fit at the chest, comfortably fit at the waist, and loosely fit at the hips, all at the same time, depending on the body measurements and cloth size. An Artificial Intelligence (AI) and Augmented Reality (AR) based trained model checks this looseness or tightness at numerous body points through computational dynamics simulation. It further visualizes a relative differential of apparel fitting with an interactive and precise heatmap-based visualization, where the variable colour coding and intensity indicate the degree of looseness or tightness and the like. Here, the user can also mix and match different pieces of clothing to create a complete outfit. For example, if the user likes a white top, the user can try on the jeans, and if the user doesn't like the look of the complete outfit, the user can then try the black skirt and choose to purchase that instead.
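A minimal sketch of such a per-body-point heat map is shown below. The body points, centimetre units, and the 5 cm saturation constant are assumptions for illustration; the disclosure describes a computational dynamics simulation, for which this simple slack comparison is only a stand-in.

```python
def fit_heatmap(body_points, garment_points):
    """Map each body point to a (label, intensity) pair, intensity in [0, 1].

    Negative slack (garment circumference below the body's) reads as
    tightness; non-negative slack reads as looseness. Intensity saturates at
    5 cm of difference, mimicking the variable colour intensity described.
    """
    heatmap = {}
    for point, body_cm in body_points.items():
        slack = garment_points[point] - body_cm   # cm of ease at this point
        label = "tight" if slack < 0 else "loose"
        intensity = min(abs(slack) / 5.0, 1.0)
        heatmap[point] = (label, round(intensity, 2))
    return heatmap

body = {"chest": 96.0, "waist": 80.0, "hip": 102.0}
garment = {"chest": 94.0, "waist": 82.0, "hip": 102.0}
print(fit_heatmap(body, garment))
# tight at the chest, loose at the waist, zero intensity at the hip
```

A renderer would then colour each mesh region by its label and intensity, giving the tight-chest, comfortable-waist, loose-hip picture described above.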

No such features are available on other existing platforms. Once clothes are put on the 3-dimensional body, the model may be rotated and viewed from multiple angles or directions to check how the cloth looks on the personalized body of the user and how it fits. The present disclosure further includes an aspect whereby, based on the tightness or looseness and the overall visual look of the fitment of the clothes on the body, the user may try out different sizes and different combinations of clothes. Finally, after finding the perfect fit and a good match based on skin colour and body size, the user may buy the clothes.

As used herein, a "3D virtual twin" or "three-dimensional avatar" is a multi-polygon mesh-based model whose skin, eyes, and face have the actual user's looks, in addition to having the user's body measurements and body shape. The mobile or web app may include, but is not limited to, a mobile phone app, smartphone app, laptop app, general-purpose computer app, desktop app, personal digital assistant app, tablet computer app, wearable device app, or any other computing device app which may implement the features of the present disclosure and is obvious to a person skilled in the art.

As used herein, a 3-dimensional apparel is "worn" by the user's personalized 3-dimensional avatar. The 3-dimensional apparel is a realistic replica of the physical clothes being sold on various platforms/apps/websites. The 3-dimensional apparel is made using the same sizes, fabrics, patterns, and measurements as the physical clothes. These are further referred to as virtual assets.

As used herein, "User Equipment", "user device" and/or "communication device", may be any electrical, electronic, electromechanical and computing device or equipment, having one or more transceiver units installed on it. The communication device is a battery-powered device. The communication device may include but is not limited to, a mobile phone, smartphone, laptop, general-purpose computer, desktop, personal digital assistant, tablet computer, wearable device or any other computing device which is capable of implementing the features of the present disclosure and is obvious to a person skilled in the art.

As used herein, a "Transceiver Unit" may include but is not limited to a transmitter to transmit data to one or more destinations and a receiver to receive data from one or more sources. Further, the Transceiver Unit may include any other similar unit obvious to a person skilled in the art, to implement the features of the present disclosure.

As used herein, a "Processing Unit" or "processor" includes one or more processors, wherein processor refers to any logic circuitry for processing instructions. A processor may be a general-purpose processor, a special-purpose processor, a conventional processor, a digital signal processor, a plurality of microprocessors, one or more microprocessors in association with a DSP core, a controller, a microcontroller, Application Specific Integrated Circuits, Field Programmable Gate Array circuits, any other type of integrated circuits, etc. The processor may perform signal coding, data processing, input/output processing, and/or any other functionality that enables the working of the system according to the present disclosure. More specifically, the processor or processing unit is a hardware processor.

As used herein, "trained model" includes an Artificial Intelligence (AI) based model and an Augmented Reality based model. The trained model is subjected to learning using data received from various sources. In an embodiment of the present disclosure, the processing unit uses the trained model to implement the features of the present disclosure.

As used herein, a "Display unit" or "display" includes one or more computing devices for displaying applications to a user in accordance with the present disclosure. The display unit may be additional hardware coupled to the said electronic device or may be integrated within the electronic device. The display unit may further include but is not limited to CRT display, LED display, ELD display, PDP display, LCD display, OLED display and the like.

As used herein, "Storage Unit" refers to a machine or computer-readable medium including any mechanism for storing information in a form readable by a computer or similar machine. For example, a computer-readable medium includes read-only memory ("ROM"), random access memory ("RAM"), magnetic disk storage media, optical storage media, flash memory devices or other types of machine-accessible storage media.

The present disclosure is further explained in detail below with reference to the diagram.

FIG. 1 illustrates a block diagram of the system [100] for displaying a three-dimensional apparel on a three-dimensional avatar of the user, in accordance with the exemplary embodiment of the present disclosure. As shown in Fig. 1, the system [100] includes at least one Transceiver Unit [102], at least one Processing Unit [104], at least one Trained Model [104a], at least one Display Unit [106] and at least one Storage Unit [108], wherein all the components are assumed to be connected to each other unless otherwise indicated below. Also, in Fig. 1 only one Transceiver Unit [102], one Processing Unit [104], one Trained Model [104a], one Display Unit [106], and one Storage Unit [108] are shown, however, the system [100] may comprise multiple such units and modules or the system may comprise any such numbers of said units and modules, as may be required to implement the features of the present disclosure. Also, there may be one or more sub-units of said units and modules of the system [100] and the same is not shown in Fig. 1 for clarity.

The system [100] includes the Processing Unit [104]. The Processing Unit [104] is connected to the Transceiver Unit [102]. The Processing Unit [104] is configured to receive a set of data associated with a user via the Transceiver Unit [102]. In an embodiment of the present disclosure, the data associated with the user includes image data of the user, body dimensions and body parameters of the user, and apparel preferences of the user. In an example, the user uploads the image of the face of the user on the system after capturing it using a camera unit. In another example, the user uploads the image already stored on the device of the user. In an embodiment of the present disclosure, the processing unit is configured to create a profile of the user based on the set of data.

Next, the Processing Unit [104] is configured to analyse, using the Trained Model [104a], the set of data associated with the user. The set of data associated with the user is analysed using at least one from among machine learning-based techniques, augmented reality-based techniques, image processing techniques, and object detection and recognition techniques.

Next, the Processing Unit [104] is configured to generate the three-dimensional avatar of the user based on the analysis of the set of data. The three-dimensional avatar of the user is generated as a virtual twin of the user with real-alike look, body shape and structure of the user.

Next, the Processing Unit [104] is configured to receive at least one input from the user for selection of at least one apparel from a plurality of apparels. In an embodiment of the present disclosure, the plurality of apparels is available on an apparel platform or in a virtual wardrobe of the user. Next, the Processing Unit [104] is configured to virtually augment the selected at least one apparel on the generated three-dimensional avatar of the user. To augment the selected at least one apparel on the generated three-dimensional avatar of the user, the Processing Unit [104] is configured to first analyse, using the trained model, one or more parameters associated with the selected at least one apparel. In an embodiment, the one or more parameters associated with the selected at least one apparel include, but are not limited to, size, fabric, pattern, type, color, and measurement of the apparel. Next, the Processing Unit [104] creates at least one three-dimensional apparel for the three-dimensional avatar of the user based on the analysis of the set of data and the one or more parameters. The at least one three-dimensional apparel is created as a real-alike three-dimensional replica of the apparel. Thereafter, the Processing Unit [104] augments the at least one three-dimensional apparel on the generated three-dimensional avatar of the user.
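The augmentation sub-steps (analyse the apparel parameters, create the three-dimensional replica, attach it to the avatar) can be sketched as follows. Every name here is a hypothetical placeholder, since the disclosure describes the steps rather than an API, and the "mesh" string merely stands in for the real-alike three-dimensional replica the trained model would create.

```python
def augment_apparel(avatar, apparel_params):
    """Attach a 3-D garment record, built from apparel parameters, to an avatar."""
    garment = {
        "size": apparel_params["size"],
        "fabric": apparel_params["fabric"],
        "pattern": apparel_params.get("pattern", "plain"),
        # Stand-in for the real-alike three-dimensional replica of the cloth.
        "mesh": "replica-of-" + apparel_params["sku"],
    }
    # Return a new avatar record rather than mutating the caller's copy.
    return {**avatar, "worn": avatar.get("worn", []) + [garment]}

avatar = {"mesh_vertices": 17000}
dressed = augment_apparel(avatar, {"size": "M", "fabric": "cotton", "sku": "top-01"})
print(dressed["worn"][0]["mesh"])  # replica-of-top-01
```

Repeated calls accumulate garments in the "worn" list, which mirrors the mix-and-match behaviour described earlier.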

Next, the Processing Unit [104] is connected with the Display Unit [106] to display the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment along with a plurality of apparel fitting parameters. The plurality of apparel fitting parameters comprises the degree of looseness and tightness of the apparel at one or more body points of the three-dimensional avatar of the user. The three-dimensional avatar of the user with the selected at least one apparel is displayed to the user in a plurality of orientations based on the user input gestures.

The system [100] includes the Storage Unit [108] to store the set of data and the data required to further train the trained model. In a non-limiting embodiment, the Storage Unit [108] is configured to store the data required for implementing the features of the present invention.

FIG. 2 illustrates a method flow diagram [200] for displaying apparel on a three-dimensional avatar of the user, in accordance with an exemplary embodiment of the present disclosure. As shown in Fig. 2, the method begins at step [202]. At step [204], the method includes receiving, by a Processing Unit [104], a set of data associated with the user. The Processing Unit [104] may communicate with the Transceiver Unit [102] to receive the set of data. In a non-limiting embodiment, the method includes creating, by the Processing Unit [104], a profile of the user based on the received set of data. In a non-limiting embodiment, the set of data associated with the user includes image data of the user, body dimensions and body parameters of the user, and apparel preferences of the user.

In an exemplary embodiment, user A provides input through a smartphone, which includes body measurements and one or more photographs captured using the same smartphone. In such a case, a profile of user A named "profile_A" is created, which holds all the body measurements and the photographic images of the user.
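The "profile_A" example above might be represented, as a minimal sketch, by a plain data record. The class and field names below are hypothetical and not taken from the disclosure; only the naming convention (user A yields "profile_A") follows the text.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Profile created from the set of data received for a user."""
    profile_id: str
    body_measurements: dict                      # e.g. {"chest": 96.0} in cm
    photo_paths: list = field(default_factory=list)
    apparel_preferences: list = field(default_factory=list)

def create_profile(user_name: str, measurements: dict, photos: list) -> UserProfile:
    # Profile naming follows the example in the text: user A -> "profile_A".
    return UserProfile(profile_id=f"profile_{user_name}",
                       body_measurements=measurements,
                       photo_paths=photos)
```

A call such as `create_profile("A", {"chest": 96.0}, ["face.jpg"])` would produce the "profile_A" record holding the measurements and photograph references.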

At step [206], the method includes analyzing, by the Processing Unit [104] using the trained model [104a], the set of data associated with the user. The set of data associated with the user is analyzed using at least one of machine learning-based techniques, augmented reality-based techniques, image processing techniques, and object detection and recognition techniques.

At step [208], the method includes generating, by the Processing Unit [104], the three-dimensional avatar of the user based on the analysis of the user data. The three-dimensional avatar of the user is generated as a virtual twin of the user with a real-alike look and the body shape and structure of the user, based on the received set of data associated with the user. In an exemplary embodiment, a three-dimensional avatar is created based on the details provided by user A, which are stored in the profile of user A, i.e., profile_A.

At step [210], the method includes receiving, by the Processing Unit [104], at least one input from the user for the selection of at least one apparel from a plurality of apparels. The plurality of apparels may include the apparels being sold on various apparel platforms, apps, and websites, as well as the virtual wardrobe of the user. In an exemplary embodiment, a catalogue of apparels is shown to user A, and user A selects one or more apparels from the catalogue based on the preference of the user. In an exemplary embodiment, the system also recommends a plurality of apparels to the user based on the analysis of the set of data received from the user. In an example, the system learns from the user's apparel purchasing behavior and accordingly recommends a set of apparels based on the body dimension parameters of the user.
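The recommendation behaviour described above (learning from purchase behavior and recommending by body dimensions) could be approximated, as an illustrative sketch, by ranking catalogue items by measurement fit with a preference for previously purchased brands. The scoring scheme, field names, and tolerance value are assumptions, not details from the disclosure.

```python
def recommend_apparels(catalogue, user_chest_cm, past_brands, tolerance=4.0):
    """Rank catalogue items by fit to the user's chest measurement,
    preferring brands the user has bought before.

    Each catalogue item is a dict like {"name": ..., "brand": ..., "chest": ...}
    with measurements in cm. Items whose chest measurement differs from the
    user's by more than `tolerance` are excluded.
    """
    def score(item):
        fit_error = abs(item["chest"] - user_chest_cm)
        brand_bonus = -1.0 if item["brand"] in past_brands else 0.0
        return fit_error + brand_bonus  # lower score ranks earlier

    fitting = [i for i in catalogue if abs(i["chest"] - user_chest_cm) <= tolerance]
    return sorted(fitting, key=score)
```

A production system would combine many more signals (purchase history embeddings, style preferences, size charts per brand), but the filter-then-rank shape is the same.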

At step [212], the method includes augmenting, by the Processing Unit [104], the selected at least one apparel on the generated three-dimensional avatar of the user. As used herein, the term "augmenting" refers to the virtual wearing of the apparel on the three-dimensional avatar of the user, giving the user a real feel of the apparel and a sense of the real experience of the correct fitting of the apparel worn on the body. For instance, augmenting a shirt on the model of the user reflects how the user would look actually wearing the selected shirt. The augmentation of the selected at least one apparel on the generated three-dimensional avatar of the user includes the step of analysing, by the Processing Unit [104] using the trained model, one or more parameters associated with the selected at least one apparel. The one or more parameters associated with the selected at least one apparel include, but are not limited to, size, fabric, pattern, type, color, and measurement of the apparel. Next, the method includes creating, by the Processing Unit [104], at least one three-dimensional apparel for the three-dimensional avatar of the user based on the analysis of the one or more parameters and the set of data. Finally, the method includes augmenting, by the Processing Unit [104], the at least one three-dimensional apparel on the generated three-dimensional avatar of the user. The at least one three-dimensional apparel is also referred to as the at least one apparel selected by the user.

At step [214], the method includes displaying, by the Processing Unit [104] via a Display Unit [106], the generated three-dimensional avatar of the user with the selected at least one apparel in an augmented reality environment. Here, the three-dimensional avatar of the user with the selected at least one apparel is displayed to the user with a plurality of apparel fitting parameters. The plurality of apparel fitting parameters includes the degree of looseness and tightness of the apparel at one or more body points of the three-dimensional avatar of the user. In an embodiment of the present disclosure, the display of the three-dimensional avatar of the user with the selected at least one apparel corresponds to a display of the virtual twin (virtual avatar) of the user in a virtual fitting room, where the user can get a real feel of how the user would look in the real world. In a non-limiting embodiment, the three-dimensional avatar of the user with the selected at least one apparel is displayed to the user in a plurality of orientations based on user input gestures. The plurality of orientations corresponds to a 360-degree view of the avatar wearing the selected apparel. The user input gestures include, but are not limited to, scroll, zoom in, zoom out, pinch in, pinch out, rotate, resize, snapshot, vertical view, horizontal view, and 3D view. For instance, the user can check the apparel fitting from various angles using the user input gestures. In an exemplary embodiment, a three-dimensional avatar of user A wearing a blue dress is displayed along with parameters indicating that the dress is loose at the shoulders and body-fit at the stomach of user A.
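The apparel fitting parameters described above (degree of looseness and tightness at body points) can be illustrated by comparing garment and body measurements at each shared body point. The ease threshold and classification labels below are illustrative assumptions; the disclosure itself uses a trained model and dynamics simulation to determine fit.

```python
def fit_parameters(body_measurements, garment_measurements, slack=2.0):
    """Classify fit at each body point shared by both measurement dicts (cm).

    Positive delta means the garment is larger than the body (loose);
    negative delta means it is smaller (tight). The `slack` threshold is an
    illustrative ease allowance, not a value taken from the disclosure.
    """
    report = {}
    for point in body_measurements.keys() & garment_measurements.keys():
        delta = garment_measurements[point] - body_measurements[point]
        if delta > slack:
            report[point] = ("loose", delta)
        elif delta < 0:
            report[point] = ("tight", delta)
        else:
            report[point] = ("fit", delta)
    return report
```

For the blue-dress example above, a garment 4 cm larger at the shoulders and matching at the stomach would be reported as loose at the shoulders and body-fit at the stomach.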

In an exemplary embodiment, user A provides inputs in the form of an image along with body dimensions, and "T-shirt" as the apparel preference. This set of data of user A is analyzed, and a three-dimensional avatar of user A is generated based on the analysis. User A then provides an input to select a yellow T-shirt from a collection of T-shirts. The selected yellow T-shirt is augmented on the generated three-dimensional avatar of user A, and the avatar is displayed wearing the selected yellow T-shirt. The generated three-dimensional avatar is identical to the appearance of the user.

In an exemplary embodiment, the method includes the step of creating a personalised avatar, a virtual lookalike or digital twin of the user. The personalised avatar has the same face, skin colour, body measurements, and dimensions as the user. Next, the method includes the step of creating a three-dimensional apparel that is made to the size, style, and pattern of the virtual apparel selected by the user. Next, the personalised avatar wears the created three-dimensional apparel. More particularly, the created three-dimensional apparel is augmented on the personalised virtual avatar. Next, the present system, using the augmented reality-based model, displays the fitting of the created three-dimensional apparel on the virtual twin. The method terminates at step [216].

FIG. 3 illustrates a method flow diagram [300] for creating a three-dimensional apparel for a three-dimensional avatar of the user, in accordance with an exemplary embodiment of the present disclosure. As shown in Fig. 3, the method begins at step [302].

At step [304], the method includes analysing, by the Processing Unit [104] using the trained model [104a], one or more parameters associated with the selected at least one apparel. The one or more parameters associated with the selected at least one apparel include, but are not limited to, size, fabric, pattern, type, color, and measurement of the apparel.

At step [306], the method includes creating, by the Processing Unit [104], at least one three-dimensional apparel for the three-dimensional avatar of the user based on the analysis of the one or more parameters and the set of data. The at least one three-dimensional apparel is created as a real-alike three-dimensional replica of the apparel.

At step [308], the method includes augmenting, by the Processing Unit [104], the at least one three-dimensional apparel on the generated three-dimensional avatar of the user. The method terminates at step [310].

FIG. 4A is a diagram showing an example of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, according to an embodiment of the present disclosure. Once the three-dimensional avatar of the user is created by the system, the user may then choose to try on any of the virtual garments present on the platform or in the user's wardrobe. Each virtual garment is assigned various attributes such as fabric type, color, texture, and design details. When a user selects a virtual garment to try on, the system generates the three-dimensional apparel for the three-dimensional avatar of the user. In a non-limiting embodiment, the garment simulation algorithm uses physics-based modelling to drape the selected virtual garment onto the 3D avatar realistically. The algorithm considers the fabric properties, such as elasticity, stiffness, and collision detection, to ensure that the garment behaves like real clothing. The user may interact with the interface to adjust the position and fit of the garment on the avatar. The user can use input gestures such as zoom in, zoom out, pinch in, and pinch out to resize, move, and rotate the garment to see how the garment would look on the user in real life.
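The physics-based draping described above can be hinted at with one small ingredient of a cloth solver: a position-based spring-relaxation step that pulls garment edge lengths toward their rest lengths. Gravity, collision detection against the body, and fabric-specific stiffness are omitted here, and all parameter values are illustrative, so this is a sketch of the idea rather than the disclosed algorithm.

```python
import numpy as np

def relax_cloth(vertices, springs, rest_lengths, iterations=50, stiffness=0.5):
    """Iteratively relax spring constraints on a cloth mesh.

    `vertices` is an (N, 3) float array; `springs` is a list of (i, j) vertex
    index pairs; `rest_lengths` gives the target length of each spring.
    Each iteration moves both endpoints of a spring toward its rest length,
    in the style of position-based dynamics.
    """
    v = vertices.copy()
    for _ in range(iterations):
        for (i, j), rest in zip(springs, rest_lengths):
            d = v[j] - v[i]
            length = np.linalg.norm(d)
            if length == 0:
                continue  # degenerate spring; skip to avoid division by zero
            # Split the correction equally between the two endpoints.
            correction = stiffness * 0.5 * (length - rest) * d / length
            v[i] += correction
            v[j] -= correction
    return v
```

A real garment solver layers many such constraints (stretch, shear, bend) with collision response against the avatar mesh, which is what makes the draped cloth wrinkle and hang believably.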

As shown in FIG. 4A, the user can select any of the available apparels to try on the three-dimensional avatar. After selecting the apparel, the user can see how the apparel will look on the user. For instance, the user may try the Fit View option to check the apparel fitting parameters when a particular size of the apparel is selected. Accordingly, the user can choose the appropriate size based on the display of the apparel on the three-dimensional avatar of the user. The user also gets an option of adding the selected apparel to the basket so that the same can be purchased at any point of time. The system also displays text on the interface in the event the apparel is loose or tight at any body part. As also shown in Fig. 4A, the system displays the text "THAT'S TOO LOOSE, TRY A DIFFERENT SIZE" in the event the apparel seems loose on a body part of the avatar. In an embodiment of the present disclosure, displaying the apparel fitting parameters and suggesting the size also increases the sale of the apparel.

FIG. 4B is a diagram showing another example of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, according to an embodiment of the present disclosure. In Fig. 4B, the back of the three-dimensional avatar of the user is displayed with a top and jeans. The user can check the fitting of the apparel from any side of the body, such as the front side and the back side. Accordingly, the user can check if the jeans are tight or loose at any body point and may therefore decide to try a different size of the apparel. In an example, the jeans may be shown as tight at the hips but fine at the waist. Accordingly, the user can decide on other sizes or other brands of apparel.

FIG. 4C is a diagram showing yet another example of a three-dimensional avatar displayed with three-dimensional apparel in an augmented reality environment, according to an embodiment of the present disclosure. As shown in Fig. 4C, the system has the capability to highlight the fitting parameters by adding wrinkles and creases to the apparel. If a cloth is very loose, it will have more creases; however, if the cloth is very tight, it will be stretched against the body and hence will have few or no creases.

In a non-limiting embodiment, fitting of the apparel on the three-dimensional avatar may be highlighted by a text, a pop-up, a notification, annotations, a specific color, and the like. In an example, when the clothing is too tight on the body of the avatar, that section becomes a darker colour. So, if the body is of size medium, the cloth "worn" by the avatar is a size S, and the top is tight at the chest of the avatar, then that section of the avatar is highlighted in a darker colour. Similarly, if the cloth is loose on the avatar body, that section of the cloth changes to a lighter colour to identify that it is a loose fitting.
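The colour-based highlighting above (darker where tight, lighter where loose) might be sketched as a mapping from the fit delta at a body point to an adjusted section colour. The scaling factors and the `max_delta` normalization below are illustrative assumptions rather than values from the disclosure.

```python
def highlight_color(base_rgb, delta_cm, max_delta=6.0):
    """Adjust a garment section's colour based on its fit delta.

    `delta_cm` is garment measurement minus body measurement at that section:
    negative (tight) darkens the colour; positive (loose) lightens it by
    blending toward white. The delta is clamped to [-max_delta, max_delta].
    """
    r, g, b = base_rgb
    factor = max(-1.0, min(1.0, delta_cm / max_delta))
    if factor < 0:
        # Tight: scale the colour toward black (down to 50% at the clamp).
        scale = 1.0 + 0.5 * factor
        return tuple(int(c * scale) for c in (r, g, b))
    # Loose: blend toward white (up to 50% at the clamp).
    return tuple(int(c + (255 - c) * 0.5 * factor) for c in (r, g, b))
```

Rendering each garment section with `highlight_color` would yield the heatmap-like visualization described later in the advantages, with colour intensity tracking the degree of looseness or tightness.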

As evident from the above disclosure, the present solution provides significant technical advancement over the existing solutions. The present disclosure is advanced over the techniques present in the prior art in view of the following aspects.

a. A truly interactive, true-lookalike 3D virtual twin of the human body is created. The user may visualize, in the application, the twin with the same face and the same body shape and dimensions.

b. Creating a 3-dimensional body with the same structure, face, and skin tone involves complex AI logic with deep CNN capabilities and the creation of a multi-polygon mesh-based model; simple 3-dimensional space management and computation is not sufficient.

c. Checking of the apparel fitment is performed with an actual 3-dimensional cloth virtually worn on the avatar of the user.

d. Cloth is not a simple 3-dimensional solid object with definite X, Y, and Z dimensions. There are many nuances in the fitment and behavior of the cloth on the actual body based on the design, fabric type, stitch pattern, yarn direction, etc., which define the physics of the clothes. In an exemplary embodiment, a 42" cotton shirt and a 42" woollen garment may have the same absolute dimensions but behave and fit completely differently on the same human body.

e. In the present invention, once the clothes are put on the human body, the model may be rotated and viewed from multiple angles or directions to see how the cloth looks on the personalized body of the user and how it is fitting or loose.

f. The trained model checks the looseness or tightness at numerous body points through computational dynamics simulation and visualizes a complete profile of garment fitting in a heatmap form, where variable colour coding and intensity indicate the degree of looseness, tightness, or perfect fit. No such features are available on other existing platforms.

With modern technology, the advantage is that the virtual twin of the individual will 'wear' the dress and provide a visual representation of the fitting of any clothing on the individual's body.

• The 3D virtual twin created has the exact lookalike face, eyes, and hair of the consumer. This makes it easier for consumers to visualize their body and face with the clothes.

• The 3D virtual twin has personalized body measurements. This shows the customer the exact body shape, not just catalogue mannequin sizes, and hence helps the customer make better buying decisions.

• The 3D virtual twin has skin colour and tone, amongst other parameters, the same as the consumer's real skin colour; this makes it easier to select the colours and patterns of clothes that suit the customer.

• By having a 3-dimensional version of your body made, the clothes tried on the body in real time may now show how a piece of clothing looks exactly on your body. For example, if Person A and Person B both wear a dress size medium, they may still have vastly different chest, waist, and hip measurements, thus having a different fit/look while wearing said dress. In this example, for Person A the dress looks fitted at the chest and loose at the hips, and for Person B the dress looks loose at the chest and fitted at the hips. These minute details impact the decision of the shopper when buying clothes.

• Clothes are available in 3-dimensional form, generated using artificial intelligence with a real look and feel (with customized fabric properties and physics), which makes the cloth behave exactly the way it should in the real world on a human body.

• The customer may select clothes to take to the "fitting room" (a mix of a 2-dimensional and 3-dimensional environment) where the customer's created virtual twin is present.

• In the Fitting room, the customer's virtual twin will "Try On" the clothes chosen.

• The clothes seen here are viewed in 3 dimensions. The clothes are not put on the virtual twin as an image; rather, the clothes have physics and dynamics to them, and they are morphed onto the 3D virtual twin.

• Viewing the clothes on the 3D virtual twin may help the consumer visualize the clothing style on the body and provides the user with wider options to experiment with and create a personal style. This helps online shopping become more accurate and enjoyable for the customer as they may be certain of the purchases made.

• This enables the clothes to show the correct fitting - i.e., stretching on the Virtual Twin body when the size of the clothes is small, or the fitting of the clothing item is tight.

• Similarly, the 3-dimensional clothes look loose and dynamic when the item of clothing is loosely fitted on the Virtual Twin body.

• Additionally, the clothing appears loose or tight on different parts of the body, as per the customer's body measurements inputted. This means that the clothing may look tight-fitted from the chest, comfortably fitted in the waist and loosely fitted on the hips, all at the same time, depending on the body measurement and clothes size.

• Reduced Returns - due to consumers purchasing after having tried on the clothes on their personal Virtual Twin with body measurements. Having a visual representation of the clothes on a 3-dimensional body of the customer will result in a reduction in the number of returns per customer, as research suggests one of the main causes of returns is the incorrect fitting of clothes. This provides consumers with a clearer idea of the fit of the clothes on the body (usually the main reason for returns) and, overall, more confidence in the purchasing decision for online shopping.

• Also, the user will maintain virtual apparel assets for a lifetime in the virtual wardrobe, which the user may wear anytime virtually, mix-match, interact with, capture, and share, giving a truly immersive experience.

While considerable emphasis has been placed on the disclosed embodiments, it will be appreciated that many embodiments may be made and that many changes may be made to the embodiments without departing from the principles of the present disclosure. These and other changes in the embodiments of the present disclosure will be apparent to those skilled in the art, whereby it is to be understood that the foregoing descriptive matter is illustrative and non-limiting.