

Title:
VIRTUALLY TRYING CLOTHS & ACCESSORIES ON BODY MODEL
Document Type and Number:
WIPO Patent Application WO/2020/104990
Kind Code:
A1
Abstract:
A method for generating a body model of a person wearing a fit cloth includes: receiving a user input related to the person; processing the user input to determine the essential body shape/size information that is most suitable based on the user input; procuring the body model from a body model database or generating it at run time, wherein the body model can be in any orientation while being viewed on a display device; procuring a fit cloth image from a fit cloth database or generating it at run time based on the orientation of the body model, wherein the fit cloth image is generated by processing a 3D model of the fit cloth and a texture, the texture being prepared using a cloth image; and combining the body model of the person and the image of the fit cloth to show the body model of the person wearing the fit cloth.

Inventors:
VATS NITIN (IN)
Application Number:
PCT/IB2019/060033
Publication Date:
May 28, 2020
Filing Date:
November 21, 2019
Assignee:
VATS NITIN (IN)
International Classes:
G06T13/40; G06T19/20
Foreign References:
US20170352091A12017-12-07
US20110298897A12011-12-08
Attorney, Agent or Firm:
SINGHAL, Gaurav (IN)
Claims:
I Claim:

1. A method for generating a body model of a person wearing a fit cloth comprising:

- receiving a user input related to a person, wherein the user input comprises either one or more images of the person from which body details can be extracted, or body-related information comprising at least one of body measurement information of the person relating to the size of various body parts of the person, a skin tone of the person, a face type of the person, a hairstyle of the person, or a weight, or a combination thereof;

- processing the user input to determine the essential body shape/size information that is most suitable based on the user input;

- procuring the body model from a body model database or generating it at run time, wherein the body model can be in any orientation while being viewed on a display device;

- procuring a fit cloth image from a fit cloth database or generating it at run time based on the orientation of the body model, wherein the fit cloth image is generated from processing of a 3D model of the fit cloth and a texture, the texture being prepared using a cloth image;

- combining the body model of the person and the image of the fit cloth to show the body model of the person wearing the fit cloth,

wherein the body measurement information comprises at least one of the height of the person, the weight of the person, the size of the hips of the person, the size of the bust or chest of the person, the size of cups, or a shape type of the person’s body, or a combination thereof.

2. The method according to the claim 1, wherein the fit cloth is adapted to provide an impression of a loose fitting, a tight fitting, or a perfect fitting while the body model of the person is shown wearing the fit cloth.

3. The method according to any of the claims 1 or 2, wherein the fit cloth image is generated in more than one shape/size using the same texture.

4. The method according to any of the claims 1 to 3, wherein the fit clothes of different shapes/sizes are adapted to be generated using the same set of textures for all shapes/sizes and one or more computer graphics meshes.

5. The method according to any of the claims 1 to 3, wherein the fit clothes of different shapes/sizes are adapted to be generated by making the same UV map for the clothes of different shapes/sizes as a texture while using different computer graphics meshes for the fit clothes of different shapes/sizes.

6. The method according to the claim 5, wherein the fit cloth is adapted to be generated using one computer graphics cloth or using multiple sets of computer graphics cloth pieces, and virtually stitching the pieces into the fit cloth.

7. The method according to any of the claims 1 to 6, wherein the image of the body model with the fit cloth is shown in a static or dynamic environment, or in an AR/VR or mixed-reality-based environment.

8. The method according to any of the claims 1 to 7 comprising:

- receiving and processing a beautification input, and based on the processing, applying makeup onto a face of the body model, or changing a hairstyle of the body model.

9. The method according to any of the claims 1 to 8 comprising processing the image of the person to identify at least the body measurement information of the person, the skin tone, the face type of the person, the hairstyle of the person, or the weight, or a combination thereof.

10. The method according to any of the claims 1 to 9 comprising:

- receiving and processing a fit type input, wherein the fit type input is related to the fitting of the cloth on the user, which can vary from a tight fit to a loose fit on the body of the person; and

- based on the processing, procuring the fit cloth from a fit cloth database based on the orientation of the body model and the fit type user input, wherein the fit cloth is generated by processing an image of a predefined cloth based on the geometry of the body model; and

- combining the body model of the person and the image of the fit cloth to show the body model of the person wearing the fit cloth, with the appropriate fit as requested by the person.

11. The method according to the claim 10, wherein the predefined image of the cloth is adapted to be collected from a photoshoot of a person or a mannequin.

12. The method according to any of the claims 1 to 11, wherein the cloth has a level of transparency, and accordingly the image of the fit cloth has an appropriate level of transparency, and wherein the body model of the person and the image of the fit cloth are combined to show the body model of the person wearing the fit cloth with the appropriate level of transparency of the cloth.

13. The method according to any of the claims 1 to 12 comprising:

- receiving a dress input, wherein the dress input relates to wearing multiple clothes, each cloth referring to a different category; and

- procuring more than one fit cloth image, relevant to each of the clothes in its category; and

- combining the body model of the person and the images of the fit clothes, in a predefined order, to show the body model of the person wearing the fit clothes in layers.

14. The method according to any of the claims 1 to 13, comprising procuring the body model in a particular posture, and combining the body model in the particular posture and the image of the fit cloth to show the body model of the person wearing the fit cloth in the particular posture.

15. The method according to any of the claims 1 to 14 comprising:

- receiving and processing a fit suggestion input related to the most suitable dress and/or makeup for the person;

- processing the fit suggestion input based on one or more parameters including the looks of the user, a body shape or size, the amount of money the person wants to spend, a geography to which the user belongs, one or more events and timings, weather-related information, information related to previous buying habits, a forecast based on mathematical tools, or a recommendation from another person; and

- based on the processing, making a suggestion, either showing one or more dresses and/or makeup, or showing the body model wearing the suggested dress/es and/or makeup/s.

16. The method according to any of the claims 1 to 15 is implementable as a trial room on a web portal, or a mobile application, or a desktop application.

17. The method according to the claim 16, wherein the trial room comprises a virtual mirror, and a reflection of the body model of the person wearing the fit cloth is shown in the virtual mirror.

18. The method according to any of the claims 16 or 17, wherein the trial room is adapted to run on a separate server.

19. The method according to any of the claims 1 to 18, wherein the body model of the person wearing the fit cloth is shown along with a color spectrum or texture spectrum over the body model wearing the fit cloth, which is representative of a tightening or loosening of the fabric of the fit cloth on the body.

20. The method according to any of the claims 1 to 19 comprising receiving and processing a sharing input for sharing the body model of the person wearing the fit cloth onto a social media platform, and accordingly sending and posting the body model of the person wearing the fit cloth onto the social media platform.

Description:
VIRTUALLY TRYING CLOTHS & ACCESSORIES ON BODY MODEL

FIELD OF INVENTION

The present invention relates generally to the field of computer vision processing, particularly to a method and system for generating a realistic body model of a user in his/her shape, size, skin tone, and similar features, and for virtually wearing clothes and accessories on that body model.

BACKGROUND

Until now, in online shopping, clothes have been shown in an unworn state. A user cannot perceive, just by seeing the cloth, how he/she is going to look in that particular cloth.

To provide some of this experience, online sellers have started showing the cloth being worn by a model. However, the model has particular body measurements and curves, which may be quite different from the user's body measurements and curves. It is generally difficult for a user to judge his/her size for a particular sized cloth just by seeing a model wearing a predefined size of that cloth.

Also, a particular sized cloth looks quite different on each person, and whether the user likes the cloth after wearing it depends on the user's own perception. Often a user selects multiple dresses to try on in order to decide whether he likes wearing them or not. It commonly happens that, even though the size and fitting are good, the user still rejects a particular cloth because he does not like wearing it. Such an experience cannot be provided by any model wearing that particular size of cloth or any other size.

One of the reasons for not liking a cloth after wearing it is that each cloth has a different contrast or match with a particular skin tone. Even the facial features change the feel of wearing the cloth. Also, each body has different curves, and the colour pattern differs on a particular curve. This change in colour pattern over the different curves of the user's body also changes whether the user likes or dislikes the cloth. The user can best judge a cloth when he/she can see it being worn on a body with similar measurements, curves, skin tone, and facial features.

Further, the user would like to replicate this wearing experience across different types of clothes and at different stores.

To give a user a feel of wearing clothes, a few methods are known:

• A 3D model of the person is generated, which is a time-consuming affair. After generating the 3D model, 3D clothes are put on it, which gives an unrealistic feel.

• The user uploads a photo, and an image of a cloth is then dragged onto the photo. However, this creates problems: zooming the cloth in and out on the photo, inaccurate placement of the cloth, and the general impossibility of layering multiple clothes onto the photo. Additionally, this method is uncomfortable for the user on a non-touch-screen medium.

Making a system able to generate body models of users is one of the most complex processes. Users' bodies vary in more than 200,000 shapes and sizes. For example, a woman's body varies in height, weight, and bust type, such as asymmetric, bell shape, east-west, side set, slender, teardrop, and round, and also in bust size and cup size. It also varies in hip shape and size, as well as in overall body shape, such as hourglass, bottom hourglass, top hourglass, spoon, triangle, inverted triangle, and rectangle. It also varies in torso, leg, and arm length, which needs to be considered while producing body models of users.

At present, the body model is generated either by maintaining a database of pre-generated body models of all shapes and sizes, or by making a few 3D models whose shape can be changed at run time; in some cases the model is generated in parts, and the different parts (torso, legs, hands, etc.) are added together to generate the model. It is difficult to guess the shapes and sizes of all humans, or to find a function that defines the change from one parameter to another, so experiments with real humans are required to obtain body sizes. Also, computer-generated colours cannot match real skin, so real photographs, or a mixed approach of real photos and computer colouring, can be used to generate the skin texture of users' virtual models.

It is not practically possible to obtain elaborate and detailed body data for every shape and size, even if complete scans of the bodies are made. Techniques such as parametric modelling of the three-dimensional body permit robust reconstruction of complete three-dimensional body shapes even from incomplete data.

However, scanning techniques result in gaps and missing regions due to occlusion and inaccessibility of certain body areas. To date, various parametric modelling techniques have been developed for a wide variety of three-dimensional body processing tasks. These techniques can develop a range of identity-dependent body shapes and deform them naturally into various poses.

It is required to clarify similarities and differences between existing methods for understanding their relationships and also to consider the variety of three-dimensional body shape processing applications that benefit from parametric modelling.

Another way to make a body model is to photo-shoot humans of different shapes and sizes from all possible angles and in different orientations, and hence to build a database of women of all shapes/sizes.

Another way is to make a computer-generated model through computer graphics, which can be done in both two dimensions and three dimensions.

It is pertinent to note that 3D data can be easily acquired through the user's personal mobile, or through any third-party online resource. Certain areas of the body have different requirements, and these are met through 3D modelling by deformations in those specific areas. Also, the current focus is moving to user-driven animation in real time. Modelling of the human body is performed by way of 3D geometry and appearance. However, modelling the human body in 3D is still a big endeavour. The biggest limitation at present is that modelling can only be done for the aspects of human shape which can be captured. This limitation persists because current technologies are not equipped to capture whole-body real-time range data. Further, such data capture has limitations in real-life scenarios, such as capturing dressed people in natural environments.

Other challenges in capturing three-dimensional data of a person are the presence of holes, like nostrils, and various other unordered points. An additional challenge is to capture the human body in a single snapshot, while the shape of the human body changes with certain instantaneous activities, like motion and breathing, and certain continuous activities, like ageing.

A three-dimensional body model represents a real-life body using a collection of points in three-dimensional space, connected by various geometric entities such as triangles, lines, curved surfaces, etc. Being a collection of data, 3D body models can be created manually, using modelling techniques, or by simply scanning. Their surfaces may be further defined with texture mapping.

There are three popular ways to represent a model:

• Polygonal modelling, where points in three-dimensional space, called vertices, are connected by line segments to form a polygon mesh. The vast majority of 3D models today are built as textured polygonal models, because they are flexible and because computers can render them quickly. However, polygons are planar and can only approximate curved surfaces by using many polygons (a minimal mesh data-structure sketch is given after this list).

• Curve modelling, where surfaces are defined by curves, which are influenced by weighted control points. The curve follows the points; increasing the weight of a point pulls the curve closer to that point. Curve types include non-uniform rational B-splines (NURBS), splines, patches, and geometric primitives.

• Digital sculpting, which is of three types:

• Displacement, which uses a dense model, often generated by subdivision surfaces of a polygon control mesh, and stores new locations for the vertex positions through use of an image map that stores the adjusted locations.

• Volumetric, which is loosely based on voxels, has similar capabilities as displacement but does not suffer from polygon stretching when there are not enough polygons in a region to achieve a deformation.

• Dynamic tessellation, which is similar to voxel sculpting but divides the surface using triangulation to maintain a smooth surface and allow finer details. These methods allow for very artistic exploration, as a new topology is created over the model once its form, and possibly details, have been sculpted. The new mesh will usually have the original high-resolution mesh information transferred into displacement data or normal-map data if it is intended for a game engine.
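
By way of illustration only, the following is a minimal sketch of the polygonal (mesh) representation described above: vertices in three-dimensional space connected into triangular faces, with per-face normals computed from the geometry. The arrays and function names are illustrative, assume NumPy is available, and are not part of the claimed method.

```python
import numpy as np

# A minimal triangle-mesh representation: vertices in 3D space and
# triangular faces given as indices into the vertex array.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
], dtype=float)

faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
], dtype=int)

def face_normals(vertices, faces):
    """Compute a unit normal for each triangular face."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    n = np.cross(b - a, c - a)                       # un-normalised face normals
    return n / np.linalg.norm(n, axis=1, keepdims=True)

if __name__ == "__main__":
    print(face_normals(vertices, faces))
```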

To achieve seamless image stitching for a three-dimensional human model, a Markov Random Field model is built with respect to the colour images and the triangular meshes of the surface. To match the colour content at the boundary of adjacent meshes, a 2D translation coordinate of the colour image, as an adaptive iterative factor, is introduced into the optimization, on the basis of α-expansion optimization of this Markov Random Field model. This takes care of the misalignment of adjacent colour images caused by inaccuracy of the depth data and multi-view mis-registration. There can be small illumination variations between different colours, which are corrected using Poisson blending of a composite vector field in the gradient domain. The model’s surface is parameterized and projected onto a 2D plane to correct for blank regions. After that, the K-nearest-neighbour algorithm is applied to fill up the blank regions with texture content.
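
By way of illustration only, a minimal sketch of the K-nearest-neighbour filling step mentioned above is given below, assuming SciPy is available: each blank texel takes the mean colour of its k nearest known texels. All names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_fill(texture, mask, k=5):
    """Fill blank texels (mask == True) with the mean colour of the
    k nearest known texels (mask == False). `texture` is (H, W, 3)."""
    known_yx = np.argwhere(~mask)
    blank_yx = np.argwhere(mask)
    if len(blank_yx) == 0 or len(known_yx) == 0:
        return texture
    tree = cKDTree(known_yx)
    _, idx = tree.query(blank_yx, k=k)          # indices of nearest known texels
    filled = texture.copy()
    neighbour_colours = texture[known_yx[idx][..., 0], known_yx[idx][..., 1]]
    filled[blank_yx[:, 0], blank_yx[:, 1]] = neighbour_colours.mean(axis=1)
    return filled
```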

A series of bones representing the skeletal structure is constructed to animate a 3D model in different poses. For instance, in a character there may be a group of back bones, a spine, and head bones. Digital animation software is used to transform their position, rotation, etc.

Dynamic 3D clothing is used for dressing 3D characters. Most models of cloth are based on "particles" of mass connected in some manner of mesh. Newtonian physics is used to model each particle through the use of a physics engine. There exist various methods for making 3D clothes, which include:

• A geometric technique which focuses on approximating the look of cloth by treating cloth like a collection of cables and using Hyperbolic cosine curves. Because of this, it is not suitable for dynamic models but works very well for stationary or single-frame renders.

• Another technique treats cloth like a grid work of particles connected to each other by springs. Whereas the geometric approach accounts for none of the inherent stretch of a woven material, this physical model accounts for stretch (tension), stiffness, and weight (a minimal simulation sketch of this spring-based approach is given after this list).

• A particle technique which takes the physical methods a step further and supposes that we have a network of particles interacting directly. Rather than springs, the energy interactions of the particles are used to determine the cloth’s shape.
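
By way of illustration only, the following is a minimal sketch of the spring-based cloth model referenced in the list above: a rectangular grid of particles connected by structural springs, integrated with simple Verlet steps under gravity, with the top row pinned. The constants and names are illustrative and assume NumPy; a production cloth solver would add shear/bend springs, damping, and collision handling.

```python
import numpy as np

def simulate_cloth(nx=10, ny=10, spacing=0.1, steps=100, dt=0.01,
                   stiffness=500.0, gravity=(0.0, -9.81, 0.0)):
    """Tiny mass-spring cloth: a grid of particles joined by structural
    springs, with the whole top row pinned in place."""
    xs, ys = np.meshgrid(np.arange(nx), np.arange(ny))
    pos = np.stack([xs * spacing, -ys * spacing, np.zeros_like(xs, float)], axis=-1)
    pos = pos.reshape(-1, 3).astype(float)
    prev = pos.copy()                           # previous positions for Verlet
    pinned = np.arange(nx)                      # indices of the top row

    # Structural springs: horizontal and vertical neighbours.
    idx = lambda i, j: j * nx + i
    springs = [(idx(i, j), idx(i + 1, j)) for j in range(ny) for i in range(nx - 1)]
    springs += [(idx(i, j), idx(i, j + 1)) for j in range(ny - 1) for i in range(nx)]
    springs = np.array(springs)
    rest = np.linalg.norm(pos[springs[:, 0]] - pos[springs[:, 1]], axis=1)

    g = np.array(gravity)
    for _ in range(steps):
        force = np.tile(g, (len(pos), 1))
        d = pos[springs[:, 1]] - pos[springs[:, 0]]
        length = np.linalg.norm(d, axis=1, keepdims=True)
        f = stiffness * (length - rest[:, None]) * d / np.maximum(length, 1e-9)
        np.add.at(force, springs[:, 0], f)      # spring pulls the first endpoint
        np.add.at(force, springs[:, 1], -f)     # and pulls back the second
        new = 2 * pos - prev + force * dt * dt  # Verlet integration (unit mass)
        new[pinned] = pos[pinned]               # keep pinned particles fixed
        prev, pos = pos, new
    return pos

if __name__ == "__main__":
    print(simulate_cloth()[:3])
```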

A texture map is an image applied to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. They may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles.

Texture maps may be acquired by scanning/digital photography, authored in image manipulation software such as GIMP or Photoshop, or painted onto 3D surfaces directly in a 3D paint tool.
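
By way of illustration only, a minimal sketch of how a texture map is looked up at UV coordinates (nearest-texel sampling) is given below; the names are illustrative and are not tied to any specific tool mentioned above.

```python
import numpy as np

def sample_texture(texture, uv):
    """Nearest-texel lookup of a texture image at UV coordinates in [0, 1].
    `texture` is an (H, W, 3) array; `uv` is an (N, 2) array of (u, v) pairs."""
    h, w = texture.shape[:2]
    uv = np.clip(np.asarray(uv, dtype=float), 0.0, 1.0)
    x = np.minimum((uv[:, 0] * w).astype(int), w - 1)           # u -> column
    y = np.minimum(((1.0 - uv[:, 1]) * h).astype(int), h - 1)   # v -> row (v points up)
    return texture[y, x]

if __name__ == "__main__":
    tex = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
    print(sample_texture(tex, [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]))
```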

The other way of making 3D clothes is based on real-world sewing patterns, which is the standard way clothing is made in the fashion industry. The method is as follows:

Trace a blueprint and draw a pattern, then map the sewing blueprint image as a texture, apply the fabric to the pattern, and edit the texture so that the blueprint fits inside the pattern shape. Start drawing over the blueprint patterns, placing just one point on each sharp corner until the pattern is closed. Try to keep the patterns as simple as possible, and arrange them in a way that is easy to understand. At this point the garments are sewn the same way fashion designers do in the real world. There are two types of sewing: one allows sewing from one segment line (edge) to another, and free sewing allows sewing freely from any one part of the pattern to another.

Taking measurements helps when modifying the garment patterns to make them fit the character's body. This includes measurements for the waist, hip, thigh, knee, calf, ankle, bust, neck base, armhole, elbow, wrist, etc., followed by using a surface tape-measure tool to get the pants and shirt lengths.

OBJECT OF THE INVENTION

The object of the invention is to provide a realistic and well-fitting virtual wearing experience to a user.

SUMMARY OF THE INVENTION

According to one embodiment, a method for generating a body model of a person wearing a fit cloth comprises:

- receiving a user input related to a person, wherein the user input comprises either one or more images of the person from which body details can be extracted, or body-related information comprising at least one of body measurement information of the person relating to the size of various body parts of the person, a skin tone of the person, a face type of the person, a hairstyle of the person, or a weight, or a combination thereof;

- processing the user input to determine the essential body shape/size information that is most suitable based on the user input;

- procuring the body model from a body model database or generating it at run time, wherein the body model can be in any orientation while being viewed on a display device;

- procuring a fit cloth image from a fit cloth database or generating it at run time based on the orientation of the body model, wherein the fit cloth image is generated from processing of a 3D model of the fit cloth and a texture, the texture being prepared using a cloth image;

- combining the body model of the person and the image of the fit cloth to show the body model of the person wearing the fit cloth,

wherein the body measurement information comprises at least one of the height of the person, the weight of the person, the size of the hips of the person, the size of the bust or chest of the person, the size of cups, or a shape type of the person’s body, or a combination thereof.

According to another embodiment of the method, the fit cloth is adapted to provide an impression of a loose fitting, a tight fitting, or a perfect fitting while the body model of the person is shown wearing the fit cloth.

According to yet another embodiment of the method, the fit cloth image is generated in more than one shape/size using the same texture.

According to one embodiment of the method, the fit clothes of different shapes/sizes are adapted to be generated using the same set of textures for all shapes/sizes and one or more computer graphics meshes.

According to another embodiment of the method, the fit clothes of different shapes/sizes are adapted to be generated by making the same UV map for the clothes of different shapes/sizes as a texture while using different computer graphics meshes for the fit clothes of different shapes/sizes.

According to yet another embodiment of the method, the fit cloth is adapted to be generated using one computer graphics cloth or using multiple sets of computer graphics cloth pieces, and virtually stitching the pieces into the fit cloth.

According to one embodiment of the method, the image of the body model with the fit cloth is shown in a static or dynamic environment, or in an AR/VR or mixed-reality-based environment.

According to another embodiment of the method, the method comprising:

- receiving and processing a beautification input, and based on the processing, applying makeup onto a face of the body model, or changing a hairstyle of the body model.

According to yet another embodiment of the method, the method comprises processing the image of the person to identify at least the body measurement information of the person, the skin tone, the face type of the person, the hairstyle of the person, or the weight, or a combination thereof.

According to one embodiment of the method, the method comprising:

- receiving and processing a fit type input, wherein the fit type input is related to the fitting of the cloth on the user, which can vary from a tight fit to a loose fit on the body of the person; and

- based on processing, procuring the fit cloth from a fit cloth database based on the orientation of the body model and the fit type user input, wherein the fit cloth is generated by processing an image of a predefined cloth based on the geometry of the body model; and

- combining the body model of the person and the image of the fit cloth to show the body model of the person wearing the fit cloth, with the appropriate fit as requested by the person.

According to another embodiment of the method, wherein the predefined image of the cloth is adapted to be collected from a photoshoot of a person or a mannequin.

According to yet another embodiment of the method, the cloth has a level of transparency, and accordingly the image of the fit cloth has an appropriate level of transparency, and the body model of the person and the image of the fit cloth are combined to show the body model of the person wearing the fit cloth with the appropriate level of transparency of the cloth.

According to one embodiment of the method, the method comprising:

- receiving a dress input, wherein the dress input relates to wearing multiple clothes, each cloth referring to a different category; and

- procuring more than one fit cloth image, relevant to each of the clothes in its category; and

- combining the body model of the person and the images of the fit clothes, in a predefined order, to show the body model of the person wearing the fit clothes in layers.

According to another embodiment of the method, the method comprises procuring the body model in a particular posture, and combining the body model in the particular posture and the image of the fit cloth to show the body model of the person wearing the fit cloth in the particular posture.

According to yet another embodiment of the method, the method comprising:

- receiving and processing a fit suggestion input related to the most suitable dress and/or makeup for the person;

- processing the fit suggestion input based on one or more parameters including the looks of the user, a body shape or size, the amount of money the person wants to spend, a geography to which the user belongs, one or more events and timings, weather-related information, information related to previous buying habits, a forecast based on mathematical tools, or a recommendation from another person; and

- based on the processing, making a suggestion, either showing one or more dresses and/or makeup, or showing the body model wearing the suggested dress/es and/or makeup/s.

According to one embodiment of the method, the method is implementable as a trial room on a web portal, or a mobile application, or a desktop application.

According to another embodiment of the method, the trial room comprises a virtual mirror, and a reflection of the body model of the person wearing the fit cloth is shown in the virtual mirror.

According to yet another embodiment of the method, wherein the trial room is adapted to run on a separate server.

According to one embodiment of the method, the body model of the person wearing the fit cloth is shown along with a color spectrum or texture spectrum over the body model wearing the fit cloth, which is representative of a tightening or loosening of the fabric of the fit cloth on the body.

According to another embodiment of the method, the method comprises receiving and processing a sharing input for sharing the body model of the person wearing the fit cloth onto a social media platform, and accordingly sending and posting the body model of the person wearing the fit cloth onto the social media platform.

BRIEF DESCRIPTION OF THE DRAWINGS

FIGS. 1(a) and 1(b) illustrate the choice of body model for different body shapes and sizes of the user.

FIGS. 2(a) and 2(b) illustrate selection of a cloth to be worn by the user's body model and fitting of the cloth on the user's body model.

FIGS. 3(a)-(g) illustrate wearing of other clothes, accessories, and makeup, for example the virtual wearing of spectacles, makeup, additional clothes, and shoes by the user on his/her body model.

FIG. 4 illustrates wearing of a dress by the user's body model at different angles and orientations.

FIGS. 5(a)-(d) illustrate the body model of a girl wearing a non-transparent cloth and a semi-transparent cloth, and the effect of sunlight and darkness on the cloth.

FIGS. 6(a)-(c) illustrate the body model of a girl wearing a right-fit, a tight-fit, and a loose-fit t-shirt, and FIGS. 6(a')-(c') illustrate the tightness of the clothes in the different fits by a color spectrum, for virtual understanding of the tightness of the cloth on the body.

FIGS. 7(a)-(c) illustrate the body model with different hair styling options.

FIGS. 8(a)-(e) illustrate the body model in different backgrounds, and in one embodiment with a background having a mirror.

FIG. 9 illustrates a display showing different selections for providing inputs to generate the body model.

FIGS. 10(a)-(e) illustrate the availability and working of the implementation of the invention on various types of devices.

FIGS. 11(a) and 11(b) illustrate the concept of changing the shape and size of an image.

FIGS. 12(a)-(d) illustrate a method for changing the shape of a virtual t-shirt in one embodiment.

FIG 13 illustrates a block diagram of the system implementing the invention.

FIG. 14(a) and FIG. 14(b) illustrate a block diagram of another embodiment of the system implementing the invention.

DETAILED DESCRIPTION

For the purpose of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the figures and specific language will be used to describe them. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Such alterations and further modifications in the illustrated system, and such further applications of the principles of the invention as would normally occur to those skilled in the art are to be construed as being within the scope of the present invention.

It will be understood by those skilled in the art that the foregoing general description and the following detailed description are exemplary and explanatory of the invention and are not intended to be restrictive thereof.

The terms "comprises", "comprising", or any other variations thereof, are intended to cover a non-exclusive inclusion, such that a process or method that comprises a list of steps does not include only those steps but may include other steps not expressly listed or inherent to such a process or method. Similarly, one or more sub-systems or elements or structures or components preceded by "comprises... a" does not, without more constraints, preclude the existence of other, sub-systems, elements, structures, components, additional sub-systems, additional elements, additional structures or additional components. Appearances of the phrase "in an embodiment", "in another embodiment" and similar language throughout this specification may, but not necessarily do, all refer to the same embodiment.

Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the art to which this invention belongs. The system, methods, and examples provided herein are only illustrative and not intended to be limiting.

As mentioned in the Background, there are challenges in generating the body model/avatar of a user in all shapes and sizes. These challenges are resolved by generating the body model in the following ways, which can be used alternatively or in combination:

1. Scanning/photo-shooting people of different body shapes and sizes and further using the photo/s in different orientations (360-degree view), or using a computer graphics model.

2. Another way to generate the computer graphics model/avatar is to make different parts of the body in advance, such as legs, hands, torso, and bust, possibly in a few varying shapes and sizes. Adding these parts together, or varying their shapes/sizes, helps to generate different virtual models as per the user input at run time, or such models can be procured from a database.

3. One challenge in generating body models of all shapes and sizes is that one set of images cannot produce body models of all shapes and sizes, because the number of nodes, edges, and planes needed for the computer graphics model is larger for a heavier person than for a slim person of the same height. Another challenge is that the same UV map cannot be used for all 3D models, so it is difficult to generate all models at run time from one computer graphics model. In this third way, artificial intelligence is used to add/remove nodes, edges, and planes in the computer graphics model as per the user input; the related texture is then chosen and the model is generated at run time or stored in a database for future use. It is also possible to make a computer graphics model which has enough nodes, edges, or planes to construct all shapes and sizes, and to use one or different UV sets of textures to generate the user body models (a minimal shape-interpolation sketch is given below).
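
By way of illustration only, the shape-interpolation idea referenced in item 3 above can be sketched as a linear blend of template meshes that share the same topology and UV map (an assumption of this sketch); the template names and weights are illustrative.

```python
import numpy as np

def blend_body_shape(templates, weights):
    """Linearly blend template meshes that share vertex order/topology.
    `templates` maps a name to a (V, 3) vertex array; `weights` maps the
    same names to blend weights that should sum to 1."""
    names = list(weights)
    verts = np.sum([weights[n] * templates[n] for n in names], axis=0)
    return verts  # faces and the UV map are shared by all templates

if __name__ == "__main__":
    slim = np.random.rand(1000, 3)    # stand-ins for pre-built template meshes
    heavy = slim * 1.15
    # A body 30% of the way from the slim template to the heavy one.
    blended = blend_body_shape({"slim": slim, "heavy": heavy},
                               {"slim": 0.7, "heavy": 0.3})
    print(blended.shape)
```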

Additionally, as mentioned in the Background regarding the challenges in generating virtual clothes for the body model/avatar of a user in all shapes and sizes, such virtual clothes can be generated in one of the following ways, or a combination thereof:

1. One brute-force method is to photograph women of different sizes wearing clothes. These images can be used as they are for virtual body models. Also, the images of clothes can be warped to produce more sizes. In place of real women, mannequins of different shapes and sizes can also be used; the mannequins are made to wear the clothes and a photoshoot is then carried out. Such photoshoots can be automated or manual.

2. Another way of making a 3D cloth is by virtual sewing of parts. A photoshoot of the texture can be carried out, then the virtual cloth is made in parts and the parts are virtually sewn together. The parts of the cloth can further be scaled to suit different cloth sizes. This is not a commercial method, as it takes time and money to make clothes in this fashion, and it is a manual way; also, not all clothes can be made in this way with clarity. Another way is to make the mesh of a static cloth and map the texture onto it. This way is also not efficient, as to make the same cloth for a different size of user, nodes, edges, or planes need to be added/removed, which is a tedious job.

3. The third way is to use the texture obtained by a model shoot, mannequin shoot, or just a cloth shoot, and use a UV map to generate clothes of different sizes. Cloth meshes for different sizes may have different numbers of nodes, edges, and planes, but the same UV map can be used to generate clothes of different sizes efficiently. Also, artificial intelligence can further be used to change the size of the mesh of the 3D cloth, and physics can be applied per different shapes and sizes, to generate clothes for every human avatar from one or a limited number of photoshoots, at run time or in advance. The earlier methods would either fail to make virtual clothes for all shapes and sizes or be too time-consuming to work with.

Embodiments of the present invention will be described below in detail with reference to the accompanying figures.

FIG. 1(a) shows an image 101 showing a body model 103 in one particular size. FIG. 1(b) shows another image 102 showing a body model 104 which differs in shape and size from the body model 103. Based on user input, the body models are automatically generated; body models 103, 104 are exemplary body models generated from different user inputs. These body models 103, 104 are either generated in real time on receiving the user inputs, or fetched from a database storing body models based on the user input. Each of the body models stored in the database is mapped to different shapes and sizes, where the shapes and sizes in the mapping may have a continuous range for each body model or may have discrete values. These body models can be used to generate virtual body models with the user's face in different shapes and/or sizes, and thereafter images are generated showing the generated virtual models wearing clothes and accessories (a minimal nearest-match lookup sketch follows).
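
By way of illustration only, a minimal sketch of fetching the closest stored body model for a set of user measurements is given below, assuming the database is simply a list of records with known measurements; all field names and values are illustrative.

```python
import numpy as np

# Illustrative database records: measurements in centimetres plus an asset id.
BODY_MODELS = [
    {"id": "model_a", "height": 155, "bust": 82,  "waist": 64, "hips": 88},
    {"id": "model_b", "height": 165, "bust": 90,  "waist": 72, "hips": 96},
    {"id": "model_c", "height": 175, "bust": 98,  "waist": 82, "hips": 104},
]
FIELDS = ("height", "bust", "waist", "hips")

def nearest_body_model(user, models=BODY_MODELS, fields=FIELDS):
    """Return the stored body model whose measurements are closest
    (Euclidean distance) to the user's measurements."""
    u = np.array([user[f] for f in fields], dtype=float)
    return min(models,
               key=lambda m: np.linalg.norm(u - np.array([m[f] for f in fields], float)))

if __name__ == "__main__":
    print(nearest_body_model({"height": 168, "bust": 92, "waist": 70, "hips": 95}))
```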

FIG. 1(c) and FIG. 1(d) show an image 105 showing a body model 106, and an image 107 showing a body model 108, which differ in skin tone from each other and from the skin tone of the body model 103. Based on user input, the body models are automatically generated; body models 106 and 108 are exemplary body models generated from different user inputs.

FIG. 2(a) shows a dress 201 which is system-generated and which fits the body model. The dress 201 is processed onto the body model 202, so that the dress can be shown fittingly worn by the virtual body model in FIG. 2(b).

FIGS. 3(a)-(g) illustrate a virtual model 304 wearing other clothes, accessories, and makeup.

These illustrations exemplify virtual wearing of spectacles, makeup, additional clothes, and shoes by the user on his/her virtual model. FIG. 3(a) shows an image 302 having the virtual model 304 with a dress 303. For virtual wearing of spectacles, the image 302 is processed with an image of spectacles to generate an image 306, as shown in FIG. 3(b), where spectacles 307 are shown being worn by the virtual model 304. To exemplify wearing of lipstick on the lips, the image 306 is processed to show lipstick being applied onto the virtual model 304, by changing the lips' color/contrast/hue/brightness or adding image/s or another property to show makeup at the lips 309, and accordingly to generate an image 308, as shown in FIG. 3(c). Image 310 shows the lips 309 zoomed in.

To exemplify wearing of clothes in layers, the image 308 is further processed using an image 313 of another cloth to generate an image 311 of the virtual model wearing the other cloth 313 layered over the cloth 303 which the virtual model was already wearing (a minimal layered-compositing sketch is given below).
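
By way of illustration only, a minimal sketch of the layered compositing described above is given below, using Pillow to alpha-composite transparent cloth images over the body-model image in a predefined (inner-to-outer) order; the file names are placeholders.

```python
from PIL import Image

def wear_in_layers(body_path, cloth_paths):
    """Alpha-composite cloth images (inner to outer) over the body model.
    All images are assumed to be the same size and already aligned."""
    result = Image.open(body_path).convert("RGBA")
    for path in cloth_paths:
        layer = Image.open(path).convert("RGBA")
        result = Image.alpha_composite(result, layer)
    return result

if __name__ == "__main__":
    # Placeholder file names; inner garments come first, outer ones last.
    composite = wear_in_layers("body_model.png", ["shirt.png", "jacket.png"])
    composite.save("body_model_dressed.png")
```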

To further exemplify wearing of shoes by the virtual model, the image 311 is further processed with an image of shoes 314 to generate an image 312 having the virtual model 304 wearing the shoes 314.

To further exemplify a posture being carried by the virtual body model, the image 312 is processed and an image 315 is generated where the virtual body model 304 is carrying the posture 316.

To further exemplify carrying of a bag by the virtual body model, the image 312 is further processed with an image of a bag 317 to generate an image 318 having the virtual body model 304 carrying the bag 317.

FIG. 4 illustrates wearing of a dress by the user's body model at different angles and orientations. Image 401 is produced by processing the user's body model in the front position and processing the clothes in the front position, while image 402 is produced by processing the person's body model in a different orientation and processing the clothes in the same orientation, and so on for images 403-406.

FIGS. 5(a)-(d) illustrate the body model 501 of a girl wearing a non-transparent cloth 502 and a semi-transparent cloth 503, and the effect of sunlight 504 and darkness 505 on the clothes 502, 503. The effect is achieved by changing the ambient lighting condition in which the combined body model with the image of the cloth is shown: the combined body model with the image of the cloth is processed using the ambient lighting input, by changing at least one of the color, contrast, brightness, or saturation of the combined body model of the person with the image of the cloth (a minimal brightness/contrast adjustment sketch follows).
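
By way of illustration only, a minimal sketch of the ambient-lighting adjustment described above is given below, using Pillow's enhancement operators on the combined image; the factor values and file names are illustrative.

```python
from PIL import Image, ImageEnhance

def apply_ambient_lighting(image, brightness=1.0, contrast=1.0, saturation=1.0):
    """Adjust brightness, contrast, and colour saturation of the combined
    body-model-with-cloth image to mimic sunlight (>1.0) or darkness (<1.0)."""
    image = ImageEnhance.Brightness(image).enhance(brightness)
    image = ImageEnhance.Contrast(image).enhance(contrast)
    image = ImageEnhance.Color(image).enhance(saturation)
    return image

if __name__ == "__main__":
    combined = Image.open("body_with_cloth.png")          # placeholder file name
    apply_ambient_lighting(combined, 1.3, 1.1, 1.1).save("sunlit.png")
    apply_ambient_lighting(combined, 0.6, 0.9, 0.8).save("dark.png")
```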

FIGS. 6(a)-(c) illustrate the body model of a girl wearing a right-fit, a tight-fit, and a loose-fit t-shirt. This is the case when the user wants to try different fits on her body model. Clothes of different fits are used to produce such results. In one embodiment, a cloth of one fit can be processed to produce a cloth of another fit and then shown with the body model of the user. FIGS. 6(a')-(c') illustrate the tightness of the clothes in the different fits by a color spectrum, for virtual understanding of the tightness of the cloth on the body. The system uses logic based on the cloth specification and the user's body information to estimate a normal, tight, or loose fit, and shows the stretch of the fabric by different colors (a minimal stretch-to-color sketch is given below). Such spectrum images may be prepared in real time or may be stored in a database for different combinations of body and clothes.
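
By way of illustration only, a minimal sketch of mapping an estimated fabric-stretch ratio to a colour of the spectrum is given below; the thresholds and colours are illustrative stand-ins for the system's actual fit logic.

```python
def stretch_ratio(body_circumference_cm, cloth_circumference_cm):
    """>1.0 means the fabric must stretch (tight); <1.0 means slack (loose)."""
    return body_circumference_cm / cloth_circumference_cm

def stretch_to_colour(ratio):
    """Map a stretch ratio to an RGB colour of a simple spectrum:
    blue = loose, green = normal fit, yellow-to-red = increasingly tight."""
    if ratio <= 0.95:
        return (0, 0, 255)        # loose: fabric hangs with slack
    if ratio <= 1.05:
        return (0, 255, 0)        # near-perfect fit
    # Tight: fade from yellow toward red as the stretch grows.
    t = min((ratio - 1.05) / 0.15, 1.0)
    return (255, int(255 * (1.0 - t)), 0)

if __name__ == "__main__":
    for body, cloth in [(88, 96), (92, 92), (100, 92)]:
        r = stretch_ratio(body, cloth)
        print(f"body {body} cm over cloth {cloth} cm -> ratio {r:.2f}, colour {stretch_to_colour(r)}")
```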

FIG 7(a) shows a user interface showing an image 705 of a body model 701 along with a selection of hairstyles having a first hair styling option 702, a second hair styling option 704, and a third hair styling option 705. When a user selects the first hair styling option 702, the hair style is processed onto the body model 701 to generate an image 706 of the body model wearing the first hair style, as shown in FIG 7(b). Similarly, when the user selects the third hair styling option 705, the hair style is processed onto the body model 701 to generate an image 707 of the body model wearing the third hair style, as shown in FIG. 7(c).

FIGS. 8(a)-(c) show a body model 801 in different backgrounds 806, 807, 808. When a user selects a particular background from a selection of backgrounds shown on a user interface, the body model is processed with the relevant background to generate an image 802, 803, 804 with the specific background selected. The background can be static, like an image, or dynamic, like a video.

FIG 8(d)-(e) shows the body model 801, with its reflection in the mirror in the background.

When the user selects an option to check the reflection of the body model wearing the dress in a mirror, the body model is processed to generate an image 809, 810 showing the body model with the mirror 811 in the background and the reflection 812 of the body model 801 in the mirror 811. The images 809 and 810 are representative of the parallax effect generated when a user input is received for creating such an effect. This option helps the user check the reflection 812 at various reflective angles of the mirror 811.

FIG. 9 illustrates an embodiment where a user interface 901 is displayed, which has a selection-type input 902 to receive inputs for height, weight, bust, waist, and hips. A user can further choose a body shape 903, such as apple, pear, inverted triangle, etc. The user can also select the face and hairstyle based on a choice of face shape, and an origin or ethnicity 904, like Asian, European, African, etc. Once the inputs are received, the display shows a relevant body model of the user. The user can try one or more clothes, accessories, and makeup on the body model displayed.

FIGS. 10(a)-(b) show an embodiment where a virtual try-on feature is shown on a website. It can run on the same server as the e-commerce website, or it can be plugged into a fashion e-commerce site running on a separate server, with the trial room connected through an application program interface or other means and running on a separate server. Once a user clicks on an image 1002 of a dress shown on the user interface 1001, a virtual model 1003 is shown wearing the same dress selected by the user. The user can try different dresses, makeup, accessories, and standing postures through the trial room menu. The user can also edit their body model by making inputs as shown in FIG. 9 and/or by uploading their photo. The user can also save their body model for further use, and can post it on a social networking platform to get feedback from friends. The user can also see the body model at different angles and orientations, for example front, back, side, and so on.

FIGS. 10(c)-(d) show the virtual try-on feature on a mobile phone. Features similar to those available on the website can also be carried out in this implementation on the mobile phone.

FIG. 10(e) shows the virtual try-on feature on a big display 1004 suited for physical showrooms, where a user 1005 is shown selecting a dress 1007 using a hand gesture 1006. Once the selection is made, the body model is processed to show the dress 1007 being worn by the body model 1003.

FIG. 11(a) shows an image 1102 having a ring shape 1101. Various nodes 1103 are shown on the image 1102 which, after being connected, draw an imaginary net on the ring 1101 and show the complete ring divided into different imaginary pieces. FIG. 11(b) shows the warping of the ring 1101, where warping means that points are mapped to points. This can be based mathematically on any function from (part of) the plane to the plane. If the function is injective, the original can be reconstructed; if the function is a bijection, any image can be inversely transformed. After warping, the shape of the ring 1101 is changed. It is evident that in the new shape of the ring 1101, the positions of the nodes 1103 have changed, and so have the shapes of the lines connecting these nodes 1103. This has led to a substantial change in the shape of the ring 1101 (a minimal warping sketch is given below).
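
By way of illustration only, a minimal sketch of the node-based warping described for FIG. 11 is given below, using scikit-image's piecewise-affine transform: source nodes are moved to new positions and the image is remapped so that points are mapped to points. The node grid and displacement are illustrative.

```python
import numpy as np
from skimage import data
from skimage.transform import PiecewiseAffineTransform, warp

def warp_by_nodes(image, src_nodes, dst_nodes):
    """Warp `image` so that pixels at each source node move to the
    corresponding destination node (points are mapped to points)."""
    tform = PiecewiseAffineTransform()
    # `warp` expects the inverse map (output -> input), hence estimate(dst, src).
    tform.estimate(np.asarray(dst_nodes, float), np.asarray(src_nodes, float))
    return warp(image, tform)

if __name__ == "__main__":
    img = data.astronaut()
    h, w = img.shape[:2]
    # A coarse grid of nodes over the image, like the imaginary net in FIG. 11(a).
    src = np.array([[x, y] for y in np.linspace(0, h - 1, 5)
                           for x in np.linspace(0, w - 1, 5)])
    dst = src.copy()
    dst[:, 0] += 15 * np.sin(src[:, 1] / h * np.pi)   # push nodes sideways
    warped = warp_by_nodes(img, src, dst)
    print(warped.shape)
```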

FIGS. 12(a)-(d) illustrate a method for changing the shape of a virtual t-shirt in one embodiment. The initial image of the t-shirt, as shown in FIG. 12(a), is suited to one type of body model. Suppose we want to use the cloth image for a body model with a bigger bust size and to achieve the image shown in FIG. 12(d). The computer graphics model of the t-shirt, as shown in FIG. 12(b), is prepared. Then the UV unwrap of the computer graphics model of the t-shirt is made, as shown in FIG. 12(c). We match the initial image of the cloth onto the UV unwrap and render the computer graphics model to generate the resulting t-shirt with the bigger bust size. This way, we do not need to photo-shoot t-shirts on different women with different bust sizes.

FIG. 13 is a simplified block diagram showing some of the components of an example client device 1612. By way of example and without limitation, the client device is a computer equipped with one or more wireless or wired communication interfaces.

As shown in FIG 13, client device 1612 may include a communication interface 1602, a user interface 1603, a processor 1604, and data storage 1605, all of which may be communicatively linked together by a system bus, network, or other connection mechanism.

Communication interface 1602 functions to allow client device 1612 to communicate with other devices, access networks, and/or transport networks. Thus, communication interface 1602 may facilitate circuit-switched and/or packet-switched communication, such as POTS communication and/or IP or other packetized communication. For instance, communication interface 1602 may include a chipset and antenna arranged for wireless communication with a radio access network or an access point. Also, communication interface 1602 may take the form of a wireline interface, such as an Ethernet, Token Ring, or USB port. Communication interface 1602 may also take the form of a wireless interface, such as a Wifi, BLUETOOTH®, global positioning system (GPS), or wide-area wireless interface (e.g., WiMAX or LTE). However, other forms of physical-layer interfaces and other types of standard or proprietary communication protocols may be used over communication interface 1602. Furthermore, communication interface 1602 may comprise multiple physical communication interfaces (e.g., a Wifi interface, a BLUETOOTH® interface, and a wide-area wireless interface).

User interface 1603 may function to allow client device 1612 to interact with a human or non-human user, such as to receive input from a user and to provide output to the user. Thus, user interface 1603 may include input components such as a keypad, keyboard, touch-sensitive or presence-sensitive panel, computer mouse, joystick, microphone, still camera and/or video camera, gesture sensor, or tactile-based input device. The input components also include a pointing device such as a mouse; a gesture-guided input, eye movement, or voice command captured by a sensor or an infrared-based sensor; a touch input; input received by changing the positioning/orientation of an accelerometer and/or gyroscope and/or magnetometer attached to a wearable display, a mobile device, or a moving display; or a command to a virtual assistant.

User interface 1603 may also include one or more output components, such as a cut-to-shape display screen, illuminated by a projector or by itself, for displaying objects or a virtual assistant.

User interface 1603 may also be configured to generate audible output(s), via a speaker, speaker jack, audio output port, audio output device, earphones, and/or other similar devices, now known or later developed. In some embodiments, user interface 1603 may include software, circuitry, or another form of logic that can transmit data to and/or receive data from external user input/output devices. Additionally or alternatively, client device 1612 may support remote access from another device, via communication interface 1602 or via another physical interface.

Processor 1604 may comprise one or more general-purpose processors (e.g., microprocessors) and/or one or more special purpose processors (e.g., DSPs, CPUs, FPUs, network processors, or ASICs).

Data storage 1605 may include one or more volatile and/or non-volatile storage components, such as magnetic, optical, flash, or organic storage, and may be integrated in whole or in part with processor 1604. Data storage 1605 may include removable and/or non-removable components.

In general, processor 1604 may be capable of executing program instructions 1607 (e.g., compiled or non-compiled program logic and/or machine code) stored in data storage 1605 to carry out the various functions described herein. Therefore, data storage 1605 may include a non-transitory computer-readable medium, having stored thereon program instructions that, upon execution by client device 1612, cause client device 1612 to carry out any of the methods, processes, or functions disclosed in this specification and/or the accompanying drawings. The execution of program instructions 1607 by processor 1604 may result in processor 1604 using data 1606.

By way of example, program instructions 1607 may include an operating system 1611 (e.g., an operating system kernel, device driver(s), and/or other modules) and one or more application programs 1610 installed on client device 1612. Similarly, data 1606 may include operating system data 1609 and application data 1608. Operating system data 1609 may be accessible primarily to operating system 1611, and application data 1608 may be accessible primarily to one or more of application programs 1610. Application data 1608 may be arranged in a file system that is visible to or hidden from a user of client device 1612.

Application data 1608 includes image data, which includes image/s or photograph/s of other human bodies, image/s of clothes/accessories, images of backgrounds, and images for producing shades; and/or trained-model data, which includes the trained model used to produce facial features/expressions/animation; and/or user information, which includes information about the human body that is either provided by the user as user input or generated by processing user input comprising user image/s. This information can be reused the next time the user is identified by some kind of login identity: the user then does not need to generate the user body model again, but can retrieve it from the user data and try clothes on it. The application data may also include user data, which includes the generated user body obtained by processing the user image, for use next time, and/or graphics data, which includes user body part/s in graphics with a rig that can be animated and which, on processing with the user's face, produces a user body model with clothes and can show animation or body-part movements. The human body information comprises at least one of: the orientation of the face of the person in the image of the person, the orientation of the body of the person in the image of the person, the skin tone of the person, the type of body part/s shown in the image of the person, the location and geometry of one or more body parts in the image of the person, body/body-part shape, size of the person, weight of the person, height of the person, facial feature information, or a nearby portion of the facial features, or a combination thereof, wherein the facial feature information comprises at least one of the shape or location of at least the face, eyes, chin, neck, lips, nose, or ears, or a combination thereof. The application data also includes the computer graphics data of clothes and humans, texture data, and the computer logic to produce virtual clothes or the image/animation or 3D model of the user's body wearing clothes.

In one embodiment, as shown in FIG. 14(a), to produce a virtual user's body model wearing cloth/s, one brute-force method is to photograph women/men of different shapes/sizes and also the clothes worn by them. These images of clothes can be used to produce more sizes. In place of real persons, mannequins of different shapes and sizes can also be used, made to wear the clothes, and photo-shot. The photoshoot can be automated or manual. The real person can be photo-shot from different angles/orientations, and the dress worn by a mannequin of the same shape as the real person can be photo-shot from different angles/orientations. A database of real human images and dresses of different shapes/sizes from different angles/orientations can then be used.

When the user inputs the shape/size, a size engine (logic) processes it and selects the most appropriate image of the user's body model. The user can also input his/her photo: a trained computer model can detect the body measurements by processing the image/s, which are then further processed by the size engine to decide the body model. A dress selection logic takes the dress selection input from the user and also selects the dress based on the user's body model. An image processing unit combines the body model of the person and the image of the cloth to show the body model of the person wearing the cloth (a minimal pipeline sketch is given below).
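
By way of illustration only, a minimal end-to-end sketch of the flow described above is given below: a size engine picks the closest stored body-model image, a dress-selection step fetches the dress image pre-rendered for that body model, and the two are composited. The record fields and file paths are illustrative and do not correspond to the actual modules 131-137.

```python
from PIL import Image

# Illustrative catalogue: pre-shot body-model images keyed by measurements,
# and dress images pre-rendered per (body model, dress id) pair.
BODY_MODELS = {
    "S": {"height": 158, "bust": 84, "image": "body_S_front.png"},
    "M": {"height": 166, "bust": 92, "image": "body_M_front.png"},
    "L": {"height": 174, "bust": 100, "image": "body_L_front.png"},
}
DRESSES = {("M", "dress_01"): "dress_01_on_M_front.png"}

def size_engine(user):
    """Pick the body-model key whose measurements best match the user input."""
    return min(BODY_MODELS,
               key=lambda k: abs(BODY_MODELS[k]["height"] - user["height"])
                             + abs(BODY_MODELS[k]["bust"] - user["bust"]))

def try_on(user, dress_id):
    """Select the body model, fetch the matching dress image, and composite them."""
    key = size_engine(user)
    body = Image.open(BODY_MODELS[key]["image"]).convert("RGBA")
    dress = Image.open(DRESSES[(key, dress_id)]).convert("RGBA")
    return Image.alpha_composite(body, dress)

if __name__ == "__main__":
    try_on({"height": 165, "bust": 90}, "dress_01").save("try_on_result.png")
```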

The user input information may include the skin tone of the person, the type of body part/s, the geometry of one or more body parts of the person, body/body-part shape, size of the person, weight of the person, height of the person, facial feature information, or a combination thereof.

The image processing libraries 133 include libraries for facial feature extraction, face detection, body-part detection, expression generation/animation, image merging/blending, and body-part feature extraction. The image processing engine 131 uses the image data 132, which includes body model and virtual clothes/makeup data; 132b, which contains the computer model for the size engine (logic), the dress selection logic, and trained-model data; the user info data 132d, which contains the user's input data for future use, try-on history, friends list, and other info; the user data 132c, which includes the size recommendation engine, the AI-based fashion advisor and other feature-related data and logic, and cloth and body model properties; and the image processing libraries 133, to generate the output 135 as per the user input 137.

The image processing engine uses the database to produce the user body model with clothes. In another embodiment, the user body model is produced at 131, and the cloth image is merged into a single image at the client device, or put in a layer over the user body model, so that it looks like the user model is wearing the clothes. In yet another embodiment, the user model with clothes is generated with different facial expressions, and/or with a different body posture, or with animation of facial and/or body-part movement.

A virtual dress can be made by virtual sewing: a 3D cloth is made by making its parts in patches virtually and then virtually sewing them part by part into a virtual 3D dress. This dress can be worn by a virtual 3D model, and physics is applied to address the flow of the fabric. The real dress can be photo-shot in different parts, and the parts of the virtual dress can be textured using these images. The parts can be scaled up to suit different sizes of clothes for humans of different shapes/sizes. This is also not very efficient, as each cloth needs to be worked on manually.

Another way of making a virtual cloth is to make a mesh of the static cloth and map the texture onto it.

This way is also not efficient because, to make the same cloth for users of different sizes, nodes, edges or planes need to be added or removed to make the virtual dress for each shape and size.

In another embodiment, virtual dresses for all shapes/sizes can be generated if the same UV map of the texture works for virtual cloths of different shapes/sizes having different numbers of edges, nodes and planes. A single dress texture can then be used to generate the virtual dress in all shapes/sizes; likewise, 3D cloths of different shapes/sizes can be generated using a single texture image, or very few texture images, if the 3D cloth is made in a way that the computer graphics cloth can be shaped into any shape and size, as sketched below.
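The sketch below illustrates the idea of reusing one UV map across sizes: only the 3D vertex positions of the garment mesh are re-graded per size while the UV coordinates (and hence the texture image) stay fixed. The uniform scaling used here is a placeholder assumption; real grading would be non-uniform.

    import numpy as np

    # One quad of a cloth mesh and its UV map, shared by all sizes.
    base_vertices = np.array([[0.0, 0.0, 0.0],
                              [0.5, 0.0, 0.0],
                              [0.5, 1.0, 0.0],
                              [0.0, 1.0, 0.0]])
    uv_coords     = np.array([[0.0, 0.0], [1.0, 0.0],
                              [1.0, 1.0], [0.0, 1.0]])   # fixed UV coordinates

    def grade_mesh(vertices, scale):
        """Produce a larger/smaller garment mesh; UVs are untouched, so the same texture still fits."""
        return vertices * scale

    size_m = grade_mesh(base_vertices, 1.00)
    size_l = grade_mesh(base_vertices, 1.08)   # both meshes reuse uv_coords and the same texture image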

In another embodiment, artificial intelligence can also be used to change the size of the 3D cloth mesh and apply physics according to different shapes and sizes, so as to generate the cloths for every human avatar from one, or a limited number of, photo shoots of the cloth, either at run time or in advance.

The virtual cloths can be rendered in advance to collect cloth images for different humans, or the images can be generated at run time by rendering the 3D model of the cloth. The 3D model of the cloth can also be simulated together with the 3D model of the virtual human and then rendered.

To generate virtual humans of all shapes and sizes, the brute-force method is, as above, to collect photo shoots of a wide range of humans in different body shapes/sizes. Alternatively, a single computer graphics model, or a set of computer graphics models, can be used to make the body model image / 3D model at run time or in advance, in a similar way as the virtual cloths are generated and textured. A 3D model can also be generated by adding different parts and making all combinations. In one embodiment, as shown in FIG 14(b), the Image Processing Engine 131 uses Image data 132, which includes body models, virtual cloths/makeup data, or texture data of cloths/humans in the case of computer graphics models of cloths/humans; 132b contains the computer model for the size engine (logic), the dress selection logic, and trained model data; user info data 132d contains the user's input data for future use, try-on history, friends list and other information; user data 132c includes the size recommendation engine, AI-based fashion advisor and other feature-related data and logic, cloths and body model properties, and the graphics data related to virtual cloths and/or virtual humans. The Image Processing Libraries 133 include libraries for facial feature extraction, face detection, body part detection, expression generation/animation, image merging/blending, body-part feature extraction, and computer graphics libraries for skinning and rigging the computer graphics model to animate the part/s, in order to generate the output 135 as per the user input 137.

The Rendering Engine 134 renders the computer graphics data related to the virtual cloth/human, and the image processing engine combines the images in order.

In another case, the image processing engine can process the texture related to the body model/cloth, so 131 and 134 can work one after the other in either order (first 131 then 134, or first 134 then 131), or in parallel. Alternatively, 131 can be skipped and the output generated by 134, or 134 can be skipped and the output generated by 131, showing the 3D models of the body and cloth, simulating them at run time, and producing the output as an image or animation.

The display system can be a wearable display or a non-wearable display or combination thereof.

The non-wearable display includes electronic visual displays such as LCD, LED, plasma, OLED, a video wall, a box-shaped display, a display made of more than one electronic visual display, a projector-based display, or a combination thereof. The non-wearable display also includes a Pepper's ghost based display with one or more faces, each made up of a transparent inclined foil/screen illuminated by projector/s and/or electronic display/s, wherein the projector and/or electronic display shows a different image of the same virtual object rendered with a different camera angle at each face of the Pepper's ghost based display, giving the illusion of a virtual object placed at one place whose different sides are viewable through the different faces of the display.

The wearable display includes a head mounted display. The head mounted display includes either one or two small displays with lenses and semi-transparent mirrors embedded in a helmet, eyeglasses or visor. The display units are miniaturised and may include CRTs, LCDs, liquid crystal on silicon (LCoS), or OLEDs, or multiple micro-displays to increase the total resolution and field of view.

The head mounted display also includes a see-through head mounted display or optical head-mounted display with one or two displays for one or both eyes, which further comprises a curved-mirror-based display or a waveguide-based display. See-through head mounted displays are transparent or semi-transparent displays which show the 3D model in front of the user's eye/s while the user can also see the environment around him. The head mounted display also includes a video see-through head mounted display or an immersive head mounted display for fully 3D viewing of the user body model with cloths, by feeding renderings of the same view from two slightly different perspectives to make up a complete 3D view of the user body model with cloths, as sketched below. The immersive head mounted display shows the user body model with cloths in an immersive virtual environment.
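A minimal sketch of deriving the two slightly different perspectives is shown below: a single head pose is offset along the camera's right axis by half an interpupillary distance (the 0.063 m value is a typical figure assumed here, not taken from the disclosure), and the body model with cloths is rendered once per eye.

    import numpy as np

    def stereo_eye_positions(head_pos, right_axis, ipd=0.063):
        """Return (left_eye, right_eye) camera positions for stereo rendering."""
        right_axis = right_axis / np.linalg.norm(right_axis)
        left_eye  = head_pos - right_axis * (ipd / 2.0)
        right_eye = head_pos + right_axis * (ipd / 2.0)
        return left_eye, right_eye

    left, right = stereo_eye_positions(np.array([0.0, 1.6, 0.0]),
                                       np.array([1.0, 0.0, 0.0]))
    # Render the body model with cloths once from `left` and once from `right`
    # and feed the two images to the corresponding displays of the headset.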

In another embodiment, the user may want to apply makeup, which can be achieved by combining an image representing the makeup with the user's body model, or else by using the facial feature detection methods described below to apply the makeup. In some cases, the user may want to change the standing posture of the body model after wearing different cloths/makeup, or may want to animate the body model wearing the cloths; in that case a 3D body model is used, the 3D cloth is simulated over it, and a sequence of frames is generated. The following describes the methods/techniques used to achieve this.

There exist various methods for face detection, which are based on skin-tone-based segmentation, feature-based detection, template matching, or neural-network-based detection.

For example, the seminal work of Viola and Jones based on Haar features is generally used in many face detection libraries for quick face detection.
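A minimal sketch of quick face detection with OpenCV's bundled Viola-Jones Haar cascade is given below; the file name "user_photo.jpg" is a placeholder for the image uploaded by the user, and the scaleFactor/minNeighbors values are typical defaults rather than values prescribed by this disclosure.

    import cv2

    # Load OpenCV's pre-trained frontal-face Haar cascade.
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades
                                    + "haarcascade_frontalface_default.xml")
    image = cv2.imread("user_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:                      # one rectangle per detected face
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)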

A Haar feature is defined as follows:

Let us consider the term "integral image", which is similar to the summed area table and contains an entry for each location such that the entry at location (x, y) is the sum of all pixel values above and to the left of this location:

ii(x, y) = Σ_{x' ≤ x, y' ≤ y} i(x', y')

where ii(x, y) is the integral image and i(x, y) is the original image.

The integral image allows the features (in this method, Haar-like features) used by this detector to be computed very quickly. The sum of the pixels which lie within the white rectangles is subtracted from the sum of the pixels in the grey rectangles. Using the integral image, only six array references are needed to compute two-rectangle features, eight array references for three-rectangle features, and so on, which lets features be computed in constant time O(1).
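A short sketch of the integral image and the constant-time rectangle sum it enables is given below; the small test array is only an illustration.

    import numpy as np

    def integral_image(i):
        """ii(x, y) = sum of all pixels above and to the left of (x, y), inclusive."""
        return i.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, top, left, bottom, right):
        """Sum of pixels in a rectangle using only four array references."""
        total = ii[bottom, right]
        if top > 0:
            total -= ii[top - 1, right]
        if left > 0:
            total -= ii[bottom, left - 1]
        if top > 0 and left > 0:
            total += ii[top - 1, left - 1]
        return total

    img = np.arange(16, dtype=np.int64).reshape(4, 4)
    ii = integral_image(img)
    print(rect_sum(ii, 1, 1, 2, 2))   # 5 + 6 + 9 + 10 = 30

Each Haar-like feature is then a signed combination of a few such rectangle sums, which is why the whole feature can be evaluated in constant time.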

After extracting the features, a learning algorithm is used to select a small number of critical visual features from a very large set of potential features. Such methods use only a few important features from the large feature set after learning, and cascading of classifiers makes this a real-time face detection system.

Neural-network-based face detection algorithms can also be used, which leverage the high capacity of convolutional networks for classification and feature extraction to learn a single classifier for detecting faces from multiple views and positions. To obtain the final face detector, a sliding-window approach is used because it has less complexity and is independent of extra modules such as selective search. First, the fully connected layers are converted into convolutional layers by reshaping the layer parameters. This makes it possible to efficiently run the convolutional neural network on images of any size and obtain a heat-map of the face classifier.

Once the face has been detected, the next step is to accurately find the locations of the different facial features (e.g. the corners of the eyes, the eyebrows and the mouth, the tip of the nose, etc.).

For example, to precisely estimate the position of facial landmarks in a computationally efficient way, one can use the dlib library to extract facial features or landmark points.
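A minimal sketch of 68-point landmark extraction with dlib follows; the predictor file "shape_predictor_68_face_landmarks.dat" is dlib's standard pre-trained model and must be downloaded separately, and "user_photo.jpg" remains a placeholder path.

    import cv2
    import dlib

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    image = cv2.imread("user_photo.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        shape = predictor(gray, face)
        points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
        # In the 68-point convention, points[36:48] are the eyes and
        # points[48:68] are the lips; these feed the makeup and expression steps.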

Some methods are based on a cascade of regressors, which can be defined as follows. Let x_i ∈ R^2 be the (x, y)-coordinates of the i-th facial landmark in an image I. Then the vector S = (x_1, x_2, ..., x_p) denotes the coordinates of all the p facial landmarks in I; the vector S represents the shape. Each regressor r_t in the cascade predicts an update vector from the image, refining the shape estimate stage by stage. When learning each regressor in the cascade, the feature points estimated at the different levels of the cascade are initialised with the mean shape, centred on the output of a basic Viola-Jones face detector.
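A minimal sketch of the stage-by-stage update is shown below; the regressor objects and the extract_features callable are hypothetical stand-ins used only to illustrate the update rule S_(t+1) = S_t + r_t(I, S_t).

    import numpy as np

    def align_landmarks(image, regressors, mean_shape, extract_features):
        """Iteratively refine the 2p-vector of landmark coordinates S."""
        S = np.asarray(mean_shape, dtype=float).copy()   # initialise with the mean shape
        for r_t in regressors:                           # each stage predicts an update vector
            S = S + r_t.predict(extract_features(image, S))
        return S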

Thereafter, the extracted feature points can be used in expression analysis and in geometry-driven photorealistic facial expression synthesis.

For applying makeup on the lips, one needs to identify the lip region in the face. For this, after obtaining the facial feature points, a smooth Bezier curve is fitted which captures almost the whole lip region in the input image. Lip detection can also be achieved by color-based segmentation methods, whereas facial feature detection methods give facial feature points ((x, y) coordinates) in all cases, invariant to different lighting, illumination, race and face pose. These points cover the lip region, and drawing smooth Bezier curves through the facial feature points captures the whole lip region.
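A minimal sketch of building a lip-region mask from the outer-lip landmarks (dlib indices 48-59, as extracted in the sketch above) is shown below; a plain polygon fill is used for brevity, and a Bezier/spline fit over the same points would give the smoother boundary described here.

    import numpy as np
    import cv2

    def lip_mask(image_shape, lip_points):
        """Return a binary mask that is 255 inside the lip polygon."""
        mask = np.zeros(image_shape[:2], dtype=np.uint8)
        cv2.fillPoly(mask, [np.asarray(lip_points, dtype=np.int32)], 255)
        return mask

    # Using `image` and `points` from the landmark sketch above:
    # mask = lip_mask(image.shape, points[48:60])
    # lipstick color is then blended only where mask == 255.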

Generally, human skin tones lie in a particular range of hue and saturation in the HSB (Hue, Saturation, Brightness) color space. In most scenarios only the brightness component varies between different skin tones within that range of hue and saturation, and under certain lighting conditions color is orientation invariant. Studies show that, in spite of the different skin colors of different races, ages and sexes, the difference is mainly concentrated in brightness, and the skin color distributions of different people cluster in the color space once brightness is removed. Therefore, instead of the RGB color space, the HSV or YCbCr color space is used for skin-color-based segmentation.
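A minimal sketch of such skin segmentation in the YCrCb color space follows; the Cr/Cb bounds are commonly quoted approximate ranges, not values taken from this disclosure, and "user_photo.jpg" is again a placeholder.

    import cv2
    import numpy as np

    image = cv2.imread("user_photo.jpg")
    ycrcb = cv2.cvtColor(image, cv2.COLOR_BGR2YCrCb)
    lower = np.array([0, 133, 77], dtype=np.uint8)     # (Y, Cr, Cb) lower bound
    upper = np.array([255, 173, 127], dtype=np.uint8)  # (Y, Cr, Cb) upper bound
    skin_mask = cv2.inRange(ycrcb, lower, upper)       # 255 where the pixel looks like skin
    skin_only = cv2.bitwise_and(image, image, mask=skin_mask)

Because the luminance channel Y is left essentially unconstrained, the segmentation depends mainly on chrominance, which matches the observation that skin colors cluster once brightness is removed.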

Merging, blending or stitching of images are techniques for combining two or more images in such a way that the joining area or seam does not appear in the processed image. A very basic technique of image blending is linear blending, which combines or merges two images into one: a parameter X is used in the joining area (or overlapping region) of both images, and the output pixel value in the joining region is:

P_Joining_Region(i, j) = (1 - X) * P_First_Image(i, j) + X * P_Second_Image(i, j)

where 0 < X < 1; the remaining regions of the images remain unchanged.
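A minimal sketch of this linear blending over an overlap strip is given below; the synthetic strips and the horizontal ramp of X are illustrative assumptions (at the strip edges X reaches 0 and 1, so the blend matches the original images there).

    import numpy as np

    def blend_overlap(first, second, x):
        """P = (1 - x) * P_first + x * P_second over the joining region."""
        return ((1.0 - x) * first.astype(np.float32)
                + x * second.astype(np.float32)).astype(np.uint8)

    # Ramp x from 0 to 1 across the width of the overlap strip so the seam
    # fades from the first image into the second.
    h, w = 100, 40
    first_strip  = np.full((h, w, 3), 200, dtype=np.uint8)
    second_strip = np.full((h, w, 3),  50, dtype=np.uint8)
    x = np.linspace(0.0, 1.0, w).reshape(1, w, 1)
    seamless_strip = blend_overlap(first_strip, second_strip, x)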

Other techniques such as 'Poisson Image Editing' (Perez et al.), 'Seamless Stitching of Images Based on a Haar Wavelet 2D Integration Method' (Ioana et al.), or 'Alignment and Mosaicing of Non-Overlapping Images' (Yair et al.) can be used for blending.

For achieving life-like facial animation, various techniques are used nowadays, including performance-driven techniques, statistical appearance models and others. To implement the performance-driven approach, feature points are located on the face of an image uploaded by the user, and the displacement of these feature points over time is used either to update the vertex locations of a polygonal model or is mapped to an underlying muscle-based model.

Given the feature point positions of a facial expression, to compute the corresponding expression image, one possibility would be to use some mechanism such as physical simulation to figure out the geometric deformation of each point on the face and then render the resulting surface. Given a set of example expressions, one can instead generate photorealistic facial expressions through convex combination. Let E_i = (G_i, I_i), i = 0, ..., m, be the example expressions, where G_i represents the geometry and I_i is the texture image, and assume that all the texture images I_i are pixel aligned. Let H(E_0, ..., E_m) be the set of all possible convex combinations of these examples. Then

H(E_0, ..., E_m) = { (Σ_{i=0}^{m} c_i G_i, Σ_{i=0}^{m} c_i I_i) : c_i ≥ 0 for 0 ≤ i ≤ m, Σ_{i=0}^{m} c_i = 1 }.
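A minimal sketch of evaluating one such convex combination is shown below; the geometry vectors and pixel-aligned texture arrays, and the example coefficients, are assumptions used only to illustrate the formula above.

    import numpy as np

    def combine_expressions(geometries, textures, coeffs):
        """Convex combination of example expressions E_i = (G_i, I_i)."""
        c = np.asarray(coeffs, dtype=np.float32)
        assert np.all(c >= 0) and abs(c.sum() - 1.0) < 1e-6
        G = sum(ci * Gi for ci, Gi in zip(c, geometries))
        I = sum(ci * Ii for ci, Ii in zip(c, textures))
        return G, I

    # e.g. 70% neutral + 30% smile, assuming pixel-aligned textures of equal size:
    # G_new, I_new = combine_expressions([G_neutral, G_smile],
    #                                    [I_neutral, I_smile], [0.7, 0.3])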

The statistical appearance models, in turn, are generated by combining a model of shape variation with a model of texture variation, where the texture is defined as the pattern of intensities or colors across an image patch. Building such a model requires a training set of annotated images in which corresponding points have been marked on each example. The main techniques used to apply facial animation to a character include morph-target animation, bone-driven animation, texture-based animation (2D or 3D), and physiological models.