

Title:
FITTING CLOTHING ARTICLES TO HUMAN IMAGES
Document Type and Number:
WIPO Patent Application WO/2018/011649
Kind Code:
A1
Abstract:
A method of purchasing garments via e-commerce, comprising: obtaining a user's body image; selecting by said user at least one displayed garment; and creating a match between said selected garment and said body image, said match indicating how said user wishes to wear said garment.

Inventors:
CUTLER DANIEL (IL)
ZOLOTOV MICHAEL (IL)
Application Number:
PCT/IB2017/053620
Publication Date:
January 18, 2018
Filing Date:
June 19, 2017
Assignee:
MODA-MATCH LTD (IL)
International Classes:
G06T17/00
Domestic Patent References:
WO2016102228A1 2016-06-30
Foreign References:
US20130308017A1 2013-11-21
US20130315475A1 2013-11-28
US20140176565A1 2014-06-26
Attorney, Agent or Firm:
FOGEL, Ronny (IL)
Claims:
CLAIMS

1. A method of purchasing garments via e-commerce, comprising:

obtaining a user's body image;

selecting by said user at least one displayed garment; and

creating a match between said selected garment and said body image, said match indicating how said user wishes to wear said garment.

2. The method of claim 1, wherein said creating a match comprises:

selecting by said user landmark points on said at least one selected displayed garment to create a garment contour;

selecting by said user first points matching said selected landmark points on a displayed nominal body image;

displaying said user's body image;

selecting by said user second points of interest on said displayed user's body image;

automatically matching said first and second points to create a body contour; and

automatically matching said garment contour to said body contour.

3. The method of claim 1, wherein said obtaining a user's body image comprises: taking by said user his own picture using a mobile device's camera, while displaying a level on said mobile device's screen;

receiving said user's picture;

extracting contours from said picture;

calculating said user's height in pixels;

calculating the ratio cm:pixels in the picture;

calculating body measurements of said user; and

computing said user's size.

4. The method of claim 2, wherein matching said first and second points to create a body contour comprises using statistical models for iteratively deforming said first points to match said second points.

5. The method of claim 4, wherein said statistical models comprise an Active Shape Model (ASM) and Active Appearance Model (AAM).

6. The method of claim 2, further comprising calculating a sub-contour of said body contour, said sub-contour comprising the body part into which said garment will be warped.

7. The method of claim 2, wherein matching said garment contour to said body

contour comprises:

determining the best linear transformation between each pair of landmark points on said garment contour and each pair of landmark points on said body contour; and

applying said best linear transformation to said garment contour points between said two landmarks.

8. The method of claim 7, wherein said best linear transformation is determined according to the sum of squared errors.

9. The method of claim 2, wherein matching said garment contour to said body

contour further comprises creating a triangle mesh from said garment contour.

10. The method of claim 9, wherein said creating a triangle mesh comprises:

defining a nominal mesh edge size;

diluting said garment contour according to said defined mesh edge size;

removing contour points whose distances to their neighbors are less than said defined mesh edge size;

calculating shrunk and expanded contours;

calculating an internal grid, inside of the shrunk contour, where the distances between the grid points are equal to said defined mesh edge size; and

wherein said triangle mesh comprises said diluted contour, said shrunk and expanded contours and said internal grid.

11. The method of claim 10, wherein said calculating shrunk and expanded contours comprises calculating the normal to each contour point, and taking the points on both sides of the normal, with a distance equal to said defined mesh edge size.

12. The method of claim 10, further comprising: removing from said mesh points whose closest neighbor is closer than said defined mesh edge size.

13. The method of claim 12, further comprising performing a number of

optimization iterations wherein each point is moved to the average position of its neighbors.

14. The method of claim 10, further comprising deforming said garment mesh to fit said body image, using one of linear and non-linear optimization.

15. The method of claim 14, further comprising using field morphing to warp said garment image to fit said body image.

16. The method of claim 15, further comprising removing background pixels from said warped garment image.

17. The method of claim 16, wherein said removing background pixels comprises: for each first pixel in the warped garment image:

examining neighboring second pixels and determining whether they should be added to the first pixel's region according to a predetermined threshold;

creating a binary mask corresponding to the remaining background pixels; and

rendering said background pixels transparent.

18. The method of claim 17, further comprising blending said body image, said

warped garment image and said binary mask.

19. A system for purchasing garments via e-commerce, comprising:

a system server; and

a plurality of users using electronic communication devices communicating with said system server over the Internet;

said system server storing at least one database of body sizes and postures and garment images comprising garments captured on ghost mannequins;

an administrator module communicating with said system server, said administrator module configured to allow creating a database used for training, both for body images and garment images;

said system server additionally comprising a user API module configured to serve various front end applications running on said users' electronic communication devices for interactively matching garments to users; said system server additionally comprising a processing engine configured to perform calculations for building said at least one database and for matching garments to users.

20. The system of claim 19, wherein said user API module further comprises a

graphical user interface (GUI) configured to create a training database for body images and garment images.

21. The system of claim 20, wherein said processing module is configured to build a statistical model configured to depict changes a shape can take, based on said training database.

22. The system of claim 21, wherein said GUI is configured to allow each one of said plurality of users to load input images one at a time, and to manually select contour points on said input images.

23. The system of claim 22, wherein said manually selecting contour points

comprises selecting regular contour points and special landmark points.

24. The system of claim 22, wherein said GUI is further configured to allow each one of said plurality of users to manipulate single contour points in an existing contour.

Description:
FITTING CLOTHING ARTICLES TO HUMAN IMAGES

TECHNOLOGY FIELD

The present invention is in the field of fashion purchase via eCommerce.

CROSS-REFERENCE TO RELATED PATENT APPLICATIONS

This patent application claims priority from and is related to U.S. Provisional Patent Application Serial Number 62/362,052, filed 14 July 2016, which is incorporated by reference in its entirety herein.

BACKGROUND

Shopping for clothes in physical stores can be an arduous task and, due to travelling and parking, can be very time consuming. With the advent of online shopping, consumers may purchase clothing, while staying home, via a computer or any electronic device connected to the Internet. One difference between purchasing clothes online and purchasing clothes in a store is the lack of a physical dressing room to determine if and how an article of clothing fits the particular consumer. Since different consumers can have different dimensions, seeing how an article of clothing fits, by use of a dressing room, can be a very important aspect of a successful and satisfying shopping experience.

SUMMARY

According to an aspect of the present invention there is provided a method of purchasing garments via e-commerce, comprising: obtaining a user's body image; selecting by said user at least one displayed garment; and creating a match between said selected garment and said body image, said match indicating how said user wishes to wear said garment.

Creating a match may comprise: selecting by said user landmark points on said at least one selected displayed garment to create a garment contour; selecting by said user first points matching said selected landmark points on a displayed nominal body image; displaying said user's body image; selecting by said user second points of interest on said displayed user's body image; automatically matching said first and second points to create a body contour; and automatically matching said garment contour to said body contour.

Obtaining a user's body image may comprise: taking by said user his own picture using a mobile device's camera, while displaying a level on said mobile device's screen; receiving said user's picture; extracting contours from said picture; calculating said user's height in pixels; calculating the ratio cm:pixels in the picture; calculating body measurements of said user; and computing said user's size.

Matching said first and second points to create a body contour may comprise using statistical models for iteratively deforming said first points to match said second points.

The statistical models may comprise an Active Shape Model (ASM) and Active

Appearance Model (AAM). The method may further comprise calculating a sub-contour of said body contour, said sub-contour comprising the body part into which said garment will be warped.

Matching said garment contour to said body contour may comprise: determining the best linear transformation between each pair of landmark points on said garment contour and each pair of landmark points on said body contour; and applying said best linear transformation to said garment contour points between said two landmarks.

The best linear transformation may be determined according to the sum of squared errors. Matching said garment contour to said body contour may further comprise creating a triangle mesh from said garment contour.

Creating a triangle mesh may comprise: defining a nominal mesh edge size; diluting said garment contour according to said defined mesh edge size; removing contour points whose distances to their neighbors are less than said defined mesh edge size; calculating shrunk and expanded contours; calculating an internal grid, inside of the shrunk contour, where the distances between the grid points are equal to said defined mesh edge size; and wherein said triangle mesh comprises said diluted contour, said shrunk and expanded contours and said internal grid. Calculating shrunk and expanded contours may comprise calculating the normal to each contour point, and taking the points on both sides of the normal, with a distance equal to said defined mesh edge size.

The method may further comprise: removing from said mesh points whose closest neighbor is closer than said defined mesh edge size. The method may further comprise performing a number of optimization iterations wherein each point is moved to the average position of its neighbors.

The method may further comprise deforming said garment mesh to fit said body image, using one of linear and non-linear optimization.

The method may further comprise using field morphing to warp said garment image to fit said body image.

The method may further comprise removing background pixels from said warped garment image.

Removing background pixels may comprise: for each first pixel in the warped garment image: examining neighboring second pixels and determining whether they should be added to the first pixel's region according to a predetermined threshold; creating a binary mask corresponding to the remaining background pixels; and rendering said background pixels transparent. The method may further comprise blending said body image, said warped garment image and said binary mask.

According to another aspect of the present invention there is provided a system for purchasing garments via e-commerce, comprising: a system server; and a plurality of users using electronic communication devices communicating with said system server over the Internet; said system server storing at least one database of body sizes and postures and garment images comprising garments captured on ghost mannequins; an administrator module communicating with said system server, said administrator module configured to allow creating a database used for training, both for body images and garment images; said system server additionally comprising a user API module configured to serve various front end applications running on said users' electronic communication devices for interactively matching garments to users; said system server additionally comprising a processing engine configured to perform calculations for building said at least one database and for matching garments to users.

The user API module may further comprise a graphical user interface (GUI) configured to create a training database for body images and garment images.

The processing module may be configured to build a statistical model configured to depict changes a shape can take, based on said training database.

The GUI may be configured to allow each one of said plurality of users to load input images one at a time, and to manually select contour points on said input images.

The manually selecting contour points may comprise selecting regular contour points and special landmark points.

The GUI may be further configured to allow each one of said plurality of users to manipulate single contour points in an existing contour.

BRIEF DESCRIPTION OF THE DRAWINGS

For better understanding of the invention and to show how the same may be carried into effect, reference will now be made, purely by way of example, to the accompanying drawings.

With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present invention only, and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for a fundamental understanding of the invention, the description taken with the drawings making apparent to those skilled in the art how the several forms of the invention may be embodied in practice. In the accompanying drawings:

Fig. 1 is a schematic drawing of the system according to the present invention;

Fig. 2 is a block diagram showing the various modules of the user API module;

Fig. 3 is a flowchart showing the steps taken by the ASM algorithm to create the ASM model of the database;

Fig. 4 is a flowchart showing the steps taken by the AAM algorithm to create the AAM model of the database;

Fig. 5 is a flowchart showing the steps taken by the model application algorithm for each resolution level;

Fig. 6 is a flowchart showing the steps taken by the PCA algorithm;

Fig. 7 is a flowchart showing the steps taken by the open edges transformation algorithm;

Fig. 8 is a flowchart showing the steps taken by the field morphing algorithm; and

Fig. 9 is a flowchart 900 showing the steps taken by the pyramid blending algorithm.

DETAILED DESCRIPTION OF SOME EMBODIMENTS

The present invention provides an innovative way of fashion purchase via eCommerce. It allows each user to choose one or more garments and simulate how these garments look on him, thus giving a shopping experience that has not existed before.

Fig. 1 is a schematic drawing of the system according to the present invention. System 100 comprises a system server 110 and a plurality of user electronic communication devices (only two shown, 140 and 150) such as smartphones, tablets, PCs etc., connected to the Internet.

The system server 110 comprises at least one database 130 comprising a plurality of body sizes and postures and a plurality of garment images. System server 110 also comprises a user API module configured to serve various front end applications running on the users' devices (e.g. website, plug-in to website, application, software) and enable interactively matching garments to users and creating a database used for training, both for body images and garment images. A processing engine 120 is configured to perform all the necessary calculations for building the database 130 and for matching garments to users.

An administrator module 160 provides the system administrator with a graphic user interface allowing the creation of the database used for training, both for body images and garment images. The database is used to build a statistical model of the shape we want to recognize in the photo, in our case - the body and garment contours. This statistical model depicts the changes the shape can take, based on the training examples. Therefore, we won't see distortions that didn't exist in the database itself.

The user interface allows the user to load input images one at a time, and to manually select contour points. It allows selecting both regular contour points and special landmark points, which have valuable meaning in the algorithm, as will be detailed below.

In addition, the user interface allows the manipulation of single contour points in an existing contour - to move the point, remove it, add new points etc.

The process of contour extraction relies on training the algorithm based on a given database.

The user's size is calculated automatically using the following method:

1. The user takes his own picture using his mobile device's camera.

2. An electronic level is displayed to prevent photographing when the device is not essentially at a right angle to the floor.

3. After taking the photograph the user is prompted to enter his height.

4. The contour extraction algorithm (detailed below) is applied to the picture.

5. The user's measurements (e.g. height) in pixels are calculated from the contour extraction algorithm's result.

6. The ratio cm:pixels in the picture is calculated.

7. The user's measures are calculated in pixels and translated to centimeters (e.g. shoulders breadth, chest circumference, waist circumference and hips circumference).

8. The user's size is computed using any available method such as a sizes table of the specific fashion house, exact sizes of various parts of garments provided by stores, nominal sizes of body parts for various garment sizes etc.
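
By way of non-limiting illustration only, steps 5-7 above may be sketched in Python as follows. The function name, its inputs (pixel coordinates assumed to have been produced by the contour extraction algorithm) and the returned keys are hypothetical:

```python
import math

def estimate_measurements(user_height_cm, top_y_px, bottom_y_px,
                          shoulder_left_px, shoulder_right_px):
    """Derive the cm:pixels ratio from the user's entered height,
    then translate pixel measures to centimeters."""
    height_px = abs(bottom_y_px - top_y_px)      # user's height in pixels
    cm_per_px = user_height_cm / height_px       # the ratio cm:pixels
    shoulders_px = math.dist(shoulder_left_px, shoulder_right_px)
    return {"cm_per_px": cm_per_px,
            "shoulders_breadth_cm": shoulders_px * cm_per_px}
```

A full implementation would derive all the listed measures (chest, waist and hips circumferences) in the same way and then consult, e.g., the sizes table of the specific fashion house.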

The method naturally requires photographs of the one or more garments to fit the user. The trivial approach to the problem is to use the common photographs of models wearing the garments that are accessible on the Internet. However, assessing the garment posture when it is worn by a model and altering its posture to fit the user is either algorithmically nearly impossible, or takes massive amounts of development time. The reason is that in order to alter the garment posture, the garment's folds and wrinkles should be found, smoothed, and new ones need to be simulated.

The method of the present invention uses photographs of garments that were taken on ghost mannequins. This way, the garment's original posture is known and its folds and wrinkles can be left unchanged, as they look natural on the target posture of the garment.

Garment-to-body matching

Because of the large variations in garment forms, shapes and formations, a smart method is needed to match between the body and each garment. The method should be, on the one hand, generic enough to fit all garment sorts, and, on the other hand, specific enough to give complete information, for each garment and body, of the way the garment should be worn on the body.

In order to achieve that, a smart matching method was formed. The matching method consists of several stages:

- Garment landmark selection: A graphic user interface is displayed to the user, allowing him to select landmark points on the garment.

- Presenting a "nominal" body image: One of the body images is selected to be the "nominal" body image. This image is presented to the user.

- Garment-to-body matching: For each selected garment landmark point, the user specifies the matching coordinate on the body image.

The matching specified above defines how the given garment should be matched to the "nominal" body image. However, we need to be able to match it to all body images.

- Smart body contour correspondence: A normalization method is needed for different body contours in order to be able to transform the matching to the "nominal" body image to a matching to the user image.

This is done via extraction of body points of interest, in the contour extraction step described below. Examples of those points are the tips of the hands, the shoulders, the armpits, etc. The number of body points of interest is equal in all body images. Then, the contour is resampled evenly between those points of interest, resulting in contours with an equal number of points. That way, points having the same index in different body contours depict the same body locations, e.g. the elbows, the middle points between the armpits, the waist, etc. Therefore, if some garment landmark is matched to the 35th point in the normalized body contour, for example, matching it to the 35th point in the new user body contour will yield a perfect match.
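
By way of non-limiting illustration, the even resampling between points of interest may be sketched as follows. This is a minimal Python sketch; the function name and the fixed number of points per segment are assumptions:

```python
import numpy as np

def resample_contour(contour, poi_idx, pts_per_segment=20):
    """Resample a closed contour so that every segment between consecutive
    points of interest contains the same number of evenly spaced points,
    giving all body contours an equal number of points."""
    out = []
    for a, b in zip(poi_idx, poi_idx[1:] + [poi_idx[0] + len(contour)]):
        seg = np.asarray([contour[i % len(contour)] for i in range(a, b + 1)],
                         dtype=float)
        # cumulative arc length along the segment
        d = np.r_[0.0, np.cumsum(np.linalg.norm(np.diff(seg, axis=0), axis=1))]
        targets = np.linspace(0.0, d[-1], pts_per_segment)
        out.append(np.c_[np.interp(targets, d, seg[:, 0]),
                         np.interp(targets, d, seg[:, 1])])
    return np.vstack(out)
```

With this normalization, a garment landmark matched to the i'th point of the nominal contour can simply be matched to the i'th point of any user's resampled contour.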

Body and garment database

A graphic user interface allows creating an image database that is used for training, both for body images and garment images.

The interface loads the user's input images one at a time, and allows the user to manually select the contour points as described in the method above. In addition, the interface allows the manipulation of a single contour point in an existing contour - to move the point, remove it, add new points etc.

Fig. 2 is a block diagram showing the various modules of the processing engine 120, as will be detailed below.

Contour extraction 200

The process relies on training the algorithm based on a given database.

Both the ASM and AAM algorithms were implemented in order to extract the contour.

1. Active Shape Models (ASMs) 210 are statistical models of the shape of objects which iteratively deform to fit an example of the object in a new image.

We use the database to build a statistical model of the shape we want to recognize in the photo, in our case - the body contour. The statistical model is created by compressing the body contour database using the PCA algorithm.

This statistical model depicts the changes the shape can take, based on the training examples. Therefore, we won't see distortions that didn't exist in the database itself.

The ASM works by alternating the following steps:

- Generate a suggested shape by looking in the image around each point for a better position for the point.

- Conform the suggested shape to the point distribution model, commonly called a "shape model" in this context.

Fig. 3 is a flowchart 300 showing the steps taken by the ASM algorithm to create the ASM model of the database.

In step 310 we load a database of contours that were manually created.

In step 320 we align the data using the Procrustes analysis algorithm.

In step 330 we concatenate the aligned contour points of each image in the database, creating a data set.

In step 340 we create a statistical model out of this data set using the PCA (Principal Component Analysis) algorithm described below. The PCA algorithm's output is the data set's eigenvalues and eigenvectors.

In step 350 we take the eigenvectors that account for 99% of the eigenvalues' energy.

In step 360 we save the PCA statistical model we created.

In step 370 we calculate the aligned mean shape of the data set.

In step 380 we calculate the transformation of the mean shape that will allow us to reverse the alignment later.
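
By way of non-limiting illustration, steps 310-360 may be sketched as follows. The alignment here is simplified (translation and scale only, where step 320 uses full Procrustes analysis), and the function name is hypothetical:

```python
import numpy as np

def build_asm_model(shapes, energy=0.99):
    """shapes: list of (n_points, 2) arrays of manually marked contours.
    Returns the mean vector, the retained eigenvectors (as rows) and
    their eigenvalues."""
    aligned = []
    for s in shapes:
        s = np.asarray(s, float)
        s = s - s.mean(axis=0)          # remove translation
        s = s / np.linalg.norm(s)       # remove scale
        aligned.append(s.ravel())       # concatenate the contour points
    X = np.array(aligned)
    mu = X.mean(axis=0)
    # PCA via SVD of the centered data set
    U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
    eigvals = S ** 2
    # keep the eigenvectors accounting for `energy` of the eigenvalues' energy
    keep = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), energy) + 1
    return mu, Vt[:keep], eigvals[:keep]
```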

2. An Active Appearance Model (AAM) 215 is a computer vision algorithm for matching a statistical model of object shape and appearance to a new image.

The algorithm uses the difference between the current estimate of appearance and the target image to drive an optimization process. By taking advantage of the least squares techniques, it can match to new images very swiftly.

The appearance statistical model in our case is created using the following steps:

i. The body image derivative is calculated. The exact calculation method is unique and is described below.

ii. For each contour point, we extract the image derivative profiles in the normal direction of each contour point. We define the profile length as a parameter of the algorithm.

iii. The extracted profiles are concatenated to form a data set.

iv. We apply the PCA algorithm on this data set to get its eigenvectors and eigenvalues.

v. We take the eigenvectors that account for a certain percentage of the eigenvalues' energy. This percentage is a parameter of the algorithm.

vi. We save the PCA statistical model that we calculated.

Fig. 4 is a flowchart 400 showing the steps taken by the AAM algorithm to create the AAM model of the database.

In step 410 we load a database of contours that were manually created.

In step 420 we calculate the normal directions of each contour point. Those are the directions that are perpendicular to the contour in each point.

In step 430 we extract the image profiles in the given direction in each contour point. We define the profile length as a parameter of the algorithm, for example profileLength=8 pixels.

The profiles contain this number of pixels of every RGB component of the image, on each side of the contour point.

Eventually, the total number of pixels extracted for each contour point equals:

(2·profileLength + 1)·n = 51 pixels

where

2·profileLength + 1 = the number of pixels on each side of the contour point plus the contour point itself; and

n = the number of RGB components. For profileLength=8 pixels and n=3 we get 51 pixels extracted for each contour point.

In step 440 the extracted profiles are concatenated to form a data set.

In step 450 we apply the PCA algorithm on this data set to get its eigenvectors and eigenvalues.

In step 460 we take the eigenvectors that account for a certain percentage (e.g. 99%) of the eigenvalues energy.

In step 470 we save the PCA statistical model we received.

We create a pyramid of every one of the training images. The pyramid consists of images scaled in powers of 2: 1, 0.5, 0.25, etc. We apply the algorithm depicted above to every image in the pyramid, so we receive a PCA model for every scale.
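
By way of non-limiting illustration, the image pyramid may be sketched as follows; plain 2x2 block averaging stands in here for whatever low-pass downsampling the actual implementation uses:

```python
import numpy as np

def build_pyramid(image, levels=3):
    """Pyramid of images scaled in powers of 2 (1, 0.5, 0.25, ...).
    Each level averages 2x2 blocks of the previous level."""
    pyramid = [np.asarray(image, float)]
    for _ in range(levels - 1):
        prev = pyramid[-1]
        # crop to even dimensions so 2x2 blocks tile exactly
        h, w = (prev.shape[0] // 2) * 2, (prev.shape[1] // 2) * 2
        prev = prev[:h, :w]
        down = (prev[0::2, 0::2] + prev[1::2, 0::2] +
                prev[0::2, 1::2] + prev[1::2, 1::2]) / 4.0
        pyramid.append(down)
    return pyramid  # pyramid[0] is the full-resolution image
```

A PCA model is then trained per level, and the search proceeds from the coarsest level up, as described next.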

The model (both ASM and AAM) application is done in an iterative manner. It is performed on the image pyramid that is created from the input image. At first, a relatively broad search is performed on a large area of the low resolution image, and as we go up the resolution scale, the search area is reduced, while the accuracy grows.

Fig. 5 is a flowchart 500 showing the steps taken by the model application algorithm for each resolution level:

In step 510 we calculate the normal directions of every contour point.

In step 520 we extract the contour point profiles in the normal directions, for every contour point. The profile lengths are greater than the regular profiles we used for training.

We define a constant: searchLength=10 pixels.

The length of every profile is:

numOfRGBComponents · (2·(profileLength + searchLength) + 1) = 3·(2·(8 + 10) + 1) = 111 pixels

In step 530, for every contour point, we iterate through all the possible shifts of the profile within the search range. For every contour point, for every shift in the search range, we calculate the appropriate score.

Let λ be the vector of eigenvalues, A the matrix of eigenvectors, μ the mean vector of the data, and x the current profile. Then:

normalizedProfile = A·(x − μ)/√λ

score = ∑ᵢ₌₁ⁿ [normalizedProfile(i)]²

where normalizedProfile(i) is the i'th pixel in the normalized profile.

In step 540, for every contour point, we choose the shift which minimizes this score. We then move every contour point according to this shift.

In step 550 we use the ASM model which was previously created in order to make sure that the new contour still corresponds to the shape statistical model. We project the new contour to the new basis created by the subset of chosen eigenvectors (step 560) using the ASM's PCA model. We then check (step 570) that each coordinate is found in a range of 3 standard deviations of the mean (99.7% of the data, assuming a normal distribution). If not, the coordinate is readjusted to the range border. Then, we project the shape back to the regular coordinates (step 580) by calculating the inverse transformation of step 560. This can be done in several ways, including:

- Calculating the Moore-Penrose pseudoinverse matrix

- Using the transpose of the matrix of step 560. Because the matrix is orthogonal, it satisfies Pᵀ = P⁻¹

We iterate until convergence, using the shape of the last step as input to the first step of the next iteration.
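
By way of non-limiting illustration, steps 560-580 (projection onto the PCA basis, readjustment of each coordinate to within 3 standard deviations, and back-projection via the transpose of the orthogonal basis matrix) may be sketched as follows; the function name is hypothetical:

```python
import numpy as np

def constrain_shape(shape, mu, P, eigvals, n_std=3.0):
    """shape: concatenated contour coordinates; mu: mean shape;
    P: (k, 2n) matrix whose rows are the chosen eigenvectors;
    eigvals: the corresponding eigenvalues."""
    b = P @ (shape - mu)                 # step 560: project to the new basis
    limit = n_std * np.sqrt(eigvals)     # std deviation = sqrt(eigenvalue)
    b = np.clip(b, -limit, limit)        # step 570: readjust to the range border
    return mu + P.T @ b                  # step 580: back to regular coordinates
```

Because the rows of P are orthonormal, P.T implements the inverse projection, matching the second option listed above.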

PCA

Principal Component Analysis (PCA) is an algorithm which creates an orthogonal transformation that transforms a set of observations to linearly uncorrelated variables. The input variables may, of course, be correlated.

The output variables are called the principal components. The first principal component has the largest variance, the second has the second highest, etc. The principal components are all orthogonal, thus forming a set of basis vectors.

The algorithm's input is a matrix, whose columns are the data vectors (feature vectors) of the training set examples.

Fig. 6 is a flowchart 600 showing the steps taken by the PCA algorithm:

In step 610 we calculate the mean data vector of the data set.

In step 620 we subtract the mean data vector from all the data vectors.

In step 630 we perform eigendecomposition of the matrix using the Singular Value Decomposition (SVD) algorithm.

Let X be the data matrix. This produces a diagonal matrix S, with nonnegative diagonal elements in decreasing order, and unitary matrices U and V, so that X = U·S·V*

The eigenvalues are the squares of the values in the main diagonal of S.

The eigenvectors are the columns of U.

3. Color image derivative calculation 220

A regular image derivative can be calculated only on a gray scale image. However, our input is a color image. The trivial solution is to transform the color image to a gray scale image and then calculate the derivative, but much information is lost in this conversion process.

We want to construct a smarter image derivative calculation that works directly on the color image, preventing this information loss.

The RGB color of each pixel is treated as a 3D vector, and the strength of the edge is the magnitude of the maximum gradient. We use the Jacobian J, whose elements are the partial derivatives of the RGB components with respect to x and y. The derivative strength is the greatest eigenvalue of the 2×2 matrix JᵀJ.
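
By way of non-limiting illustration, such a color derivative may be sketched as follows, using the closed-form largest eigenvalue of the per-pixel 2×2 matrix JᵀJ. Central differences stand in here for the patent's unique derivative calculation, and the function name is an assumption:

```python
import numpy as np

def color_edge_strength(img):
    """img: (H, W, 3) RGB array. Each pixel's color is a 3D vector; the
    edge strength is the largest eigenvalue of J^T J, where J holds the
    partial derivatives of the RGB components w.r.t. x and y."""
    img = np.asarray(img, float)
    dy, dx = np.gradient(img, axis=(0, 1))   # each of shape (H, W, 3)
    # entries of the 2x2 matrix J^T J at every pixel
    jxx = np.sum(dx * dx, axis=2)
    jyy = np.sum(dy * dy, axis=2)
    jxy = np.sum(dx * dy, axis=2)
    # largest eigenvalue of [[jxx, jxy], [jxy, jyy]] in closed form
    tr, det = jxx + jyy, jxx * jyy - jxy ** 2
    return (tr + np.sqrt(np.maximum(tr ** 2 - 4 * det, 0.0))) / 2.0
```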

4. Other possible contour extraction algorithms

- GrabCut is an image segmentation method based on graph cuts. The algorithm estimates the color distribution of the target object and that of the background using a Gaussian mixture model. This is used to construct a Markov random field over the pixel labels, with an energy function that prefers connected regions having the same label, and a graph cut based optimization is run to infer their values. This procedure is repeated until convergence.

- Other graph partitioning methods include Graph Cut and Supervised Image Segmentation using MRF and MAP.

- Trainable segmentation techniques include various Neural Network approaches such as the Kohonen Map and Pulse-coupled Neural Networks.

Garment Deformation 225

1. Sub-contour Extraction 230

The output of the contour extraction module is two contours - the body contour and the garment contour. The objective of this phase is to calculate the exact part of the body contour into which the one or more garments will be warped.

In fact, we need to calculate a body sub-contour which is relevant specifically to the one or more garments we are going to warp. For example, in case the garment is a shirt, the body contour part of the legs needs to be removed. The output must be a body sub-contour which has exactly the same number of contour points as the garment(s) contour.

In order to achieve that, we use the garment-to-body matching described above. We iterate through garment edges defined by landmark pairs.

- If the garment edge is a closed edge, we take the corresponding contour portion between those two points.

- If the garment edge is an open edge (an edge through which a body part comes out), we take the corresponding contour portion from the garment using the open edges transformation described below.

2. Open Edges Transformation 235

This transformation is used to adjust the garment contour portion to the corresponding body contour portion.

Fig. 7 is a flowchart 700 showing the steps taken by the open edges transformation algorithm.

For each two matching landmarks (in the body contour and in the garment contour):

In step 710 we use Procrustes analysis algorithm to determine a linear transformation (translation, reflection, orthogonal rotation, and scaling) of the landmark points of the garment to best conform them to the landmark points of the body. The goodness-of-fit criterion is the sum of squared errors.
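Step 710 can be sketched as follows (an illustrative NumPy implementation of a similarity Procrustes fit, not the application's actual code; names are ours). The returned function applies the fitted transformation to arbitrary points, which is what step 720 then does with the contour points between the two landmarks:

```python
import numpy as np

def procrustes_fit(garment_pts, body_pts):
    """Least-squares translation, scaling and orthogonal rotation
    (possibly with reflection) conforming the garment landmarks to the
    body landmarks. Points are rows of (x, y). Returns a function that
    applies the same transform to arbitrary points."""
    mu_g = garment_pts.mean(axis=0)
    mu_b = body_pts.mean(axis=0)
    G = garment_pts - mu_g
    B = body_pts - mu_b
    # Orthogonal part via SVD of the cross-covariance matrix.
    U, S, Vt = np.linalg.svd(G.T @ B)
    R = U @ Vt
    # Least-squares scale factor.
    s = S.sum() / (G ** 2).sum()
    return lambda pts: s * (pts - mu_g) @ R + mu_b
```

When the body landmarks are an exact similarity transform of the garment landmarks, the fit recovers that transform exactly; otherwise it minimizes the sum of squared errors, matching the goodness-of-fit criterion above.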

In step 720 we apply this linear transformation to the garment contour points between the two landmarks, and return the transformed points.

3. Garment mesh creation 240

In order to perform the garment deformation, a full mesh needs to be created from the contour. The triangle mesh should be as uniform as possible, meaning that the triangle areas, edges and angles should be as constant as possible throughout the mesh. This phase consists of the following steps:

i. Defining a nominal mesh edge size. This is a parameter of the algorithm.

ii. Diluting the contour according to this parameter. Contour points whose distances to their neighbors are less than this parameter are removed.

iii. Calculating shrunk and expanded contours. This is done by calculating the normal at each contour point, and taking the points on both sides along the normal, at a distance equal to the nominal mesh edge size. These contours are responsible for supporting the garment contour curvature during the optimization process to follow.

iv. Calculating an internal grid, inside of the shrunk contour, where the distances between the grid points are equal to the nominal mesh edge size.

v. The final grid consists of all the parts calculated so far: the diluted contour, the shrunk and expanded contours, and the internal grid.

vi. A final post-processing stage:

a. Points whose closest neighbor is closer than the nominal mesh edge size are removed.

b. Several iterations of optimization are made, where in each iteration, each point is moved to the average position of its neighbors.

Delaunay triangulation is a method in computational geometry that receives as an input a set of points, and returns a triangulation such that no point in the set is inside the circumcircle of any triangle.

In order to prevent slim triangles, this triangulation maximizes the minimum angle of all the angles of the triangles.
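As an illustration, SciPy ships such a routine (the application does not name a specific library; this small example is ours):

```python
import numpy as np
from scipy.spatial import Delaunay

# Unit square corners plus the center point. Delaunay guarantees that
# no input point lies inside the circumcircle of any returned triangle.
points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [1.0, 1.0], [0.5, 0.5]])
tri = Delaunay(points)
# tri.simplices holds one row of three point indices per triangle;
# the interior point splits the square into four triangles.
```

In the mesh-creation phase, the input point set would be the final grid assembled from the diluted contour, the shrunk and expanded contours, and the internal grid.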

The final mesh is created by running the Delaunay triangulation algorithm on the resultant grid that we received after the stages depicted above.

4. Mesh optimization 245

During the mesh optimization process, the garment mesh is deformed to fit the body image. This stage's objective is to deform the garment such that the result will look as realistic as possible.

In order to do that, a non-linear optimization is used. We use the Trust-Region-Reflective Least Squares Algorithm, which is based on the interior-reflective Newton method.

This is an iterative algorithm. The idea behind the trust-region approach is that if we are currently at some point in space and we want to move to a point with a lower function value, we approximate the function we optimize with a simpler function which reasonably reflects the behavior of our function in a neighborhood around the current point. This neighborhood is the trust region. A trial step is computed by minimizing over this neighborhood. These iterations are repeated until convergence. The trust-region dimension is adjusted between iterations; in particular, it is decreased if the trial step is not accepted.

The function we are trying to optimize maps the mesh point coordinates to a cost matrix. We are ultimately trying to minimize the norm of this cost.

The cost matrix is a concatenation of the following parts. Each component has its own weight in the final cost matrix. These weights are parameters of the algorithm.

• Bringing the garment contour points to the body contour: This is the purpose of the optimization. What we want is to bring the garment contour points, which are a part of the garment total mesh, as close as possible to the body sub-contour that was described above. As we recall, the garment contour and the body sub-contour have essentially the same number of points. We perform this procedure by minimizing the total Euclidean distance between those two point groups.

• Edge lengths constraint: We calculate the edge lengths of the mesh. Our objective is to keep them as constant as possible during the optimization, in order to avoid unnecessary garment stretching. In order to achieve that, we minimize the distance norm between the current iteration edge lengths, and the original garment edge lengths.

• Laplacian Coordinates of the contour constraint: A curve Laplacian is defined for each point in the curve. It is computed only for the contour points and is zero for the rest of the mesh. Specifically, the curve Laplacian coordinate of a point is computed as the difference between the point and the average of its neighbors on the curve.

Ultimately, Laplacian coordinates account for the curvature of the contour. We keep them as constant as possible during the optimization to prevent unnecessary distortions of the garment. This is done by minimizing the distance norm between the current iteration coordinates, and the original ones before the optimization.

• Mean value coordinates constraint: For each point in the mesh, we want to maintain its relative position with respect to its neighboring points during the deformation process. This is done by keeping the mean value coordinates constant. The mean value coordinates can be calculated by:

w_j = (tan(α_{j-1}/2) + tan(α_j/2)) / ||v_j - v_t||

where α_j is the angle formed by the vectors v_j - v_t and v_{j+1} - v_t. Normalizing each weight function w_j by the sum of all weight functions yields the mean value coordinates of v_t with respect to its neighboring points.

• Making sure that closed edges are out of the body: Obviously, closed garment edges cannot "enter" the body. Therefore, very high costs are given in those cases. For each contour point in a closed edge, we calculate the vector pointing from the current iteration's location of the point to its location on the sub-contour. Then, we calculate the dot product (projection) of that vector and the point's normal vector. If this product is positive, it means that the point is inside the body, and this product value is the cost. If it is negative, the point is outside the body as needed, and the cost is nullified.

The output of this phase is a new garment mesh that fits the user's body.
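A heavily simplified sketch of such a fit, keeping only the first two cost components (contour distance and edge lengths) and using SciPy's trust-region-reflective solver; all function and parameter names are ours, not the application's:

```python
import numpy as np
from scipy.optimize import least_squares

def deform_mesh(mesh_pts, contour_idx, body_subcontour, edges,
                w_contour=1.0, w_edge=0.5):
    """Deform mesh points so the garment contour points (rows given by
    contour_idx) move onto the body sub-contour, while mesh edge
    lengths stay close to their originals. Uses the
    trust-region-reflective ('trf') least-squares solver."""
    orig_len = np.linalg.norm(
        mesh_pts[edges[:, 0]] - mesh_pts[edges[:, 1]], axis=1)

    def residuals(flat):
        pts = flat.reshape(-1, 2)
        # Contour cost: distance of garment contour points to the body.
        r_contour = (pts[contour_idx] - body_subcontour).ravel()
        # Edge-length cost: deviation from the original edge lengths.
        cur_len = np.linalg.norm(pts[edges[:, 0]] - pts[edges[:, 1]],
                                 axis=1)
        return np.concatenate([w_contour * r_contour,
                               w_edge * (cur_len - orig_len)])

    res = least_squares(residuals, mesh_pts.ravel(), method='trf')
    return res.x.reshape(-1, 2)
```

The weights play the role of the per-component weights in the cost matrix described above; the full cost would also include the Laplacian, mean-value-coordinate and closed-edge terms.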

5. Other possible linear and non-linear mesh optimization algorithms are, for example:

• Quadratic programming

• Levenberg-Marquardt method

• Gauss-Newton algorithm and Newton's method

• Finite element method

6. Field Morphing 250


We use field morphing in order to warp the garment to fit the body image.

Fig. 8 is a flowchart 800 showing the steps taken by the field morphing algorithm.

In step 810 we calculate the Delaunay triangulation of the body sub-contour which was calculated in the previous phase.

In step 820 we calculate the Delaunay triangulation of the garment contour.

In step 830, for each triangle in the two triangulations, we calculate whether it is an internal or an external triangle. An internal triangle is a triangle that touches at least one internal vertex.

In step 840, for every internal triangle in the triangulations, calculate the geometric transformation that takes the triangle vertices in the garment triangulation to the triangle vertices of the corresponding triangle in the body triangulation.
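The per-triangle transformation of step 840 can be sketched as an affine fit (NumPy; the helper names are ours, not the application's):

```python
import numpy as np

def triangle_affine(src_tri, dst_tri):
    """2x3 affine matrix mapping the three vertices of src_tri onto
    dst_tri. Both arguments are (3, 2) arrays of (x, y) rows."""
    # Homogeneous source coordinates: each row is (x, y, 1).
    A = np.hstack([src_tri, np.ones((3, 1))])
    # Solve A @ M.T = dst_tri for the 2x3 affine matrix M.
    return np.linalg.solve(A, dst_tri).T

def apply_affine(M, pts):
    """Apply a 2x3 affine matrix to an (n, 2) array of points."""
    return pts @ M[:, :2].T + M[:, 2]
```

Step 850 then applies this matrix to the garment pixels that fall inside the triangle, producing the warped triangle that step 860 copies into the body image.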

In step 850 we build a grid to correspond to the garment pixels and apply the calculated transformation to the grid section that belongs to the triangle to get the warped triangle.

In step 860 we copy this triangle to its place in the body image.

Repeat iteratively for each internal triangle.

Post processing 255

1. Garment background removal 260

Although only the inner part of the garment contour is warped into the body image, this part might still contain background pixels which would pollute the resulting image, because of inaccuracies in the contour extraction phase. Therefore, after the warped garment is built, we automatically remove pixels which likely belong to the background. In order to do so, we use the region growing algorithm.

Region growing is a region-based image segmentation method, which takes as input an image, an initial seed point, and a threshold. It then examines the neighboring pixels of the seed point and determines whether they should be added to the region according to the given threshold. The process is iterated until there are no additional pixels to add to the region.

In order to use the algorithm, we paste the warped garment on its original background, and use the upper-rightmost pixel as the seed.

The algorithm returns a mask that corresponds to the garment background. We then use this mask to make the background pixels transparent, preventing background pixels from infiltrating the final result.
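A minimal sketch of such a region-growing pass (pure Python/NumPy; the seed-color comparison rule is our assumption, since the application does not specify the exact criterion):

```python
from collections import deque

import numpy as np

def region_grow(img, seed, threshold):
    """Grow a region from `seed` (row, col): a 4-connected neighbor
    joins the region if its color differs from the seed color by less
    than `threshold` (Euclidean RGB distance). Returns a boolean mask
    of the grown region (here, the garment background)."""
    h, w = img.shape[:2]
    seed_color = img[seed].astype(float)
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if np.linalg.norm(img[nr, nc].astype(float)
                                  - seed_color) < threshold:
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```

Seeding at the upper-right corner, as described above, grows the background region; the returned mask is then used to make those pixels transparent.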

2. Image Blending 265

In order to make the final insertion of the garment image into the body image as seamless as possible, we use the pyramid blending algorithm.

Fig. 9 is a flowchart 900 showing the steps taken by the pyramid blending algorithm.

In step 910 the algorithm receives 3 inputs: the two images that need to be blended, and a binary mask that defines which part will be taken from each image.

The images are divided into their RGB components.

In step 920, for each component, we create its Laplacian pyramid. Let L1 and L2 be the Laplacian pyramids of images 1 and 2 respectively, for a given RGB component.

In step 930 we create a Gaussian pyramid out of the binary mask. Let Gm be this pyramid.

In step 940 we merge all those pyramids into a single Laplacian pyramid using the formula:

Lout{i} = Gm{i}·L1{i} + (1 - Gm{i})·L2{i}

In step 950 we create the resultant image by collapsing the pyramid we calculated in the previous step.
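A simplified single-channel sketch of steps 920-950 (run once per RGB component; NumPy/SciPy, with our own choice of blur and upsampling details, which the application does not specify):

```python
import numpy as np
from scipy import ndimage

def pyramid_blend(img1, img2, mask, levels=4):
    """Blend two single-channel images with a soft mask: build
    Laplacian pyramids L1, L2 and a Gaussian pyramid Gm of the mask,
    merge with Lout{i} = Gm{i}*L1{i} + (1-Gm{i})*L2{i}, and collapse
    the merged pyramid back into an image (step 950)."""
    def down(im):
        return ndimage.gaussian_filter(im, 1.0)[::2, ::2]

    def up(im, shape):
        out = np.zeros(shape)
        out[::2, ::2] = im
        # Multiply by 4 to compensate for the inserted zeros.
        return ndimage.gaussian_filter(out, 1.0) * 4.0

    def laplacian_pyramid(im):
        pyr = []
        for _ in range(levels):
            small = down(im)
            pyr.append(im - up(small, im.shape))  # band-pass detail
            im = small
        pyr.append(im)  # coarsest Gaussian level last
        return pyr

    def gaussian_pyramid(im):
        pyr = [im]
        for _ in range(levels):
            im = down(im)
            pyr.append(im)
        return pyr

    L1 = laplacian_pyramid(img1.astype(float))
    L2 = laplacian_pyramid(img2.astype(float))
    Gm = gaussian_pyramid(mask.astype(float))
    # Step 940: merge the pyramids level by level.
    Lout = [g * l1 + (1 - g) * l2 for g, l1, l2 in zip(Gm, L1, L2)]
    # Step 950: collapse - upsample and add the next finer level.
    out = Lout[-1]
    for lap in reversed(Lout[:-1]):
        out = up(out, lap.shape) + lap
    return out
```

Because the mask is blurred at every pyramid level, the seam between the garment and the body image is blended over progressively wider bands at coarser scales.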

Poisson Blending is another possible image blending algorithm which is based on solving Poisson equations.