Title:
SYSTEM AND METHOD FOR PROCESSING PHOTOGRAPHIC IMAGES
Document Type and Number:
WIPO Patent Application WO/2017/177259
Kind Code:
A1
Abstract:
There is disclosed a method of processing an image comprising: capturing an image containing a user's face; analysing the image and generating data relating to predetermined features of the user's face; comparing the generated data against model data representative of features of the user's ideal face characteristics to determine differences between said generated data and said model data in at least one of a plurality of facial features; and altering the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences; displaying said altered image to the user; and storing said altered image.

Inventors:
MOSS STEVEN (AU)
Application Number:
PCT/AU2017/000087
Publication Date:
October 19, 2017
Filing Date:
April 12, 2017
Assignee:
PHI TECH PTY LTD (AU)
International Classes:
G06T19/00; G06K9/52; G06T5/50
Foreign References:
US20120299945A1 (2012-11-29)
US20150199558A1 (2015-07-16)
US20060132506A1 (2006-06-22)
Attorney, Agent or Firm:
DOHERTY, Gavin, Peter (AU)
Claims:
The claims defining the invention are:

1. A method of processing an image comprising:

capturing an image containing a user's face;

analysing the image and generating data relating to predetermined features of the user's face;

comparing the generated data against model data representative of features of the user's ideal face characteristics to determine differences between said generated data and said model data in at least one of a plurality of facial features; and

altering the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences; displaying said altered image to the user; and

storing said altered image.

2. A method according to claim 1, wherein the image is a digital photograph captured by a digital camera.

3. A method according to claim 1, wherein the image is a moving image.

4. A method according to claim 2 or claim 3, wherein the step of analysing the image comprises receiving the image and scanning the image to identify and analyse predetermined features of the user's face.

5. A method according to claim 4, wherein the predetermined features of the user's face include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.

6. A method according to claim 5, wherein the predetermined features of the user's face are measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.

7. A method according to claim 6, wherein upon generating a 2D or 3D model of the user's face, a numerical code is created based on landmark features of the user's face.

8. A method according to claim 7, wherein the numerical code is searchable on a remote database of stored 2D or 3D models of user faces to identify the user in the digital photograph.

9. A method according to claim 6, wherein the step of comparing the generated data against model data representative of features of the user's ideal face characteristics comprises comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal face characteristics.

10. A method according to claim 9, wherein the stored 2D or 3D model representative of the user's ideal face characteristics is generated by the user's previous actions in identifying preferred altered images of their face.

11. A method according to claim 10, wherein the stored 2D or 3D model representative of the user's ideal face characteristics is constantly updated based on feedback from the user regarding preferred altered images.

12. A method according to claim 9, wherein the step of determining differences between said generated data and said model data comprises identifying the presence of optical distortion within the image.

13. A method according to claim 9, wherein the step of comparing the generated data against model data representative of features of the user's ideal face characteristics comprises comparing the facial dimensions and proportions of any one or more of:

the user's facial shape including cheeks and chin;

the user's forehead height; the user's eyebrow shape;

the user's eye size and inter-eye distance and pupil position;

the user's nose shape;

the user's lip dimensions including length and height; and the user's skin clarity, texture and colour;

against the stored 2D or 3D model representative of the user's ideal face characteristics and determining differences between the relative dimensions and proportions of the features.

14. A method according to claim 13, wherein the determining of the differences between the generated data and the model data further comprises determining differences between the face colour and texture.

15. A method according to claim 14, wherein the step of altering the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences comprises adjusting the dimensions and proportions of the user's face in the image to remove the differences between the relative dimensions and proportions of the features and by adjusting the face colour and texture of the user's face in the image to substantially accord with the model data.

16. A method according to claim 15, wherein the step of displaying the altered image to the user comprises displaying a plurality of altered images to the user, each altered image being altered from the original image by a different percentage.

17. A method according to claim 16, wherein the plurality of altered images may also include mirror images of the original image and the altered image.

18. A method according to any one of claims 15 - 17, wherein the user is able to select each of the displayed altered images according to their preferences.

19. A method according to claim 18, wherein the most preferred altered image is stored by the user for retention.

20. A method according to claim 19, wherein the relative dimensions and proportions of the features of the most preferred altered image are used to update the model data representative of features of the user's ideal face characteristics.

21. A method according to any one of claims 1 - 20, wherein the steps of analysing the image, comparing the generated data, altering the image, displaying the altered image and storing the altered image are performed in real time.

22. A method according to any one of claims 1 - 20, wherein the steps of analysing the image, comparing the generated data, altering the image, displaying the altered image and storing the altered image are performed after the image has been captured and stored.

23. An image processing apparatus comprising:

an image capturing unit for capturing an image containing a user's face;

a processor for:

analysing the image and generating data relating to predetermined features of the user's face; comparing said generated data against model data representative of features of the user's ideal face characteristics; determining differences between said generated data and said model data in at least one of a plurality of facial features; and digitally altering the at least one of said plurality of predetermined features of the user's face in said image to minimise said differences;

displaying said digitally altered image to the user; and storing said digitally altered image.

24. An image processing apparatus according to claim 23, wherein the image capturing unit comprises a digital camera and the captured image is a digital photographic image.

25. An image processing apparatus according to claim 24, wherein the processor is a computer processor provided on the apparatus.

26. An image processing apparatus according to claim 25, wherein the processor is configured to receive the digital photographic image and comprises software for scanning the digital photographic image to identify and analyse predetermined features of the user's face.

27. An image processing apparatus according to claim 26, wherein the predetermined features of the user's face that are identified and analysed include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.

28. An image processing apparatus according to claim 27, wherein the predetermined features of the user's face are measured by placing mapping points on multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.

29. An image processing apparatus according to claim 28, wherein upon generating a 2D or 3D model of the user's face, a numerical code is created based on landmark features of the user's face.

30. An image processing apparatus according to claim 29, wherein the numerical code is searchable on a remote database of stored 2D or 3D models of user faces to identify the user in the digital photograph.

31. An image processing apparatus according to claim 27, wherein the step of comparing the generated data against model data representative of features of the user's ideal face characteristics comprises comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal face characteristics.

32. An image processing apparatus according to claim 31, wherein the stored 2D or 3D model representative of the user's ideal face characteristics is generated by the user's previous actions in identifying preferred altered images of their face.

33. An image processing apparatus according to claim 32, wherein the stored 2D or 3D model representative of the user's ideal face characteristics is constantly updated based on feedback from the user regarding preferred altered images.

34. An image processing apparatus according to claim 31, wherein the processor determines differences between said generated data and said model data including identifying the presence of optical distortion within the image.

35. An image processing apparatus according to claim 31, wherein the processor compares the facial dimensions and proportions of any one or more of:

the user's facial shape including cheeks and chin;

the user's forehead height; the user's eyebrow shape;

the user's eye size and inter-eye distance and pupil position;

the user's nose shape;

the user's lip dimensions including length and height; and the user's skin clarity, texture and colour;

against the stored 2D or 3D model representative of the user's ideal face characteristics and determining differences between the relative dimensions and proportions of the features.

36. An image processing apparatus according to claim 35, wherein the processor determines differences between the face tone, clarity, colour and texture as well as the eyes, lips and teeth.

37. An image processing apparatus according to claim 36, wherein the processor alters the image by adjusting the dimensions and proportions of the user's face in the image to remove the differences between the relative dimensions and proportions of the features and by adjusting the face tone, clarity, colour and texture of the user's face, including eyes, lips, and teeth in the image to substantially accord with the model data.

38. An image processing apparatus according to claim 37, wherein the processor displays a plurality of altered images to the user by way of a user interface, each altered image being altered from the original image by a different percentage.

39. An image processing apparatus according to claim 38, wherein the processor also displays mirror images of the original image and the altered images to the user.

40. An image processing apparatus according to any one of claims 38 - 39, wherein the user interface is configured to enable the user to select each of the displayed altered images according to their preferences.

41. An image processing apparatus according to claim 40, wherein the most preferred altered image is stored in the processor for retention.

42. An image processing apparatus according to claim 23, wherein the relative dimensions and proportions of the features of the most preferred altered image are used by the processor to update the stored model data representative of features of the user's ideal face characteristics.

43. A process for performing alterations to a photographic image of a face to facilitate beautification of the facial image, comprising:

establishing a database of facial images across a variety of individuals, each facial image being rated in accordance with a perceived level of attractiveness;

analysing each facial image and generating data relating to predetermined features of the face;

identifying consistencies between those predetermined features of the face across different levels of perceived attractiveness;

defining the dimension and range of measurements of the predetermined features which were perceived as being most attractive; and

creating a profile of an ideal beautiful face based on said defined dimensions and ranges of measurements.

44. A process according to claim 43, wherein the step of establishing a database of facial images comprises collecting digital photographs from individuals across a variety of age and ethnic groups.

45. A process according to claim 43, wherein the step of analysing each facial image and generating data relating to predetermined features of the face comprises receiving each digital photograph and scanning the digital photograph to identify and analyse predetermined features of the user's face.

46. A process according to claim 45, wherein the predetermined features of the user's face include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.

47. A process according to claim 46, wherein the predetermined features of the user's face are measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.

48. A process according to claim 43, wherein the step of identifying consistencies between those predetermined features of the face across different levels of perceived attractiveness comprises comparing each of the images and establishing a range of consistencies between features deemed as attractive and unattractive.

49. A process according to claim 43, wherein the step of defining the dimension and range of measurements of the predetermined features which were perceived as being most attractive comprises statistically analysing the measurements of the predetermined features to determine ranges of dimensions for each of the identified facial features.

50. A method of establishing a database of facial features representing attractiveness comprising:

creating a user platform that enables users to register with the platform by submitting a self-image of their face upon registration;

enabling members to post self-images to the platform for rating by other members of the platform;

posting third party images of faces to the platform for rating by members of the platform; and

analysing each of the self-images and third party images to determine characteristics of facial features considered as attractive and not attractive.

51. A method according to claim 50, wherein the step of registering with the platform also requires the user to rate various alterations of their self-image in accordance with attractiveness.

52. A method according to claim 51, wherein images are rated based on whether they are liked or disliked.

53. A method according to claim 50, wherein the third party images include face images from beauty and fashion sources.

54. A method of recording facial features of an individual in an image comprising:

directing said individual to capture an image of their face;

mapping a plurality of predetermined points on the captured image of the individual's face;

extracting dimensions and measurements of said predetermined points together with other relevant data associated with the individual; and

generating a model of the individual's head and face based on said extracted dimensions and measurements.

55. A method according to claim 54, wherein the step of directing the individual to capture the image of their face comprises providing visual and audio cues to guide the individual to position the camera at a predetermined location for taking the image.

56. A method according to claim 55, wherein the predetermined location of the camera includes a location to facilitate a front repose, lateral repose and a 90° in/out plane rotation of the individual's face.

57. A method according to claim 54, wherein the step of mapping a plurality of predetermined points on the captured image comprises placing a minimum of 72-101 mapping points around the individual's facial shape, including cheeks and chin; forehead; eyebrow; eyes; nose; and lips.

58. A method according to claim 57, wherein the step of extracting dimensions and measurements of said predetermined points comprises establishing facial shape of the individual's cheeks and chin; the individual's forehead height; the individual's eyebrow shape; the individual's eye size and inter-eye distance and pupil position; the individual's nose shape; and the individual's lip dimensions including length and height.

59. A method according to claim 58, wherein the other relevant data associated with the individual includes the individual's face tone, skin clarity, texture and colour.

60. A method according to claim 54, wherein the step of generating a model of the individual's head and face comprises generating a 2D or 3D model of the individual's face replicating the extracted dimensions and measurements of the predetermined points.

61. A method of determining the presence of optical distortion in an image of a user comprising:

establishing a predetermined model of the user's face;

mapping a plurality of predetermined features on the image of the user's face and extracting dimensions and measurements of said predetermined features; and

comparing the extracted dimensions and measurements of the predetermined features against the dimensions and measurements of those features on the predetermined model of the user's face; and

determining a presence of any optical distortion in the image based on any differences between the compared extracted dimensions and measurements of the predetermined features and the dimensions and measurements of those features on the predetermined model of the user's face being at or above a predetermined level.

62. A method according to claim 61, wherein the step of mapping a plurality of predetermined features on the image comprises placing a minimum of 72-101 mapping points around the user's cheeks and chin; forehead; eyebrow; eyes; nose; and lips and extracting dimensions and measurements of those predetermined features to establish facial shape of the user's cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; and the user's lip dimensions including length and height.

63. A method according to claim 62, wherein an optical distortion level is identified by establishing a difference in the dimensions and measurements of the predetermined features in the image and the dimensions and measurements of those features on the predetermined model of the user's face.

64. A method of determining accurate assessment of the attractiveness of an image comprising:

presenting an image to a user for assessment;

monitoring a plurality of predetermined facial features of the user as they assess the image;

extracting dimensions and measurements of said predetermined features of the user to determine the emotion of the user as they review the image;

receiving an assessment of attractiveness of the image from the user; and

determining whether the assessment of attractiveness is to be recorded or rejected based on the determined emotion of the user.

65. A method according to claim 64, wherein the image is presented to the user via an electronic interface.

66. A method according to claim 65, wherein the step of monitoring the plurality of predetermined facial features of the user comprises using a camera on the electronic interface to monitor the user.

67. A method according to claim 66, wherein the emotion of the user is determined by comparing the extracted dimensions and measurements of the predetermined facial features of the user against a predetermined model of facial features associated with an emotion.

68. A method according to claim 67, wherein the emotions are determined between any one of happiness, sadness, anger, fear, surprise, disgust and neutral.

Description:
SYSTEM AND METHOD FOR PROCESSING PHOTOGRAPHIC IMAGES

RELATED APPLICATIONS

The present application claims priority from previously filed Australian Provisional Patent Application 2016901367 filed 12 April 2016, the entire contents of which are incorporated herein by reference.

FIELD OF INVENTION

The present invention relates generally to an image processing system and method, and in particular, to a system and method for processing a photographic image of a user to provide beautification of the image in accordance with predetermined beauty preferences.

BACKGROUND OF THE INVENTION

The widespread adoption of digital devices, such as smart phones and the like, has resulted in a significant increase in the number of photographs and videos taken and stored across a variety of mediums. With the advent of digital photography and the availability of digital cameras in most smart phones, individuals are able to readily take photographs and videos of people and events, as they happen, with the photograph/video becoming immediately available for review and critique. Similarly, with an increase in the popularity and accessibility of social media sites such as Facebook®, Pinterest®, Snapchat® and the like, most photographs/videos taken can be easily downloaded and shared with a variety of people across the world as they are taken and reviewed. In figures released by Facebook® in May 2015, approximately 2 billion photographs were shared every day between approximately 1.4 billion registered users. Self-taken photographs or portraits, generally referred to as "selfies", are a particularly common form of photograph. Many individuals take selfies to capture themselves experiencing a moment either alone or with others, which can simply be posted onto social media platforms for sharing with the individual's contacts and the general public. This phenomenon has become so popular that there are dedicated extension devices provided for use with smart phones to facilitate such photographs, generally referred to as a "selfie stick".

However, for many individuals, the act of reviewing photographs of their own image generates negative emotions, inhibitions and an overwhelming sense of unease, which for many, can be a significant deterrent. As most individuals are accustomed to a specific aspect of their own image, namely that which is provided by a mirror or reflection, when they are presented with a photograph that is not an inverse image, they may consider their image to be ugly or distorted. This can result in the individual discarding the image or becoming depressed with others seeing the photograph. This is typically because for most people, their face is not symmetrical and by looking at their image in a mirror, the individual becomes accustomed to the way their image is presented. This is the case even in the presence of imperfections, such as one eyebrow being slightly higher than the other or how their smile may be slightly distorted or skewed. Therefore, when an individual sees a photograph of themselves that is not an inverted mirror image, they are seeing their face from an unfamiliar perspective, which for many can be strange and off-putting and an overall unpleasant experience.

In this regard, there exists a variety of different programs or applications that allow an individual to vary their electronic image in a photograph. For users of SNAPCHAT®, photographs taken by individuals are typically flipped to replicate the image a person is likely to see in a mirror, such that they are more likely to find their image to their satisfaction. Similarly, for individuals with skin blemishes and other scars or imperfections, a variety of photo editing software applications have been developed to enable an individual to remove or smooth such features. These software applications also have the ability to provide wrinkle reduction and face slimming as well as teeth whitening and other such edits to an individual's image. However, to use such software applications the user must apply the various filters individually and manually for each photograph and review and accept the changes based on what the end result looks like to them, which is generally a matter of personal taste, rather than any commonly accepted standard or beauty ideal.

A variety of scientific studies and theories exist regarding how beauty may be measured and what constitutes what can be universally considered as a "beautiful" face or profile. One such study that seeks to analyse and assess a face to apply a measure of beauty associated with such a face for surgical, cosmetic and/or identification purposes is described in US Patent No. 5,659,625 in the name of Stephen R. Marquardt, the contents of which are incorporated herein by reference. This patent describes a manner in which an individual's face can be analysed to identify the major anthropometric points on an image of the face to be compared against a mask or overlay system associated with an aesthetically ideal human face. Whilst such research is available, it is generally complicated to apply to images, such as photographs and video, taken in real time and is more directed towards providing an assessment of an individual's face against a perceived ideal human face for cosmetic or surgical intervention.

Thus, there is a need to provide an automated system for applying digital alteration to a photograph as it is taken in real time which seeks to beautify the individual's face and/or body in relation to an ideal model and which enables a degree of personalisation, taking into consideration the preferences of the individual and others, over time.

The above references to and descriptions of prior proposals or products are not intended to be, and are not to be construed as, statements or admissions of common general knowledge in the art. In particular, the following prior art discussion does not relate to what is commonly or well known by the person skilled in the art, but assists in the understanding of the inventive step of the present invention of which the identification of pertinent prior art proposals is but one part.

STATEMENT OF INVENTION

According to a first aspect, there is provided a method of processing an image comprising:

capturing an image containing a user's face;

analysing the image and generating data relating to predetermined features of the user's face;

comparing the generated data against model data representative of features of the user's ideal face characteristics to determine differences between said generated data and said model data in at least one of a plurality of facial features; and

altering the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences; displaying said altered image to the user; and

storing said altered image.

In an embodiment of this aspect, the image is a digital photograph captured by a digital camera. In another embodiment, the image is a moving image, such as a video.

The step of analysing the image may comprise receiving the digital photograph and scanning the digital photograph to identify and analyse predetermined features of the user's face. The predetermined features of the user's face may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.

In an embodiment, the predetermined features of the user's face are measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.
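
By way of illustration only, the following sketch shows how such mapping-point measurements might be computed. The patent does not disclose an implementation, so the landmark names and coordinates below are hypothetical, and Python is used purely for illustration.

```python
import math

# Hypothetical mapping points placed on an image of a face (pixel coordinates).
landmarks = {
    "left_eye":  (120.0, 150.0),
    "right_eye": (200.0, 150.0),
    "nose_tip":  (160.0, 210.0),
    "chin":      (160.0, 300.0),
}

def distance(a, b):
    """Euclidean distance between two mapping points."""
    return math.hypot(b[0] - a[0], b[1] - a[1])

def angle_at(vertex, p1, p2):
    """Angle in degrees formed at `vertex` by rays to p1 and p2."""
    a1 = math.atan2(p1[1] - vertex[1], p1[0] - vertex[0])
    a2 = math.atan2(p2[1] - vertex[1], p2[0] - vertex[0])
    return abs(math.degrees(a1 - a2)) % 360

# Distances and angles of the kind described as "generated data" above.
inter_eye = distance(landmarks["left_eye"], landmarks["right_eye"])
nose_angle = angle_at(landmarks["nose_tip"], landmarks["left_eye"],
                      landmarks["right_eye"])
print(f"inter-eye distance: {inter_eye:.1f} px, eye-nose-eye angle: {nose_angle:.1f} deg")
```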

In another embodiment, upon generating a 2D or 3D model of the user's face, a numerical code may be created based on landmark features of the user's face. The numerical code may be searchable on a remote database of stored 2D or 3D models of user faces to identify the user in the digital photograph.
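
A minimal sketch of how such a numerical code might be derived follows. The encoding scheme is not disclosed in the patent, so the quantised-ratio approach and the small landmark set used here are assumptions.

```python
import math

def dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def face_code(landmarks):
    """Pack scale-invariant landmark ratios into a reproducible numeric key."""
    # Normalising by the inter-eye distance makes the code independent of
    # image resolution, so the same face should yield the same code.
    scale = dist(landmarks["left_eye"], landmarks["right_eye"])
    ratios = (
        dist(landmarks["left_eye"], landmarks["nose_tip"]) / scale,
        dist(landmarks["nose_tip"], landmarks["chin"]) / scale,
    )
    return "".join(f"{round(r * 100):03d}" for r in ratios)

landmarks = {"left_eye": (120.0, 150.0), "right_eye": (200.0, 150.0),
             "nose_tip": (160.0, 210.0), "chin": (160.0, 300.0)}
code = face_code(landmarks)  # deterministic key for a remote database lookup
print(code)
```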

The step of comparing the generated data against model data representative of features of the user's ideal face characteristics may comprise comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal facial characteristics. The stored 2D or 3D model representative of the user's ideal face characteristics may be generated by the user's previous actions in identifying preferred altered images of their facial characteristics. The stored 2D or 3D model representative of the user's ideal face characteristics may be constantly updated based on feedback from the user regarding preferred altered images.

The step of determining differences between the generated data and the model data may comprise identifying the presence of optical distortion within the image. The differences determined between the generated data and the model data may comprise differences in the facial dimensions and proportions of any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour; as compared against the stored 2D or 3D model representative of the user's ideal face characteristics, with differences determined between the relative dimensions and proportions of the features.

The step of determining the differences between the generated data and the model data may further comprise determining differences between the face tone, clarity, colour and texture, including the colour of lips, teeth and eyes. In an embodiment, the step of altering the image by varying at least one of said plurality of predetermined features of the user's face to minimise said differences comprises adjusting the dimensions and proportions of the user's face in the image to remove the differences between the relative dimensions and proportions of the features and by adjusting the face tone, clarity, colour and texture of the user's face in the image to substantially accord with the model data. A plurality of altered images may be supplied to the user, each altered image being altered from the original image by a different percentage. The plurality of altered images may comprise images that are altered to remove 100% of the differences, 75% of the differences, 50% of the differences, 25% of the differences and 0% of the differences. The plurality of altered images may also include mirror images of the original image and the altered image. In a preferred form, the user is able to select each of the displayed altered images according to their preferences. The most preferred altered image may be stored by the user for retention. The relative dimensions and proportions of the features of the most preferred altered image may also be used to update the model data representative of features of the user's ideal face characteristics.
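
The percentage-based alteration described above could, for example, be realised as a linear interpolation between the measured landmarks and the stored ideal model. The sketch below is an assumed mechanism, not taken from the patent, using landmark dictionaries of the form shown in the earlier examples.

```python
def blend_toward_ideal(measured, ideal, percent):
    """Move each landmark `percent`% of the way toward its ideal position."""
    t = percent / 100.0
    return {name: (x + t * (ideal[name][0] - x), y + t * (ideal[name][1] - y))
            for name, (x, y) in measured.items()}

measured = {"nose_tip": (160.0, 210.0), "chin": (160.0, 300.0)}
ideal    = {"nose_tip": (160.0, 205.0), "chin": (160.0, 295.0)}

# The 100/75/50/25/0% variants offered to the user for selection.
variants = {p: blend_toward_ideal(measured, ideal, p) for p in (100, 75, 50, 25, 0)}

# A mirror-image variant simply reflects x about the vertical centre line:
# x_mirrored = image_width - x.
```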

According to a second aspect, there is provided an image processing apparatus comprising:

an image capturing unit for capturing an image containing a user's face;

a processor for:

analysing the image and generating data relating to predetermined features of the user's face;

comparing said generated data against model data representative of features of the user's ideal face characteristics; determining differences between said generated data and said model data in at least one of a plurality of facial features; and digitally altering the at least one of said plurality of predetermined features of the user's face in said image to minimise said differences;

displaying said digitally altered image to the user; and storing said digitally altered image.

In an embodiment of this aspect, the image capturing unit comprises a digital camera and the captured image is a digital photographic image. In another embodiment, the image capturing unit comprises a video camera and the captured image is a moving image. The processor may be a computer processor provided on the apparatus. The processor may be configured to receive the captured image and may comprise software for scanning the image to identify and analyse predetermined features of the user's face.

The predetermined features of the user's face that are identified and analysed may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour.

The predetermined features of the user's face may be measured by placing mapping points on multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face. Upon generation of the 2D or 3D model of the user's face, a numerical code may be created based on landmark features of the user's face. The numerical code may be searchable by the controller on a remote database of stored 2D or 3D models of user faces to identify the user in the captured image.

The controller may compare the generated data against model data representative of features of the user's ideal face characteristics by comparing the generated 2D or 3D model of the user's face against a stored 2D or 3D model representative of the user's ideal face characteristics. The stored 2D or 3D model representative of the user's ideal facial characteristics may be generated by the user's previous actions in identifying preferred altered images of their face. The stored 2D or 3D model representative of the user's ideal face characteristics may be constantly updated based on feedback from the user regarding preferred altered images.

The processor may determine differences between said generated data and said model data including identifying the presence of optical distortion within the image.

The processor may compare the facial dimensions and proportions of any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour; against the stored 2D or 3D model representative of the user's ideal face characteristics, and determine differences between the relative dimensions and proportions of the features.

The processor may further determine differences between the face tone, clarity, colour and texture, including the lips, teeth and eyes, of the image and the model image.

The processor may alter the image by adjusting the dimensions and proportions of the user's face in the image to remove the differences between the relative dimensions and proportions of the features, and by adjusting the face tone, clarity, colour and texture, including the lips, teeth and eyes, of the user's face in the image to substantially accord with the model data.

The processor may display a plurality of altered images to the user by way of a user interface, each altered image being altered from the original image by a different percentage. The plurality of altered images may comprise images that are altered to remove 100% of the differences, 75% of the differences, 50% of the differences, 25% of the differences and 0% of the differences. The processor may also display mirror images of the original image and the altered images to the user.

The user interface may be configured to enable the user to select each of the displayed altered images according to their preferences and rate their preferred image for attractiveness. The most preferred altered image may be stored in the processor for retention. The relative dimensions and proportions of the features of the most preferred altered image may also be used by the processor to update the stored model data representative of features of the user's ideal face characteristics.

In accordance with another aspect, there is provided a method for processing an image of a human face to facilitate improved beautification of said face, the method comprising:

taking a digital image containing at least a partial image of a user's face;

analysing said at least partial image of the user's face to collect and identify major anthropometric features associated with a user's face;

comparing said major anthropometric features associated with a user's face against major anthropometric features associated with a scientifically determined ideal human face;

digitally modifying said major anthropometric features associated with the user's face to more closely replicate the major anthropometric features associated with a scientifically determined ideal human face.

The step of taking a digital image may comprise taking both a full frontal image of the user and a lateral image of the face of the user.

The step of analysing the at least partial image of the user's face may comprise applying a facial recognition and mapping algorithm to identify major anthropometric features associated with a user's face. The major anthropometric features may comprise any one or more of facial shape including cheeks and chin; forehead height; eyebrow shape; eye size and inter-eye distance; nose shape and lip shape, length and height.

The step of digitally modifying the major anthropometric features associated with a user's face may comprise applying any one or more of Face Shaping; Skin Smoothing & Removal of Imperfections; Face Contouring & Highlighting (make-up); Wrinkle Reduction; Crow's Feet Removal; Nose Sharpening & Shaping; Wider Eyes; Eye Enlargement; Eye Brightening; Red Eye Reduction; Teeth Whitening; Pouting Lips; Soft Chin; and Chin Lifting.
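
These named modifications could be organised as a simple pipeline of image-to-image operations applied in sequence. In the sketch below the operation bodies are placeholders: the patent names the modifications but not their implementations, so the function names and the stand-in image type are invented for illustration.

```python
from typing import Callable, List

Image = list  # placeholder type; a real system would use a pixel array

def skin_smoothing(img: Image) -> Image:
    # Placeholder: e.g. an edge-preserving blur over detected skin regions.
    return img

def teeth_whitening(img: Image) -> Image:
    # Placeholder: e.g. brightening pixels inside the detected mouth region.
    return img

def apply_modifications(img: Image,
                        steps: List[Callable[[Image], Image]]) -> Image:
    """Apply the selected beautification steps in order."""
    for step in steps:
        img = step(img)
    return img

beautified = apply_modifications([0, 1, 2], [skin_smoothing, teeth_whitening])
```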

The step of digitally modifying the major anthropometric features may comprise a further step of presenting multiple image options to the user for review. The multiple image options may be generated by applying a combination of different modifications to the image for selection by the user. The user may select the most preferred option of the multiple options and the selection parameters may be stored for application the next time an image is taken.

According to yet another aspect, the present invention comprises a process for performing alterations to a photographic image of a face to facilitate beautification of the facial image, comprising:

establishing a database of facial images across a variety of individuals, each facial image being rated in accordance with a perceived level of attractiveness;

analysing each facial image and generating data relating to predetermined features of the face;

identifying consistencies between those predetermined features of the face across different levels of perceived attractiveness;

defining the dimension and range of measurements of the predetermined features which were perceived as being most attractive; and

creating a profile of an ideal beautiful face based on said defined dimensions and ranges of measurements.

In an embodiment of this aspect, the step of establishing a database of facial images comprises collecting digital photographs from individuals across a variety of age and ethnic groups.

The step of analysing each facial image and generating data relating to predetermined features of the face may comprise receiving each digital photograph and scanning the digital photograph to identify and analyse predetermined features of the user's face. The predetermined features of the user's face may include any one or more of: the user's facial shape including cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; the user's lip dimensions including length and height; and the user's skin clarity, texture and colour. The predetermined features of the user's face may be measured by placing mapping points on said multiple predetermined features of the image of the user's face and measuring distances and angles between the various mapping points to generate a 2D or 3D model of the user's face.

In an embodiment of this aspect of the invention, the step of identifying consistencies between those predetermined features of the face across different levels of perceived attractiveness comprises comparing each of the images and establishing a range of consistencies between features deemed as attractive.

The step of defining the dimension and range of measurements of the predetermined features which were perceived as being most attractive may comprise statistically analysing the measurements of the predetermined features to determine ranges of dimensions for each of the identified facial features.
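
As a hedged illustration of this statistical step, the sketch below derives a per-feature range and mean from the measurements of the highest-rated faces. The feature names, ratings and rating cut-off are invented for the example; the patent does not specify them.

```python
from statistics import mean

# Hypothetical database rows: (normalised feature measurements, rating out of 10).
samples = [
    ({"inter_eye": 0.46, "lip_height": 0.08}, 9),
    ({"inter_eye": 0.44, "lip_height": 0.09}, 8),
    ({"inter_eye": 0.38, "lip_height": 0.05}, 3),
]

def ideal_ranges(samples, min_rating=8):
    """Range and mean of each measurement among highly rated faces."""
    top = [m for m, rating in samples if rating >= min_rating]
    return {f: (min(m[f] for m in top),
                max(m[f] for m in top),
                mean(m[f] for m in top)) for f in top[0]}

for feature, (lo, hi, avg) in ideal_ranges(samples).items():
    print(f"{feature}: {lo}-{hi} (mean {avg:.3f})")
```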

According to yet another aspect of the present invention there is provided a method of establishing a database of facial features representing attractiveness comprising:

creating a user platform that enables users to register with the platform by submitting a self-image of their face upon registration;

enabling members to post self-images to the platform for rating by other members of the platform;

posting third party images of faces to the platform for rating by members of the platform; and

analysing each of the self-images and third party images to determine characteristics of facial features considered as attractive and not attractive.

In an embodiment of this aspect of the invention, the step of registering with the platform may also require the user to rate various alterations of their self-image in accordance with attractiveness. The images may be rated based on whether they are liked or disliked.

The third party images may include facial images from beauty and fashion sources.

According to yet another aspect of the present invention there is provided a method of recording facial features of an individual in an image comprising:

directing said individual to capture an image of their face;

mapping a plurality of predetermined points on the captured image of the individual's face;

extracting dimensions and measurements of said predetermined points together with other relevant data associated with the individual; and

generating a model of the individual's head and face based on said extracted dimensions and measurements.

In an embodiment of this aspect, the step of directing the individual to capture the image of their face comprises providing visual and audio cues to guide the individual to position the camera at a predetermined position for taking the image.

The predetermined position of the camera may include a position to facilitate a front repose, lateral repose and a 90° in/out plane rotation of the individual's face.

The step of mapping a plurality of predetermined points on the captured image may comprise placing a minimum of 72-101 mapping points around the individual's facial shape including cheeks and chin; forehead; eyebrow; eyes; nose; and lips.

The step of extracting dimensions and measurements of said predetermined points may comprise establishing the facial shape of the individual's cheeks and chin; the individual's forehead height; the individual's eyebrow shape; the individual's eye size and inter-eye distance and pupil position; the individual's nose shape; and the individual's lip dimensions including length and height. The other relevant data associated with the individual may include the individual's skin clarity, texture and colour.

The step of generating a model of the individual's head and face may comprise generating a 2D or 3D model of the individual's face replicating the extracted dimensions and measurements of the predetermined points.

According to yet another aspect of the present invention there is provided a method of determining the presence of optical distortion in an image of a user comprising:

establishing a predetermined model of the user's face;

mapping a plurality of predetermined features on the image of the user's face and extracting dimensions and measurements of said predetermined features; and

comparing the extracted dimensions and measurements of the predetermined features against the dimensions and measurements of those features on the predetermined model of the user's face; and

determining a presence of any optical distortion in the image based on any differences between the compared extracted dimensions and measurements of the predetermined features and the dimensions and measurements of those features on the predetermined model of the user's face being at or above a predetermined level.
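
One way the threshold test described above might look in code is sketched below. The feature names, measurements and the 10% threshold are assumptions chosen for the example, not values disclosed in the patent.

```python
def has_optical_distortion(measured, model, threshold=0.10):
    """Flag distortion when any feature deviates from the model by >= threshold."""
    for feature, model_value in model.items():
        deviation = abs(measured[feature] - model_value) / model_value
        if deviation >= threshold:
            return True
    return False

# Stored model of the user's face vs. measurements from a close-range selfie,
# where perspective distortion typically exaggerates central features.
model    = {"inter_eye": 80.0, "nose_to_chin": 90.0}
measured = {"inter_eye": 95.0, "nose_to_chin": 91.0}
print(has_optical_distortion(measured, model))  # True: inter-eye is ~19% off
```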

In an embodiment of this aspect, the step of mapping a plurality of predetermined features on the image comprises placing a minimum of 72-101 mapping points around the user's cheeks and chin; forehead; eyebrow; eyes; nose; and lips and extracting dimensions and measurements of those predetermined features to establish facial shape of the user's cheeks and chin; the user's forehead height; the user's eyebrow shape; the user's eye size and inter-eye distance and pupil position; the user's nose shape; and the user's lip dimensions including length and height.

An optical distortion level may be identified by establishing a difference in the dimensions and measurements of the predetermined features in the image and the dimensions and measurements of those features on the predetermined model of the user's face.

According to yet another aspect of the present invention there is provided a method of determining accurate assessment of the attractiveness of an image comprising:

presenting an image to a user for assessment;

monitoring a plurality of predetermined facial features of the user as they assess the image;

extracting dimensions and measurements of said predetermined features of the user to determine the emotion of the user as they review the image;

receiving an assessment of attractiveness of the image from the user; and

determining whether the assessment of attractiveness is to be recorded or rejected based on the determined emotion of the user.

In an embodiment of this aspect, the image is presented to the user via an electronic interface. The step of monitoring the plurality of predetermined facial features of the user may comprise using a camera on the electronic interface to monitor the user.

The emotion of the user may be determined by comparing the extracted dimensions and measurements of the predetermined facial features of the user against a predetermined model of facial features associated with an emotion.

The emotions may be determined between any one of happiness, sadness, anger, fear, surprise, disgust and neutral.
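
A minimal sketch of how such emotion-gated ratings might be recorded follows. The classifier itself is out of scope here, and the choice of which emotional states invalidate a rating is an assumption made for illustration.

```python
EMOTIONS = {"happiness", "sadness", "anger", "fear",
            "surprise", "disgust", "neutral"}

# Assumed policy: ratings given in these states are treated as reliable.
ACCEPTED_STATES = frozenset({"neutral", "happiness", "surprise"})

def record_rating(rating, detected_emotion):
    """Return the rating if it should be stored, or None if rejected."""
    if detected_emotion not in EMOTIONS:
        raise ValueError(f"unknown emotion: {detected_emotion!r}")
    return rating if detected_emotion in ACCEPTED_STATES else None

print(record_rating(7, "neutral"))  # 7    -> recorded
print(record_rating(2, "anger"))    # None -> rejected as unreliable
```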

BRIEF DESCRIPTION OF THE DRAWINGS

The invention may be better understood from the following non-limiting description of preferred embodiments, in which:

Fig. 1 is a diagram of a system for use in the manipulation, display and storage of photographic images of individuals in real-time;

Fig. 2 is a flow chart depicting a method for users to create an account with the system of Fig. 1 in accordance with a first embodiment;

Fig. 3 is a flow chart depicting a method for initialising a beautification algorithm for a user in accordance with a first embodiment;

Fig. 4 is a screen shot depicting a step of the method of Fig. 3;

Fig. 5 is a screen shot depicting a step of the method of Fig. 3;

Fig. 6 is a screen shot depicting a step of the method of Fig. 3;

Fig. 7 is a flow chart depicting an embodiment by which a facial database is created and applied in the facial imaging software of the present invention;

Fig. 8 is a flow chart depicting an embodiment of how the system and method of the present invention is employed to apply beautification to an image taken by a camera of the present invention;

Fig. 9 is a diagrammatical depiction of 2D or 3D models of a face and facial features which can be generated for each user in accordance with the present invention;

Fig. 10 is a flow chart depicting a manner in which a beautification algorithm can be applied for a specific user; and

Fig. 11 is a flow chart depicting a manner in which image correction can be applied to a user's image based on the beautification algorithm of Fig. 10.

DETAILED DESCRIPTION OF THE DRAWINGS

Preferred features of the present invention will now be described with particular reference to the accompanying drawings. However, it is to be understood that the features illustrated in and described with reference to the drawings are not to be construed as limiting on the scope of the invention.

The system and method of the present invention will be described below in relation to its application for use in the manipulation, display and storage of photographic images of individuals in real-time. It will be appreciated that the system and method of the present invention may also be applicable for use with videos and other imaging technologies where a user's face is present either alone or with other individuals.

Referring to Fig. 1, an embodiment of a system 10 in accordance with the present invention is depicted. The system 10 will be referred to as a photographic image processing platform. The system 10 generally includes a network 14 that facilitates communication between a host service 11 and one or more remote services 16. The system 10 also facilitates communication of the host service 11 and remote services 16 with one or more third party servers 17, as will be discussed in more detail below.

The host service 11 is depicted as comprising one or more host servers 12 that communicate with the network 14 via wired or wireless communication, as will be appreciated by those skilled in the art. The one or more host servers 12 are configured to store a variety of information collected by each of the remote services 16 as well as to exchange data with third party servers 17 via the network 14. The host servers 12 are also able to house multiple databases necessary for the operation of the methods and systems of the present invention and for the storage of information collected from the individual users of the remote services 16. The servers 12 may comprise any of a number of servers known to those skilled in the art and are intended to be operably connected to the network 14 so as to operably link to the plurality of remote services 16. The servers 12 typically include a central processing unit or CPU that includes one or more microprocessors and memory operably connected to the CPU. The memory can include any combination of random access memory (RAM) and storage media such as magnetic hard disk drives and the like.

In a preferred embodiment, the distributed computing network 14 is the internet or a dedicated mobile or cellular network in combination with the internet, such as a GSM, CDMA, EDGE, LTE, HSDPA/HSPA, EV-DO or WCDMA network. Other types of networks such as an intranet, an extranet, a virtual private network (VPN) and non-TCP/IP based networks are also envisaged.

The remote services 16 are configured for use by users who are registered with the host service 11. Each remote service 16 is typically in the form of a smart phone, tablet or similar portable computing device that is configured with a dedicated software application and camera technology to enable a user to take photographs and review the resultant modified images for transmission to other users 16, third party servers 17 and/or the host service 11, in real time. The manner in which this is achieved will be discussed in more detail below. The remote service 16 may also be configured such that it is able to communicate with the host service 11 via a mobile web browser, thereby obviating the need for the remote service 16 to download software for this purpose.

The third party servers 17 may include existing social media platforms and the like that facilitate the download, display, sharing and storage of photographs taken and modified by the users, such as Facebook®, SNAPCHAT®, and the like. The host service 11 is able to communicate with the third party servers 17 via the network 14 to obtain specific information about the photographs/images posted on the third party servers in accordance with the system and method of the present invention.

The memory of the servers 12 may be used for storing an operating system, databases, software applications and the like for execution on the CPU. As will be discussed in more detail below, in a preferred embodiment the database stores data relating to each registered user of the system 10, as well as information relating to settings and preferences identified by the user, and other users, over time.

As discussed above, each user is connected to the network 14 by way of their remote service 16. The remote service 16 stores one or more programs that include executable code to facilitate operation of a software application or "app", which is configured to provide an interface between the remote service 16 and the host service 11, as well as to control operation of the camera device present on the remote service 16. Such an arrangement enables communication therebetween, as well as between other remote services 16, depending upon the type of user and the overall requirements of the system.

In one embodiment of the present invention, the functionality of the remote service 16 is provided by the type of software application that is installed in the local non-volatile storage of the remote service 16 and which is executed by the internal processor of the remote service 16. The software application may be downloaded to the remote service 16 via the network 14 from the host service 11. Alternatively, the software application may be purchased or otherwise downloaded through a software application provider, such as iTunes®, Google® Play and the like, for storage on the remote service 16. In this regard, the remote service 16 may provide a means for a user to collect and transfer information to the host server 12 via the network 14 automatically, by transmitting data collected by the remote service 16, in a form that can be readily transmitted between the remote service 16 and the host service 11, each time the user takes a photograph.

In one embodiment of the present invention, in order for a user to obtain authorisation to use the system 10 of the present invention, the user is required to register with the host service 11 in accordance with the method 20 as set out in Fig. 2.

In step 21, a user, via their remote service 16, downloads a software application for use of the present system and method. The user may download the software application directly from the host service 11 by way of a guest user interface, or may obtain the software application from an on-line software application store, such as Google® Play or iTunes®. The user may be charged a small fee to obtain the software application, or the software application may be supplied to the user free of charge, with the user being charged based on their use of the software application, such as the number of images processed by the application or by way of an equivalent assessment of use. In another embodiment, the user may be charged a yearly membership fee upon activation of the software application or may be provided with an initial service having limited functionality which can be upgraded upon purchase of the full version of the software application. In some embodiments access may be free. Irrespective of the manner in which the user is permitted access to the software application of the present system, upon access to the host service 11, the software application is downloaded into the user's remote service 16.

In step 22, the user creates their login details. The login details include the preferred means by which the user accesses the application, which will be stored with the host service to generate a profile that is to be stored in the memory of the host servers 12 for each user. In this regard, the user may be able to log in to use the software application by two preferred methods. Firstly, the user may log in by way of their social media site of preference, such as FACEBOOK®, such that their social media account will be linked to their account generated with the host service 11 of the present application. The other preferred login method will be via a conventional login name and login password which can be generated by the user. Typically, the login name will be the user's preferred email address and they will be required to generate a password to complete the connection. Should the user forget their password, facilities will be provided for retrieving forgotten passwords or generating new passwords.

In step 23, the user is required to create their profile by entering important personal details to assist the application in generating a beautification algorithm specific for the user. The software application will typically direct the user to a dedicated screen to enable the user to enter their details such as name, address, contact details, gender and date of birth. Depending on the age or ethnicity of the user, a message may appear that explains that the application is optimised for certain ages or ethnicities and that for some age groups the level of beautification applied by the present invention may vary. The user may be required to enter other details such as their height, ethnicity or race, and any other details considered relevant to assist the software application to effectively and accurately generate a beautification algorithm to beautify the user's image. The user may also register their social networks with the host service 11, such that the host service is able to access data from the user's social network pages. In this regard, when the software application is initiated, the user is able to register with the host service 11 through their Facebook® profile.

In step 24, the user will be asked to set their preferences for using the application. A preferred embodiment of this step will be described in more detail below, but generally involves the user, under instruction from the application, taking a series of photographs of their face and head in various predetermined positions to enable the application to perform facial mapping on the user. This will then be used by the application to generate the beautification algorithm that creates a number of beautified images of the user, for the user to review. The user will then select the most preferable of the images. These algorithm settings will then be saved for the user, as the user's beautification algorithm.

In step 25, the user completes the registration process and creates their account with the host service 11, whereby the user's preferences will be stored in their account for ongoing reference and analysis.

It will be appreciated that following creation of the account in method 20, each time the user logs in to the application they will be directed to a camera screen for capturing photos and videos in which the user's image will be beautified in accordance with that user's specific beautification algorithm. This may occur instantly and in real time as the user takes the image, or may occur following capture of the image.

The method 30 for setting the specific beautification algorithm containing the user preferences and for calibrating the user's application for use is depicted in Fig. 3.

In step 31, the user is prompted to take a photograph of their face and head in a frontal position (Repose & Smiling). This is achieved by the software application directing the user to a screen depicting the device's camera, which will be set to front-facing mode. The software application will include instructions on how the user should take the photograph of their face front-on. This will include instructions via audio or visual cues to indicate that their face is in the correct position. Once the correct facial position or alignment has been achieved, a photo will be taken automatically. An example screen shot depicting this step is shown in Fig. 4.

In step 32, the application will then use the photograph to recognise and place a minimum of 72-101 markers or mapping points on the user's face and record measurements of facial features. This is the facial mapping process.
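
By way of a non-limiting illustration, the measurement step might be sketched as follows in Python; the landmark indices and feature names below are assumptions for illustration only and do not form part of the disclosed method:

    import numpy as np

    # Illustrative sketch only: the disclosure does not fix a landmark
    # scheme, so the point indices and feature names are assumed.
    def measure_features(landmarks: np.ndarray) -> dict:
        """landmarks: (N, 2) array of mapping points, N in the 72-101 range."""
        def dist(a, b):
            return float(np.linalg.norm(landmarks[a] - landmarks[b]))

        def angle(a, b, c):
            # Angle at point b formed by the segments b-a and b-c, in degrees.
            v1, v2 = landmarks[a] - landmarks[b], landmarks[c] - landmarks[b]
            cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
            return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

        return {
            "inter_eye_distance": dist(36, 45),  # assumed outer eye corners
            "nose_width": dist(31, 35),          # assumed nostril edges
            "jaw_angle_deg": angle(4, 8, 12),    # assumed jawline points
        }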

In step 33, the application will apply beautification level adjustment to the photograph taken in step 31 and present a variety of frontal view beautified images to the user for review. By way of example, in one embodiment, this may include providing eight versions of the same frontal image, with four of those images presented in "Camera View Mode" and the other four images presented in "Mirror Flip Mode" for review. Three images in each of the two sets will have a different percentage of beautification level applied to the user's face by the beautification algorithm; the remaining two images will have no beautification applied and will be kept as the original image and the mirror of the original image. The user will be presented with these images in a random manner as depicted in Fig. 5. The user is able to manually adjust the beautification modification percentage applied to the image in accordance with their personal preferences, by increasing or decreasing an adjustment bar. This setting may be applied to all images or only a selection of images. Changes to the face will be made in real time as the user moves the adjustment bar, and will include adjustments to the geometry of the head and face as well as to the facial characteristics. Such functionality allows the user to adjust the beautification level based on what they prefer and consider as ideal, as well as how they perceive themselves as being more or less attractive.
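
As a non-limiting sketch, the generation of this review set might be expressed as follows, where apply_beautification is an assumed stand-in for the beautification engine (a callable taking an image and a level, with level 0.0 returning the original unchanged) and the percentage levels mirror those described in relation to step 74 below:

    import random
    import numpy as np

    def review_set(image: np.ndarray, apply_beautification,
                   levels=(0.0, 0.35, 0.70, 1.00)) -> list:
        """Builds the eight review images: one per beautification level in
        Camera View Mode, plus the same four in Mirror Flip Mode."""
        variants = []
        for level in levels:
            out = apply_beautification(image, level)
            variants.append(("camera", level, out))
            variants.append(("mirror", level, out[:, ::-1]))  # horizontal flip
        random.shuffle(variants)  # presented to the user in a random manner
        return variants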

In step 34, the user is required to rate the images in order of preference. Each of the images will be displayed in groups of four in a 2x2 grid. Each image can be viewed individually in larger dimensions by selecting the image and clicking an expand button. The user is then able to select their preferences. If the user does not like some or all of the images, there will be an ability for the user to return to the previous screen to take new images. The user will be required to rate their images by giving each image a number from '1' to '8', where '1' would be an image they like most and '8' would be an image they like least.

In step 35, each of the rated images is saved into the host server database together with the modification data surrounding the preferences selected by the user. The modification data may include the features modified and the level of modification applied to those features. This information will then be collated by the host server and processed by the host server's Machine Learning Engine to analyse the data and improve the beautification adjustment system over time. The most preferred image, that given a rating of '1', is the only image saved onto the user's device camera roll. The data derived from the user simply making these adjustments and selecting their preferences (favourite image) is critical: it helps the system understand the perception of beauty, adds to improving or fine-tuning the Facial Profile of an Ideal Beautiful Face (FPIBF), and ultimately assists in defining universal beauty. Once the preferences have been selected by the user, a 2D or 3D model of the user's head and face is created based on these preferences, as depicted in Fig. 9. This 2D or 3D model contains the relevant dimensions and proportions of the seven most important facial features as discussed below, and is then recorded in the Host Service database against each registered user's profile. Upon completion of step 35, the user is then redirected back to step 31 to repeat the process with photographs of the head and face of the user in a lateral or side position (Repose and Smiling), as depicted in Fig. 6. Upon completion of step 35 for the side picture, the account is created and the application becomes calibrated and ready for use.

Following creation of the account, when the user next logs in to use the application they will be automatically directed to the camera screen and it is through this screen that photos and videos can be captured with the image of the user beautified instantly and in real time.

The user's device camera will be set to front-facing mode by default and all photos and videos taken via this screen will apply the beautification algorithm preferred by the user instantly and in real time (i.e. through camera pre-image capture) once the subject's face is recognised by the camera. In this regard, beautification will be applied only to the user's face and not to anyone else found in the photo, and the beautification algorithm will be applied whether or not the user clicks on the capture button. As discussed above, this may occur in real time at pre-image capture, or following image capture when the image is stored. Multiple photos can be taken without needing to review or confirm the image. The user remains on the camera screen once they have taken a photo (as a beautified image).

Once an image is captured, the photo will be saved in the host server's library. A copy of the photo will also be saved in the Album of the device camera roll. Multiple versions of the same photo will not be saved (i.e. the eight beautified versions), only the one that represents the preferred image as selected by the user.

It will be appreciated that the system and method as described above provides a simple means for a user to download the software application and to simply customise and use the system. This then provides a system whereby every photograph taken after the initial registration and installation process has an automatic beautification algorithm applied to it in accordance with the user's face type and preferences. This may occur in real time and instantly, as soon as the camera recognises or tracks the user's face, or may occur after the image is taken and stored.

In this regard, as the system is able to analyse each user's facial features to a significant degree as part of the facial image assessment of the software application of the present invention, the system is able to collect the data and compare the collected data with stored data for recognition purposes. In this regard, the present system is able to recognise the user within a photograph even if the user is part of a group photograph. Such facial recognition is possible due to the ability of the software application to identify major anthropometric points or features of the user's face, irrespective of the user's age, skin colour or sex. This can also be achieved if the user is wearing make-up or a hat or, in the case of males, if the user has grown a beard. In group photo situations, the software application of the present system will only function to manipulate the facial image of the user to whom the phone belongs, and not apply any manipulation to the other members of the group. If the group photo is shared with someone else who is a registered user of the present invention and is downloaded onto that user's mobile phone, the software application will be able to identify and edit the image of that user, even though that user did not take the photograph.

Using such face recognition technology, the present invention is able to link and record the facial feature dimensions of the subject in its database, so it can be referenced to a photo database within the host service, for face recognition purposes. This will allow the subject's face and facial features to be recognised in a group photo or video, even if there are many individuals present.

In some instances, it may not be possible for the face recognition technology to recognise the user's face. In such circumstances, the user will be prompted to tag their face within the image. Once tagged, the system of the present invention will then be able to apply a beautification algorithm to the user's face in accordance with their preferences. In such circumstances, the image preference selections made by the user for tagged images will not be recorded in the database.

It is well established that machine learning methods require a large amount of data to develop a well-defined and useful conclusion. Because the host service of the present invention is able to employ methods that build a large database of an extensive variety of facial images with diversified attractiveness levels, with each of those facial images also rated for aesthetic appeal by a diverse range of men and women of various ages and races, it is able to provide a means by which the host service machine learning engine can be trained. This means is built from an analysis of data about the consistencies between characteristics of the facial features found in images that are perceived as attractive and unattractive.

In this regard, the host service machine learning engine can analyse the following data:

• Generated images:

  • Profile images;

  • Camera images;

  • User selfies posted; and

  • 3rd party posted images.

• Accompanying the generated images, the user's:

  • Dimensions of the head and facial features;

  • Skin characteristics: facial tone, colour and texture;

  • Facial angle;

  • Facial expression;

  • Current and consequent image preference selections, including:

    • Other users' preference selections of the same image; and

    • Variances in selection based on the environments in which the image was captured;

  • Current and consequent facial attractiveness ratings (beauty scores), including:

    • Other users' facial attractiveness ratings of the same image; and

    • Variances in rating based on the environments in which the image was captured; and

  • Pre-determined beautification levels altered and set at registration, and consequently altered again at face recalibration.

In performing this analysis, it is necessary to identify which image of the user is considered to be attractive and which image is considered unattractive. This can be done in the following manner:

• Selected as preferred image - as attractive;

• Not selected as a preferred image - as less attractive or unattractive;

• Given a 5-star rating - High Beauty Score - as attractive;

• Given a low star rating - Low Beauty Score - as unattractive.

As part of the analysis of the above images and data, the machine learning engine will measure seven facial features of individuals in images that have been found as attractive and non-attractive, as will be discussed in more detail below.

In order to identify attractiveness and measure facial beauty, being able to access a database of images that are rated in accordance with determined levels of attractiveness is of considerable value. This enables the host service 11 to identify and gain an insight into the consistencies or key facial features inherent in attractive faces and discover rules that govern universal facial beauty. Without having images rated in this manner, there will be no easy way of identifying the dimensions or proportions of the seven facial features consistently present in a human face that is perceived as attractive. Further, in order to get conclusive results, it is also essential that the images are rated by an extensive number of individuals across a variety of sexes, ages and races.

In this regard, most existing databases of facial images that have been rated for attractiveness are limited and have considerable drawbacks in their usefulness across a broad range of users. The images that have been rated for attractiveness are not only few in number but also:

• Do not have sufficient variability in facial attractiveness;

• Come from limited ethnic groups and ages;

• Are confined to frontal views with neutral expressions only;

• Have been captured in a constrained environment; and

• Have been rated for attractiveness by a small pool of individuals.

Similarly, methods that have been used in the past to rate facial images for the study of facial beauty have been very labour intensive, time consuming and costly. Such methods have traditionally involved establishing a group of people, often volunteers, to perform this rating exercise by scanning through the images and giving each image an attractiveness score. Thus, there have been limited studies conducted into this aspect of beauty and as a result, no conclusive results obtained on the definition of beauty.

Thus, it is considered that the systemised methods employed by the Host Service for having a large database of facial images rated in accordance with beauty preferences will eliminate the limitations inherent in existing studies. By providing an image rating system, two systemised methods that form part of the normal functionality of the present Host Service application are possible. These are:

1) Image Preference Selections

As previously discussed, for images captured by the Host Service, a set of eight versions of the same image with varying beautification levels will be displayed. Each user will then be requested to select their preferred image. This will be done for:

• Profile images;

• Camera images; and

• User selfies posted.

As each user will be selecting their image preferences based on the image in which they find the subject most attractive, the system of the present invention is able to analyse the preferred images selected to determine any correlation between those images and, in doing so, establish rules governing attractiveness based on the dimensions and proportions of the facial features consistently present in those images.

2) Rating for attractiveness

For captured images, each user will also be requested to rate their selected preferred image for attractiveness. The attractiveness ratings will be governed by a star rating system from 1 to 5. In such a rating system, five stars are awarded to an image that is considered as being very attractive, with one star indicating an image that is not attractive. This rating system will be used for:

a. Profile images;

b. User selfies posted; and

c. 3rd party posted images.

The host service will then be able to analyse the ratings applied to images and find the correlation between those images to identify any rules that govern beauty:

• For images for which five stars have been given, the dimensions and proportions of the facial features consistently present in those images that make the subject attractive are able to be analysed and determined.

• For images for which one star has been given, the dimensions and proportions of the facial features consistently present in those images that make the subject unattractive are able to be analysed and determined.

The process forms part of the normal functionality of the present system and provides a simple, cost effective and automated system for generating beauty data. As the process forms part of the normal functionality of the system, the host service 11 does not require volunteers or paid staff to rate images for this purpose. The large database of individuals of various sexes, ages and races will be rating a large and varied database of facial images with diversified attractiveness levels. Such extensive and varied data captured by the host service 11 enables a deep insight into beauty and enables the identification of the key facial features that define beauty, consistently present in images that have been perceived as attractive, based on a large pool of people's opinions.
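
A minimal sketch of how such ratings could be aggregated is given below; the dictionary layout is an assumption for illustration, with ratings mapping image identifiers to 1-5 star scores and features mapping the same identifiers to measured facial proportions:

    from statistics import mean

    def partition_and_summarise(ratings: dict, features: dict):
        """Splits images into attractive (5-star) and unattractive (1-star)
        groups and returns the mean of each measured proportion per group."""
        def summary(ids):
            if not ids:
                return {}
            keys = features[ids[0]].keys()
            return {k: mean(features[i][k] for i in ids) for k in keys}

        attractive = [i for i, stars in ratings.items() if stars == 5]
        unattractive = [i for i, stars in ratings.items() if stars == 1]
        return summary(attractive), summary(unattractive)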

It is well understood that there are six basic emotions experienced by humans: happiness, sadness, anger, fear, surprise, and disgust. The host service 11 is able to determine the presence of such emotions when the user is reviewing images, tracking the user's face for emotions in order to gauge the true emotional sentiment of the user towards the image when rating for preference or attractiveness. This is achieved by using a camera to view the user's facial features in real time as they are reviewing the images. This assists in ensuring the accuracy of the data collected and in gauging whether or not the user truly feels positive towards the image they have selected.

As the host service is able to map and track the facial features of the user's face, the system is able to detect, in real time, the micro-expressions displayed by the user. The user's lips, mouth, eyebrows, eyelids, jaw, chin, forehead, pupils and the overall facial expression of the face tell a story. By analysing these micro-expressions, the present invention is able to detect the true emotional sentiment of the user towards the image. For example, if the micro-expressions displayed when viewing an image correspond to happiness when the image is selected, then the selection the user has made would be considered as true.
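
A simple sketch of that check follows; the emotion label is assumed to come from an external micro-expression classifier, which the disclosure does not name:

    BASIC_EMOTIONS = {"happiness", "sadness", "anger",
                      "fear", "surprise", "disgust"}

    def selection_is_genuine(detected_emotion: str) -> bool:
        """Treats a preference selection as true sentiment only when the
        micro-expression displayed at selection corresponds to happiness."""
        if detected_emotion not in BASIC_EMOTIONS:
            raise ValueError("unknown emotion label")
        return detected_emotion == "happiness"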

In conventional analysis methods, facial images with a single specific facial expression were rated for attractiveness one by one. As a result, definitive results on attractiveness could not be ascertained because the rating was for one image with one specific facial expression. As the facial features are affected by facial expressions, these expressions may vary the attractiveness level, resulting in a lack of accuracy in the obtained results.

The methods adopted by the host service 11 will provide the ability to display multiple images of the same user. Thus, the user who is rating an image for preference and attractiveness may be rating multiple images of the same individual at various facial expressions and angles. This means that, with deep machine learning, the system of the present invention is able to conduct a more reliable analysis and identify the hidden clues or facial feature dimensions and proportions consistently present in images that are found attractive and unattractive.

In an embodiment of the present invention, the Host Service 11 also comprises a social media platform in the form of a software application whereby users can access images saved on the host server database and rate those images in terms of attractiveness via their smartphone or similar device. The host service is able to utilise this source of facial image rating to create a large facial database for analysis and for increasing the effectiveness of the facial image processing technique.

This system 40 is depicted in the flow chart of Fig. 7. Through the Host Service 11 providing a social media service or attractiveness rating software application, the number of images available for analysis and comparison purposes can be significantly increased, which enables the image processing engine used by the Host Service to be optimised and adapted as face types change over time.

This system 40 is able to take head/face analysis information from the individual user registration process 41 (as described above in relation to method 30) based on the user's own profile pictures. As discussed, facial images are captured as part of the user profile set up process. As this process forms part of the normal functionality of the system, it will be a pre-requisite to using the present invention. Capturing these images and saving them onto the facial database will be easy, cost effective and automated. Users will be willing to provide these images as it is a necessity in order to effectively register with the host service to beautify their photos.

Similarly, any photographic posts 43, such as selfies or other photographs generated by a registered user on the social media site hosted by the Host Service 11, can also be captured and analysed by the Host Service system for this purpose. These images will typically be captured and uploaded to the Host Server database by the user. This process also forms part of the normal functionality of the Host Service. Although it will not be a requirement for users to upload their "selfies", it is envisaged that users will upload their "selfies" onto the Host Service social media platform, as users of such platforms enjoy having their photos "liked" by other users. By posting "selfies" on the Host Service social media platform, in addition to users having their photos liked, they will also be able to gauge which variation of the "selfie" they have posted is most "liked" and the attractiveness score received for that image. So there is an incentive for the user to continue to post updated images on the Host Service's social media platform.

As well as registered user posts, any third party posts 42 made to the social media site can also be utilised by the Host Service to generate a large facial database of images. These images will be obtained from third party companies and uploaded onto the Host Service database. Obtaining images from third party companies such as modelling agencies, beauty and fashion magazines and beauty pageant organisers, who already have a database of images of beautiful individuals, would normally be a difficult task. However, as the social media site of the present invention will provide an incentive and a service that creates mass benefit and value to third parties, obtaining such images will become much easier.

Irrespective of the source 41, 42, 43 of the picture, at step 44 the Host Service 11 is able to record the images on a facial database stored on the one or more servers 12, for further analysis.

In step 45, measurements of each of the photographs captured and stored on the Facial Database in step 44 are able to be performed to generate data for use by the facial imaging software of the present invention. These measurements may comprise various head measurements as well as measurements associated with the seven facial features as will be discussed below. These measurements can be mapped, and the various dimensions and proportions of skin colour, tone and texture of the individual faces can be extracted from the photographs, analysed and recorded in the facial database.

At step 46, the data captured and recorded is then able to be used as inputs into the beautification algorithms employed by the facial imaging software, for use in ensuring that the facial imaging software is continually learning and updating as beauty preferences change. By creating such a facial database, the present invention will be able to develop a large, quality database of facial images, facial feature dimensions and characteristics that will be extensive in terms of: the number of images accessible; variability in facial attractiveness; variability in ages and ethnic groups; facial angles and expressions; the environments in which the images have been captured; and the number of images of individuals already classified as attractive.

Due to the methods mentioned above, the provision of such a facial database of facial dimensions and facial characteristics will provide a quality, unique and valuable database of facial images not available previously. Such a database will contain a large number of images of individuals of various sexes, ages and races with diversified attractiveness levels that will form the basis through which the self-learning facial imaging software will function.

As previously discussed, the head/face analysis engine that forms part of the facial image processing technology applied by the software application of the present invention is able to accurately and precisely detect, track, trace and map the head and face by placing a minimum of 72-101 mapping points or markers on the individual's facial features, irrespective of whether the image is a still or moving picture. In a preferred form, the beautification engine of the present system and method is able to analyse the following seven main facial features:

1. Facial shape, including cheeks and chin;

2. Forehead height;

3. Eyebrow shape;

4. Eye size, inter-eye distance and pupil position;

5. Nose shape;

6. Lips, including length and height, and teeth if the mouth is open; and

7. Skin clarity, texture and colour.

As described above, all dimensions will be recorded for both frontal (Repose & Smiling) and lateral (Repose & Smiling) poses, at 90° of in-plane and out-of-plane rotation. In this regard, the position of all seven facial features, plus the user's teeth, will be able to be tracked in real time through camera pre-image capture.

As discussed in relation to the method 30 of Fig. 3, the ability to track, map, extract, analyse and record the various features of a user's face is the key to identifying the level of attractiveness of a specific user prior to making any comparison as to how the user's specific facial geometry compares against an ideal beautiful facial profile. Thus, accurate facial recognition and facial feature detection is important not only for beautification purposes but also to help identify the distortion degree, if any, in the relative positions and dimensions of the facial features in a captured image due to optics.

As discussed in relation to method 30 of Fig. 3, prior to users of the present invention having access to the system of the present invention, they are required to register and create a profile. Through this process, various details about the user are captured including name, gender, DOB, ethnicity, etc. Most importantly, however, the user is requested to capture their own images so that their face can be appropriately mapped. This is done by the user generating images captured at: front repose and smiling; lateral repose and smiling; and at 90° in-plane and out-of-plane rotation. Before the user captures their image, it is critical to ensure that there is proper facial position and alignment with respect to the camera. If this is not the case, there is a risk of the camera capturing inaccurate measurements and applying incorrect modifications to the user's face. The manner in which this is addressed is depicted in Fig. 8 as method 50.

In a preferred embodiment, this is achieved by way of the device providing audio and visual cues to the user to guide and instruct them to position the camera at the correct distance from the user's head and to position their head at the correct angles with required facial expressions. Once there is proper and correct alignment, the image of the user will be automatically captured.
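
As one possible, non-limiting realisation of this gating logic, the device could keep polling an assumed face tracker and release the shutter only when the reported pose falls within tolerance; the distance range and angle threshold below are illustrative values, not values taken from the disclosure:

    from dataclasses import dataclass

    @dataclass
    class FacePose:
        distance_cm: float  # estimated camera-to-face distance
        yaw_deg: float      # left/right head rotation
        pitch_deg: float    # up/down head rotation

    def ready_to_capture(pose: FacePose,
                         distance_range=(30.0, 45.0),
                         max_angle_deg=5.0) -> bool:
        """True when the face is at the correct distance and angles, at
        which point the photo is taken automatically."""
        near_enough = distance_range[0] <= pose.distance_cm <= distance_range[1]
        aligned = (abs(pose.yaw_deg) <= max_angle_deg
                   and abs(pose.pitch_deg) <= max_angle_deg)
        return near_enough and aligned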

In step 51, at pre-image capture and in real time, a minimum of 72-101 mapping points are placed around the seven facial features and the periphery of the head to enable the facial and head dimensions to be measured and extracted. At this time, data associated with the face colour, skin tone and texture of the user are also extracted and analysed in this pre-image capture step.

In step 52, this data is extracted from the camera and analysed, and a search of the Host Service database is conducted to match the registered user Facial Profile stored for that user in step 53. This step 53 is achieved by recognising that every face has numerous distinguishable landmarks that make up its facial features. These include features such as: the distance between the eyes; the width of the nose; the depth of the eye sockets; the shape of the cheekbones; and the length of the jaw line. These features are unique and act as an identifier, much like a fingerprint. In this step 53, having identified the overall facial structure of the user by mapping, measuring and extracting the landmark points of the user's face, a numerical code is created representing the face in the database. This code is then used for facial recognition purposes and is saved on the Host Service database for searching.
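
The disclosure does not specify how the numerical code is constructed. Purely as an illustrative assumption, it could be a hash over quantised, scale-invariant landmark ratios, so that the same face reproduces the same searchable key:

    import hashlib
    import numpy as np

    def face_code(landmarks: np.ndarray, precision: int = 2) -> str:
        """Sketch only: landmark indices are assumed, and quantisation is a
        simplification standing in for a tolerance-aware matching scheme."""
        inter_eye = np.linalg.norm(landmarks[36] - landmarks[45])  # assumed
        nose_tip = landmarks[30]                                   # assumed
        # Normalise each landmark's distance from the nose tip by the
        # inter-eye distance so the code is independent of image scale.
        ratios = tuple(round(float(np.linalg.norm(p - nose_tip) / inter_eye),
                             precision) for p in landmarks)
        return hashlib.sha256(repr(ratios).encode()).hexdigest()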

In step 54, upon finding a facial match with the registered user facial profile, the dimensions extracted from the user's facial image are compared against the user's stored 2D or 3D model data.

In step 55, the facial data is fed to and analysed by the Host Service machine learning engine to compare the data against an ideal beautiful face profile to determine those features that differ from the range considered beautiful.

In step 56, the Host Service facial beautification engine applies a beautification algorithm to correct the image in accordance with perceived differences between the measured features and those features considered beautiful. To take into consideration any effects of optical distortion, the user's facial angle, the distance of the face to the camera lens and the image light, shadows, colour tones, etc. are also analysed and measured.

As will be discussed in more detail below, the facial dimensions and proportions of each of the seven facial features in the specific image captured will be compared with the User's Facial Profile Data containing the specific user's facial geometry and skin characteristics captured at registration. Any differences in the relative proportions of each of the facial features (facial geometry), including the face colour and texture, are identified to enable the degree of optical distortion in facial features and characteristics to be identified and measured.

The method undertaken by the Host Service machine learning engine in step 55 to compare the user data against an ideal beautiful face profile is depicted in Fig. 10.

As previously discussed, an underlying principle of the present invention is that facial beauty is measurable and the key method in discovering universal beauty is to identify the consistencies of facial feature dimensions and soft biometrics found and inherent in what are universally considered to be attractive faces.

Although much research has been done in the past, conclusive results have been difficult to obtain due to the limited and poor quality databases of facial images available and the inadequate methods that were used to capture the aesthetic preferences of those rating the faces as attractive or not attractive.

The machine learning methods of the present invention provide a means for discovering a succinct set of rules that identify attractiveness and conclusively define universal facial beauty in two processes: ongoing analysis of data and images; and discovery of consistencies and rules to build an ideal beautiful face.

In step 60, the Host Service machine learning engine is able to access the stored database data and images collected from registered members. This information may be sourced directly from individual member profiles or the social media site(s) hosted by the Host Service, where the consistencies in the facial features are identified in images which have been assessed as attractive or not attractive. In this step, the Host Service machine learning engine also accesses existing scientific research information. In this regard, there have been extensive studies conducted to discover universal beauty. Although these studies have not been conclusive, they have identified a variety of features and mathematical applications for assessing the beauty of an image. Such research is able to provide details on the dimensions and proportions of the seven facial features consistently inherent in individuals' faces that were found to be attractive and unattractive. By understanding the measurements of each facial feature inherent in faces rated as attractive, the present Host Service machine learning engine can then compare these measurements to those of a registered user, find the differences, set new parameters for the facial features to correspond with the Facial Profile of an Ideal Beautiful Face (FPIBF) as further described below in relation to step 65, and then apply these new parameters to beautify the user's image.

In this step, the Host Service machine learning engine can also access other scientific research, such as the Marquardt Mask, which is a male and female mask developed by researcher Stephen Marquardt. This research considers that, by using such masks as a template and comparing the major anthropometric points of a face against the mask, the attractiveness level of an individual in the image can be measured. The closer the facial dimensions and proportions fit the mask, the more attractive the individual will be perceived. So, in order to beautify an image of a registered user of the present invention, that person's facial proportions and dimensions can be morphed to fit the mask as closely as possible. A variety of other studies have also been conducted in analysing facial beauty. In step 60, each of these methods can be accessed to generate as much data as possible to identify facial beauty. These models may include: the Golden Ratio; Vertical Thirds and Horizontal Fifths; Neoclassical Canons; Averageness; and Facial Symmetry.

In step 61, the Host Service machine learning engine is able to constantly filter through the ever-expanding database of information and analyse and measure features that are considered both unattractive and attractive. By utilising each user profile, the Host Service machine learning engine can simply obtain this information, as the user's selected profile image has been considered attractive and those images rejected by the user during the registration process have been rated as less attractive. Similarly, the Host Service machine learning engine is able to analyse and measure all features considered to be attractive and not attractive from the camera images taken, profile images stored, pictures posted by third parties and selfies posted by the individual members, to generate a constantly updating database of measurements and features. In step 62, this information is analysed further to identify and discover a succinct set of consistencies inherent in the seven facial feature dimensions and skin characteristics of those faces that have been found attractive and unattractive. This step includes taking measurements of the geometry of the face and head in the image including: facial shape, including cheeks and chin; forehead height; eyebrow shape, including length, width and height; eye size, inter-eye distance and pupil position; nose shape, including length and width; and lips and teeth, including length and height. Other measurements may also be taken, such as: ear length to interocular distance; ear length to nose width; mid-eye distance to interocular distance; mid-eye distance to nose width; mouth width to interocular distance; lips-chin distance to interocular distance; lips-chin distance to nose width; interocular distance to eye fissure width; interocular distance to lip height; nose width to eye fissure width; nose width to lip height; eye fissure width to nose-mouth distance; lip height to nose-mouth distance; length of face to width of face; nose-chin distance to lip-chin distance; and nose width to nose-mouth distance.
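
A short sketch of these proportion calculations follows, with the raw distances assumed to have been extracted already from the mapping points; only a handful of the listed ratios are shown and the key names are illustrative:

    def facial_proportions(d: dict) -> dict:
        """d maps distance names to measured values, e.g. d['ear_length'],
        d['interocular'], d['nose_width'], d['lip_height'],
        d['face_length'], d['face_width']."""
        return {
            "ear_to_interocular": d["ear_length"] / d["interocular"],
            "ear_to_nose_width": d["ear_length"] / d["nose_width"],
            "nose_width_to_lip_height": d["nose_width"] / d["lip_height"],
            "face_length_to_width": d["face_length"] / d["face_width"],
        }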

As automatic and accurate facial feature mapping is critical for the extraction of facial features, facial recognition, and facial beauty and expression analysis, the Host Service machine learning engine will use a variety of existing software applications to perform this function. Extracting the geometry of the face will be based on the positions of the face region and of the pupils and eyes. The face is mapped by placing 72-101 landmarks (mapping points) around the periphery of the head and the facial features. The landmarks are extracted, facial measurements are calculated and finally the geometry of the face is generated. The facial characteristics in the image, including facial colour, tone, clarity and texture, eye colour, lip colour and teeth colour, are also extracted. For extracting these facial characteristics, a BLBP software model and a PCANet software model may be employed.

In step 63, the measurements taken are statistically analysed to discover rules and/or relationships within the data.

In step 64, these rules or relationships can be used to define the dimensions or proportions of the seven facial features and the skin colour, tone and texture that are required to be present in an ideal beautiful face in accordance with the collected data. This can be determined across all environments and camera angles.

In step 65, using the data of consistencies identified and the results of the experiments derived from the conventional methods of facial beauty analysis, the Host Service machine learning engine will build a set of rules that will represent a Facial Profile of an Ideal Beautiful Face (FPIBF). This will define the dimensions of the head and of each of the facial features and skin characteristics that are innate to a FPIBF, and will be used as a pre-determined target and a basis to predict facial attractiveness, i.e. the closer the user's face fits to that of a FPIBF, the more attractive they would be considered. So, by understanding what facial feature dimensions and proportions are perceived as attractive and then by subtly morphing the user's face so it fits as closely as possible to those facial feature dimensions and proportions, it is possible to beautify and create flattering images of the user.
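
One possible, non-limiting representation of the FPIBF is as a target value and tolerance per proportion, with attractiveness predicted from how closely a user's measured proportions fit those targets; the scoring formula below is an assumption for illustration only:

    from statistics import mean

    def beauty_score(user: dict, fpibf: dict) -> float:
        """fpibf maps each proportion name to (target, tolerance); user maps
        the same names to measured values. Returns a 0-1 score, where 1.0
        means a perfect fit to the FPIBF."""
        deviations = [abs(user[name] - target) / tolerance
                      for name, (target, tolerance) in fpibf.items()
                      if name in user]
        if not deviations:
            return 0.0
        return 1.0 / (1.0 + mean(deviations))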

In step 66, this data will form the mathematical basis from which a beautification algorithm is automatically developed and then used by the Host Service facial beautification engine to apply correction to the image in accordance with perceived differences between measured features and those features considered beautiful.

Having defined the Facial Profile of an Ideal Beautiful Face (FPIBF) through machine learning, the Host Service Beautification Engine applies the morphing (beautification) of the image, in real time, in order to enhance the facial appeal of the user's face in the source image. The manner in which this is done will be described below in relation to Fig. 11, and it is generally achieved by subtly morphing the image to a level at which it will be perceived as attractive (irrespective of the viewer), whilst maintaining a natural appearance that is as close as possible to the original source image.

In step 70, the dimensions and proportions of each of the seven facial features in the image taken are compared generally to the facial features of the FPIBF obtained from the Host Service machine learning engine to determine the level of morphing, or degree of changes, to be performed on the user's facial geometry and skin characteristics to fit closely with the FPIBF.

In step 71, the beautification algorithm is applied to the facial dimensions and proportions of the user's face in the image; the distances between the variety of facial feature locations are extracted, and the differences in the relative positions and dimensions of each of the facial features (facial geometry), including the face colour and texture, between the image and the FPIBF are identified. New target landmark/mapping points are identified, and the degree of adjustments to be made to the image is calculated and set. The new target is set based on the FPIBF so that the image fits as closely as possible to it, while still maintaining a natural appearance and an unmistakable similarity to the original user's source image. Depending upon the level of morphing required, the morphing could be applied by adjusting the dimensions of the seven facial features against the FPIBF (step 72), by adjusting the user's skin tone, colour and texture against the FPIBF (step 73), or by a combination of both.

In step 72, the user's image, represented by the User's Facial Profile Data, is beautified by morphing the geometry of the face and head in the image and morphing the facial feature dimensions and distances towards the new pre-determined target. This will include changing some or all of the dimensions or proportions of the: facial shape, including cheeks and chin; forehead height; eyebrow shape, including length, width and height; eye size, inter-eye distance and pupil position; nose shape, including length and width; and lips and teeth, including length and height. The percentage by which each of the facial feature dimensions and distances is morphed is dependent on the differences identified between the user's facial dimensions and proportions in the specific image captured and the FPIBF; the larger the difference, the larger the percentage change that may be performed.
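
A minimal sketch of this geometric step is the linear interpolation below: each landmark moves a given percentage of the way from its measured position toward the new target derived from the FPIBF. The subsequent pixel warp between the two landmark meshes (e.g. a piecewise-affine warp) is outside the scope of the sketch:

    import numpy as np

    def morph_landmarks(source: np.ndarray, target: np.ndarray,
                        percent: float) -> np.ndarray:
        """source, target: (N, 2) landmark arrays; percent: 0-100, where 0
        leaves the original geometry and 100 fits the target exactly."""
        t = np.clip(percent / 100.0, 0.0, 1.0)
        return (1.0 - t) * source + t * target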

In step 73, the facial characteristics in the image are also able to be morphed to better fit the FPIBF. Just as the facial feature landmark points have been identified through facial mapping, the facial skin region is also able to be identified. The facial texture, tone and colour will be improved by using multi-level median filtering to: remove imperfections in the skin such as scars and acne; improve skin colour and texture; and whiten teeth. Optical rectification may be done simultaneously with facial beautification/morphing, or it may be performed separately.
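
As a non-limiting sketch of multi-level median filtering, successive median filters of increasing window size could be applied only within an assumed boolean skin mask, so that detail outside the skin region is untouched; the window sizes are illustrative:

    import numpy as np
    from scipy.ndimage import median_filter

    def smooth_skin(image: np.ndarray, skin_mask: np.ndarray,
                    sizes=(3, 5, 9)) -> np.ndarray:
        """image: (H, W, 3) array; skin_mask: (H, W) boolean array marking
        the facial skin region identified through facial mapping."""
        out = image.astype(np.float32)
        for size in sizes:
            # Filter each colour channel independently at this level.
            filtered = median_filter(out, size=(size, size, 1))
            # Keep the filtered result only inside the skin region.
            out = np.where(skin_mask[..., None], filtered, out)
        return out.astype(image.dtype)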

In step 74, a number of beautified images with varying beautification percentage levels are presented to the user via their camera after the image has been captured. In one embodiment, eight images will be displayed to the user for selection. These images will consist of: one original; one with 35% morphing; one with 70% morphing; and one with 100% morphing, with each of these four images also displayed as a mirror image. It will be appreciated that other percentage morphing levels may also be displayed and the number of images presented to the user in step 74 may vary.

In step 75, the user is able to manually rank the images from most favourite to least favourite, numbering them from most preferred (1) to least preferred (8). The data derived from the user simply selecting their preferences (favourite image) is critical: it helps the system understand the perception of beauty, adds to improving or fine-tuning the Facial Profile of an Ideal Beautiful Face (FPIBF), and ultimately assists in defining universal beauty.

In step 76, the images are then saved in the user's profile with the Host Server and their preference data is saved and fed to the Host Server machine learning engine for further analysis. In this regard, the user's selection forms part of the ongoing fine tuning of the Facial Profile of an Ideal Beautiful Face (FPIBF).

In step 77, the pre-determined beautification algorithm, specific to the user's preferences, is updated and set, and future images captured are beautified according to this saved algorithm. All data is fed to the Machine Learning Engine and analysed, and the user's selections form part of the ongoing fine-tuning of the Facial Profile of an Ideal Beautiful Face (FPIBF). If the user's current selection is inconsistent with, or varies from, their last selection, the pre-determined beautification algorithm specific to the user's preferences is updated and set, and future images captured are beautified accordingly.

It will be appreciated that optical distortion of the user's facial features and skin colour and texture may occur in captured images depending on factors such as: the distance of the user's face to the camera lens; the angle of the user's face at which the image was captured; the facial expressions; and the environment under which the image was captured. Some or all of these situations may have a substantial optical distortion effect on the relative positions and dimensions of the user's facial features and skin colour and texture, including creating an illusion that the ears are of different sizes, the nose appearing bulbous, the head or chin appearing pointed, and the skin colour and texture appearing uneven and coarse.

The present invention seeks to address this during the pre-image capture step, as referred to in Fig. 8.

At step 51, during pre-image capture, the user's facial features and the dimensions of the head are instantly, and in real time, tracked, and mapping points are placed around the facial features of interest, as previously discussed.

At step 53, in real time, the measurements of the facial features are extracted and matched to the User Facial Profile Data captured at registration. Once matched, that specific user's face is recognised. There may be circumstances where the technology will be unable to recognise the subject's face and, as a result, mapping and beautifying the subject's face will not be possible. The circumstances under which this may happen are: the resolution of the image is poor or the image is out of focus; poor lighting or excessive shadowing means the key facial markers cannot be automatically identified; the user is not present in the photo; there are no individuals present in the photo; or the subject's face cannot be detected. In such situations, the subject will be prompted to tag their face in the image. Once tagged, the beautification will be applied to the tagged face. Allowing subjects to tag their face, however, may potentially corrupt or skew the host service database as a result of a subject tagging someone else's face. So, for the purposes of maintaining an accurate host service database, the image preference selections made by the subject for the tagged photo will not be recorded on the host service database.

Rather than creating a 2D or 3D model of the head and face in step 54, the details captured about the user's face in step 51 are compared with the User's Facial Profile Data to identify the degree of optical distortion. The present invention will then compare the dimensions and proportions of each of the seven facial features to identify the distortion level, if any, in the relative positions and dimensions of the facial features, including changes in the face colour and texture.
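
Sketched minimally, the distortion level could be expressed as the relative deviation of each proportion measured in the captured image from the same proportion in the User's Facial Profile Data captured at registration; the dictionary layout is an assumption:

    def distortion_levels(captured: dict, profile: dict) -> dict:
        """Both dicts map proportion names to values. Returns the signed
        relative deviation per feature, e.g. 0.12 meaning the captured
        value is 12% larger than the registered profile value."""
        return {name: (captured[name] - profile[name]) / profile[name]
                for name in profile if name in captured}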

Once the distortion level is ascertained, it is fed to the Host Service Beautification Engine, where the level of changes or morphing to the user's face that needs to be performed to correct the optical distortion is determined.

During the morphing process of Fig. 11, the Host Service Beautification Engine is able to understand the applicable distortion level and take steps to correct such distortion back to original or improved (beautified) proportions and dimensions. In this regard, in step 71, when the algorithm is applied, any optical distortion of the image is automatically corrected by morphing the dimensions and proportions of the facial features in the image and applying various optical filters to the image using multi-level median filtering.

The Beautification Engine of the present invention is able to calculate the degree of adjustments to make and automatically, instantly and in real time apply an algorithm to beautify the subject's face. This is done initially using user feedback and scientific research as its basis and, as further data is gathered on the definition of beauty, the engine will learn from this data to further develop, improve or fine-tune the beautification algorithm. Beautification will be able to be performed on males and females of most ages and races, in Repose and Smiling facial expressions and at various facial angles.

As previously discussed, through using intelligent machine learning technology and feeding the system with more data, the software application is able to learn, refine, and automatically develop its own improved algorithms that will ultimately produce more accurate beautified images of users. During the initial calibration of the camera, this is achieved by recording which of the set of provided manipulated photographs the user prefers, and recording the preferences. As the ideal of the present invention is to automatically provide pictures that have been beautified to a level of satisfaction for the user, this should be done without requiring the user to make any further selections.

In this regard, the system may continue to provide multiple preference options to the user until such time as the user's preferences become learned by the software system. This may occur when the preferences are consistent and the system can track the level of consistency to be above a pre-determined target. At such a time, providing multiple options for the user to select from is no longer necessary and the system may function as a conventional camera with the beautification engine functioning in the background.
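
A minimal sketch of such a consistency test follows; the 80% threshold and the notion of a recent-session window are illustrative assumptions, not values taken from the disclosure:

    from collections import Counter

    def preferences_learned(recent_top_picks: list,
                            threshold: float = 0.8) -> bool:
        """recent_top_picks: the beautification levels rated '1' over recent
        sessions. Returns True once a single level dominates the recent
        selections, at which point the multiple-option step can be dropped."""
        if not recent_top_picks:
            return False
        _, count = Counter(recent_top_picks).most_common(1)[0]
        return count / len(recent_top_picks) >= threshold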

Nevertheless, over time preferences may change as an individual ages and gains or loses body weight. As such, the beautification process applied automatically by the software system may no longer be preferred by the user. In such circumstances, the user may be able to manually override the beautification process to alter the image manipulation. Once this has been triggered, the learning process will be reset and the user will be provided with multiple options to choose from until the level of consistency is once again detected.

Each time a user saves or changes a preference, the data associated with the preference is saved and stored in the host service 11 or in the device 16 to be repeated and applied at the next opportunity.

To further improve the data collection, a survey may be provided by the host service 11 for completion by one or more of the registered users. Such a survey may provide users with multiple images and request each user to rate the images by selecting their preference, in return for credits or similar incentives.

To provide a further means for collecting data, the host service 11 may provide a social network service to facilitate interaction between members. In such a situation, a user (User 1) may provide a rating of a photograph and will have an option to meet the person (User 2) whose photo they have just rated. In the image selection process, every time User 1 selects an image from the various options presented to them, they will also be given an option to click on a "like" button if they want to meet User 2. Thus, if User 1 clicks on "like", User 2 will receive a notification that User 1 liked User 2's photo and would like to meet them. User 2 then also has to rate User 1's photo in order to qualify to meet. If User 1 and User 2 like each other's photos, they will be introduced to each other. Introductions will only be possible through the host service if photos are rated and both users like each other's photos. Each user will also be able to see how many people liked their picture.

As previously discussed, the various conditions and environments under which a photograph may be taken typically have a direct effect on the image of the person in that photograph. Poor photographic conditions generally produce poor personal images. However, as part of the machine learning capabilities of the present invention, the software application associated with the present invention will analyse all aspects of the photo, including aspects such as the lighting, contrast, colour, shadows, facial angles and the proximity of the subject to the camera lens. The software application will take these aspects into consideration during the photo manipulation step and will learn from the correlation of the beautified images the user selects of themselves in the various conditions, circumstances and environments to define:

1. The level, degree and type of beautification that was most liked by the user specific to those varying lighting conditions and environments; and

2. The level, degree and type of beautification to apply to an image in the future, specific to such variable conditions and environments, that will result in a perfectly beautified image of the user.

The system of the present invention may comprise an embodiment that enables users to save their photographs and videos to a storage library that may be hosted by the host service 11. The user can then further manipulate the photograph within the library, with each manipulation being recorded by the system to facilitate further machine learning about the user's preferences.

In an embodiment of the present invention, the device and system may be able to collect micro-expressions from the user. This will include real-time analysis of the facial expressions of the user during selection of their photograph options. These micro facial expressions can be used to ascertain the emotions generated by the user after viewing their image, which can be used to more actively gauge the user's preferences. Similarly, when users are required to rate photographs of other users, their micro-expressions can be used to provide an indication of whether the user likes or dislikes a photo.

It will be appreciated that the present invention could also be applied to the application of makeup. Although the art of makeup application is well defined, many women, especially young women applying makeup for the first time, do not truly know the correct method of applying it. Many apply too much makeup or not enough, or apply makeup in the wrong places, resulting in an application that does not achieve the desired effect or may be perceived as unattractive by an observer. There are currently many software applications on the market that automatically render makeup onto photos through a selection process, and others that provide access to videos and tutorials teaching individuals how to apply makeup correctly. However, none of the available systems provides a means by which users are guided to apply makeup correctly and most effectively, so as to create the desired effect that will be perceived positively by both the person applying the makeup and an observer.

In accordance with an embodiment of the present invention, this can be achieved by the following process:

1. Taking a photographic image of a user's face;

2. Analysing the user's facial features and generating a 2D or 3D model of the user's head and face based on the analysed features in accordance with the previously discussed methods;

3. Assessing the user's face and comparing the generated 2D or 3D model against a Facial Profile of an Ideal Beautiful Face (FPIBF) to determine those features and aspects of the user's face that require modification through the application of makeup, in accordance with the previously described methods;

4. Based on the assessment of all aspects of the individual's face against the FPIBF, making various recommendations on the style of makeup that would most suit the user's face, so as to align the features of the user's face with the equivalent features of the FPIBF;

5. Such an application of the present invention will also make recommendations on aspects such as the type of makeup to use, the brands of makeup to use, and the colours of makeup that will most suit the user's face;

6. The software application may then automatically render the individual's face, as supplied in the photograph of step 1, with makeup in various recommended styles and display it on the screen as a still image. The user will then have an option to select the look they most prefer. Once a selection is made, the user's face will be rendered with makeup and displayed on the screen in real time. They will be able to move their face whilst still having their face rendered with makeup in real time.

7. Once a final choice is made, the software application will then display two images of the user in real time, side by side on the one screen. One image will show the makeup applied and the other will show the user with no makeup. In this regard, if the user applies their makeup correctly, both real-time images should look the same; that is, the screen displays what the user looks like now alongside the expected result once they have finished physically applying the makeup, all in real time.

8. The software application will then provide step-by-step instructions and guidance on how to physically apply the makeup to achieve the beautified simulated vision of the user displayed on the screen. The instructions will be in visual and audio formats and will be delivered in real time, instructing the user as to which makeup to use, in what colours and types, and where and how to apply it. Visually, directions will be displayed on the screen with arrows, highlights and the like.

9. If the user makes any mistakes in applying the makeup, the software application will detect them. Once a mistake is detected, a notification will be triggered and displayed on the screen specifying the mistake made and suggesting options to correct it.

10. Once the user has physically and successfully completed applying the makeup, both the simulated version of the image (the target) displayed at the beginning of the makeup application process and the real-time image of the user displayed on the screen should look the same, as illustrated in the sketch following this list.
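
The guidance loop of steps 7 to 10 can be pictured as a repeated comparison between the live camera image and the simulated target. In the Python sketch below, the callables passed in (capture_frame, detect_applied_makeup, region_difference, show_correction, show_message) and the target_look object are assumptions standing in for components the specification does not detail:

    # Illustrative outline only of the real-time guidance loop, steps 7-10.

    def guidance_loop(capture_frame, detect_applied_makeup, region_difference,
                      show_correction, show_message, target_look,
                      regions=("eyes", "lips", "cheeks"), tolerance=0.1):
        while True:
            frame = capture_frame()
            mismatches = {}
            for region in regions:
                # Compare the makeup actually applied in this region of the
                # live frame against the simulated target for the same region.
                applied = detect_applied_makeup(frame, region)
                expected = target_look.region(region)
                difference = region_difference(applied, expected)
                if difference > tolerance:
                    mismatches[region] = difference
            if not mismatches:
                # The live image matches the simulated target (step 10).
                show_message("Makeup complete: live image matches the target.")
                return
            # Guide the user on the worst-matching region first (steps 8
            # and 9), e.g. with on-screen arrows and highlights.
            worst = max(mismatches, key=mismatches.get)
            show_correction(frame, worst, target_look.region(worst))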

It will be appreciated that the system and method of the present invention function to collect and identify the major anthropometric features associated with a user's face, so as to apply a digital alteration to the image of the user's face as the photograph is taken in real time, without the need for multiple interactions on the part of the individual to manipulate their image. By providing a software approach with the camera that is able to identify major anthropometric points on the individual's face as the photograph is taken, and to compare these points against an ideal human face, it is possible for the software application to identify those features which diverge from the ideal location or position and to digitally alter and correct such divergences in forming the original photograph.
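
A minimal sketch of this comparison step follows, assuming landmarks are expressed as normalised 2D points. The detect_landmarks and warp_feature callables are placeholders for the unspecified analysis and image-warping components:

    # Sketch of the core loop: compare detected facial landmarks against
    # ideal positions and correct those that diverge beyond a threshold.
    import math

    def beautify(image, detect_landmarks, warp_feature, ideal_landmarks,
                 threshold=0.02):
        """ideal_landmarks: {feature_name: (x, y)} in normalised face
        coordinates, representing the ideal human face."""
        detected = detect_landmarks(image)  # {feature_name: (x, y)}
        for name, ideal in ideal_landmarks.items():
            if name not in detected:
                continue
            actual = detected[name]
            # Features whose position diverges from the ideal beyond the
            # threshold are digitally altered toward the ideal position.
            if math.dist(actual, ideal) > threshold:
                image = warp_feature(image, name, actual, ideal)
        return image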

Throughout the specification and claims the word "comprise" and its derivatives are intended to have an inclusive rather than exclusive meaning unless the contrary is expressly stated or the context requires otherwise. That is, the word "comprise" and its derivatives will be taken to indicate the inclusion of not only the listed components, steps or features that it directly references, but also other components, steps or features not specifically listed, unless the contrary is expressly stated or the context requires otherwise.

It will be appreciated by those skilled in the art that many modifications and variations may be made to the methods of the invention described herein without departing from the spirit and scope of the invention.