Title:
SYSTEMS AND METHODS FOR IMAGE-TO-TEXT AND TEXT-TO-IMAGE ASSOCIATION
Document Type and Number:
WIPO Patent Application WO/2012/104830
Kind Code:
A1
Abstract:
A computerized system for classifying facial images of persons including a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

Inventors:
TAIGMAN YANIV (IL)
HIRSCH GIL (IL)
SHOCHAT EDEN (IL)
Application Number:
PCT/IL2011/000287
Publication Date:
August 09, 2012
Filing Date:
March 31, 2011
Assignee:
VIZI LABS INC (US)
TAIGMAN YANIV (IL)
HIRSCH GIL (IL)
SHOCHAT EDEN (IL)
International Classes:
G06F17/30
Other References:
No relevant documents disclosed
Attorney, Agent or Firm:
SANFORD T. COLB & CO. et al. (Rehovot, IL)
Claims:
CLAIMS

1. A computerized system for classifying facial images of persons comprising:

a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes.

2. A computerized system for classifying facial images of persons according to claim 1 and wherein said computerized facial attribute-wise evaluator comprises:

a database comprising a multiplicity of stored values corresponding to a plurality of facial images, each of said facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith.

3. A computerized system for classifying facial images of persons according to claim 2 and also comprising:

facial attribute statistic reporting functionality providing statistical information derived from said multiplicity of stored values.

4. A computerized system for classifying facial images of persons according to claim 1 and wherein said computerized facial attribute-wise evaluator comprises:

a database comprising a multiplicity of stored facial images, and a multiplicity of stored values, each of said stored facial images having at least some of said plurality of discrete facial attributes, at least some of said discrete facial attributes having said values, represented by adjectives, associated therewith; and an adjective-based comparator, comparing a facial image with said multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute- and adjective-wise with said multiplicity of stored facial images.

5. A computerized system for classifying facial images of persons according to claim 4 and wherein said adjective-based comparator queries said database in an adjective-wise manner.

6. A computerized system for classifying facial images of persons according to any of claims 1 - 5 and also comprising a computerized identifier operative in response to an output from said computerized classifier for identifying at least one stored facial image corresponding to said output.

7. A computerized system for classifying facial images of persons according to claim 6 and wherein said computerized identifier is operative for generating a ranked list of stored facial images corresponding to said output.

8. A computerized system for classifying facial images of persons according to any of claims 1 - 7 and also comprising a social network interface for making available information from a social network to said computerized facial image attribute-wise evaluator.

9. A computerized system for classifying facial images of persons according to any of claims 1 - 8 and also comprising face model generation functionality operative to generate a face model corresponding to said facial image.

10. A computerized system for classifying facial images of persons according to claims 6 and 9 and wherein said computerized identifier employs said face model.

11. A computerized method for classifying facial images of persons comprising:

assigning values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

classifying said facial image in accordance with said plural ones of said discrete facial attributes.

12. A computerized method for classifying facial images of persons according to claim 11 and wherein each of said facial images has at least some of said plurality of discrete facial attributes and at least some of said discrete facial attributes have said values, represented by adjectives, associated therewith.

13. A computerized method for classifying facial images of persons according to claim 12 and also comprising:

providing statistical information derived from said multiplicity of stored values.

14. A computerized method for classifying facial images of persons according to claim 11 and wherein:

each of said stored facial images has at least some of said plurality of discrete facial attributes, and at least some of said discrete facial attributes have said values, represented by adjectives, associated therewith; and also comprising:

comparing a facial image with a multiplicity of stored facial images by comparing said plurality of discrete facial attributes of said facial image, attribute- and adjective-wise with said multiplicity of stored facial images.

15. A computerized method for classifying facial images of persons according to claim 14 and wherein said comparing queries a database in an adjective-wise manner.

16. A computerized method for classifying facial images of persons according to any of claims 11 - 15 and also comprising identifying at least one stored facial image corresponding to an output of said classifying.

17. A computerized method for classifying facial images of persons according to claim 16 and wherein said identifying is operative for generating a ranked list of stored facial images corresponding to said output.

18. A computerized method for classifying facial images of persons according to any of claims 11 - 17 and also comprising making available information from a social network to said computerized facial image attribute-wise evaluator.

19. A computerized method for classifying facial images of persons according to any of claims 11 - 18 and also comprising face model generation operative to generate a face model corresponding to said facial image.

20. A computerized method for classifying facial images of persons according to claims 16 and 19 and wherein said identifying employs said face model.

21. A system for registration of persons in a place comprising:

a facial image/person identification acquisition subsystem acquiring at least one facial image and at least one item of personal identification of a person; and

a computerized subsystem receiving said at least one facial image and said at least one item of personal identification of said person, said computerized subsystem comprising:

face model generation functionality operative to generate a face model corresponding to said at least one facial image; and

image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of said facial image; and

a database which stores information and said values of facial attributes for a plurality of said persons.

22. A system for registration of persons in a place according to claim 21 and wherein said computerized subsystem also comprises:

attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image and thereby to identify a particular individual utilizing said face model.

23. A system for registration of persons in a place according to claim 21 and wherein said computerized subsystem also comprises:

a value combiner operative to combine said face model and said collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

24. A system for registration of persons in a place according to either of claims 22 and 23 and also comprising:

a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to said computerized subsystem; and wherein

said computerized subsystem is operative to:

create a face model corresponding to said subsequent facial image;

assign values represented by adjectives to a plurality of facial attributes of said subsequent facial image; and

identify a corresponding stored facial image and thereby said subsequent facial image as a particular individual, at least one item of personal identification relating to whom is stored in said database.

25. A system for registration of persons in a place according to claims 23 and 24 wherein said value combiner is employed to combine said face model and said collection of values corresponding to said subsequent facial image and thereby to identify said particular individual.

26. A system for registration of persons in a place according to any of claims 21 - 25 and wherein said at least one item of personal identification of said person is obtained from pre-registration data.

27. A system for registration of persons in a place according to any of claims 21 - 26 and also comprising a social network interface for making available information from a social network to said computerized subsystem.

28. A system for registration of persons in a place according to any of claims 24 - 27 and wherein said facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with said subsystem.

29. A system for registration of persons in a place according to any of claims 21 - 27 and wherein said facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with said subsystem.

30. A system for registration of persons in a place according to any of claims 21 - 29 which is embodied in:

a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes.

31. A system for recognizing repeated presence of persons in a place comprising:

a facial image/person identification acquisition subsystem acquiring at least one facial image of a person; and

a computerized subsystem receiving said at least one facial image, said computerized subsystem comprising:

face model generation functionality operative to generate a face model corresponding to said at least one facial image; and

image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of said facial image; and

a database which stores information and said values of facial attributes for a plurality of said persons.

32. A system for recognizing repeated presence of persons in a place according to claim 31 and wherein said computerized subsystem also comprises:

attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image associated with a particular individual, utilizing said face model.

33. A system for recognizing repeated presence of persons in a place according to claim 31 and wherein said computerized subsystem also comprises:

a value combiner operative to combine said face model and said collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values.

34. A system for recognizing repeated presence of persons in a place according to either of claims 32 and 33 and also comprising:

a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to said computerized subsystem; and wherein

said computerized subsystem is operative to:

create a face model corresponding to said subsequent facial image;

assign values represented by adjectives to a plurality of facial attributes of said subsequent facial image; and

identify a corresponding stored facial image and thereby said subsequent facial image as being that of a particular individual, for recognizing repeated presence of that particular person.

35. A system for recognizing repeated presence of persons in a place according to claims 33 and 34 wherein said value combiner is employed to combine said face model and said collection of values corresponding to said subsequent facial image thereby to recognize repeated presence of a person.

36. A system for recognizing repeated presence of persons in a place according to any of claims 31 - 35 and also comprising:

a repeat presence statistics generator employing said face models and said collections of values to generate attribute-wise statistics regarding persons repeatedly present at a place.

37. A system for recognizing repeated presence of persons in a place according to any of claims 31 - 36 and also comprising a social network interface for making available information from a social network to said computerized subsystem.

38. A system for recognizing repeated presence of persons in a place according to any of claims 31 - 37 and wherein said facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with said subsystem.

39. A system for recognizing repeated presence of persons in a place according to any of claims 31 - 37 and wherein said facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with said subsystem.

40. A system for recognizing repeated presence of persons in a place according to any of claims 31 - 39 which is embodied in:

a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes.

41. A method for generating a computerized facial image attribute-wise evaluator, capable of assigning values, each represented by an adjective, to plural ones of discrete facial attributes of a facial image, the method comprising:

gathering a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith; and

generating a function operative to receive a facial image to be evaluated and to utilize results of said gathering for assigning values to plural ones of discrete facial attributes of said facial image to be evaluated, said values being represented by adjectives.

42. A method for generating a computerized facial image attribute-wise evaluator according to claim 41 and wherein said gathering comprises:

collecting a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith from publicly available sources; and

employing crowdsourcing to enhance correspondence between adjectives and facial attributes appearing in said multiplicity of facial images.

43. A method for generating a computerized facial image attribute-wise evaluator according to claim 42 and wherein said crowdsourcing comprises:

employing multiple persons who view ones of said multiplicity of facial images and said adjectives and indicate their views as to the degree of correspondence between said adjectives and said facial attributes in said ones of said multiplicity of images.

44. A method for generating a computerized facial image attribute-wise evaluator according to any of claims 41 - 43 and wherein said values are numerical values.

45. A system for recognizing user reaction to at least one stimulus comprising:

a computerized facial image attribute-wise evaluator, assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

a computerized classifier which classifies said facial image in accordance with said plural ones of said discrete facial attributes.

46. A system for recognizing user reaction to at least one stimulus according to claim 45 and also comprising a computerized attribute comparator comparing said plural ones of said discrete facial attributes prior to and following application of said at least one stimulus.

47. A method for recognizing user reaction to at least one stimulus comprising:

assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of said facial image, said values being represented by adjectives; and

classifying said facial image in accordance with said plural ones of said discrete facial attributes.

48. A method for recognizing user reaction to at least one stimulus according to claim 47 and also comprising comparing said plural ones of said discrete facial attributes prior to and following application of said at least one stimulus.

49. A computerized system for classifying persons comprising:

a relationship coefficient generator which generates relationship coefficients representing the probability of a person being in a particular context at a particular time; and

a computerized classifier which classifies said person in accordance with said plural ones of said relationship coefficients.

50. A computerized system for classifying persons according to claim 49 and wherein said context is one of a geographic location and an event.

51. A computerized system for classifying persons according to either of claims 49 and 50 and wherein said relationship coefficients comprise a value and a decay function.

52. A computerized system for classifying persons according to claim 51 and wherein said decay function is a linear function.

53. A computerized system for classifying persons according to claim 51 and wherein said decay function is an exponential function.

54. A computerized system for classifying persons according to any of claims 49 - 53 and wherein said context is one of a hierarchy of hierarchical contexts.

55. A computerized system for classifying persons according to claim 51 and wherein relationship coefficients of contexts of a hierarchy of contexts are interdependent.

56. A computerized system for classifying persons according to any of claims 49 - 55 and wherein said relationship coefficient generator is operative in a case where multiple persons have been together in at least a first context to generate interdependent relationship coefficients between said multiple persons in a second context.

57. A computerized system for classifying persons according to claim 49 and also comprising:

a computerized classifier which classifies facial images in accordance with plural ones of discrete facial attributes.

Description:
SYSTEMS AND METHODS FOR IMAGE-TO-TEXT AND TEXT-TO-IMAGE

ASSOCIATION

REFERENCE TO RELATED APPLICATIONS

Reference is made to U.S. Provisional Patent Application Serial No. 61/439,021, filed February 3, 2011 and entitled "SYSTEMS AND METHODS FOR IMAGE-TO-TEXT AND TEXT-TO-IMAGE ASSOCIATION", the disclosure of which is hereby incorporated by reference and priority of which is hereby claimed pursuant to 37 CFR 1.78(a)(4) and (5)(i).

Reference is also made to the following patent application, owned by assignee, the disclosure of which is hereby incorporated by reference:

U.S. Patent Application Serial No.: 12/922,984.

FIELD OF THE INVENTION

The present invention relates generally to image-to-text and text-to-image association.

BACKGROUND OF THE INVENTION

The following patents and patent publications are believed to represent the current state of the art:

US Patent Nos.: 4,926,491; 5,164,992; 5,963,670; 6,292,575; 6,301,370; 6,819,783; 6,944,319; 6,990,217; 7,274,822 and 7,295,687; and

US Published Patent Application Nos.: 2006/0253491; 2007/0237355 and 2009/0210491.

SUMMARY OF THE INVENTION

The present invention seeks to provide improved systems and methodologies for image-to-text and text-to-image association. There is thus provided in accordance with a preferred embodiment of the present invention a computerized system for classifying facial images of persons including a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

In accordance with a preferred embodiment of the present invention, the computerized facial attribute-wise evaluator includes a database including a multiplicity of stored values corresponding to a plurality of facial images, each of the facial images having at least some of the plurality of discrete facial attributes, at least some of the discrete facial attributes having the values, represented by adjectives, associated therewith.

Preferably, the system also includes facial attribute statistic reporting functionality providing statistical information derived from the multiplicity of stored values.

Preferably, the computerized facial attribute-wise evaluator includes a database including a multiplicity of stored facial images, and a multiplicity of stored values, each of the stored facial images having at least some of the plurality of discrete facial attributes, at least some of the discrete facial attributes having the values, represented by adjectives, associated therewith, and an adjective-based comparator, comparing a facial image with the multiplicity of stored facial images by comparing the plurality of discrete facial attributes of the facial image, attribute- and adjective-wise with the multiplicity of stored facial images. Preferably, the adjective-based comparator queries the database in an adjective-wise manner.

Preferably, the system also includes a computerized identifier operative in response to an output from the computerized classifier for identifying at least one stored facial image corresponding to the output. Preferably, the computerized identifier is operative for generating a ranked list of stored facial images corresponding to the output.

Preferably, the system also includes a social network interface for making available information from a social network to the computerized facial image attribute-wise evaluator. Preferably, the system also includes face model generation functionality operative to generate a face model corresponding to the facial image. Preferably, the computerized identifier employs the face model.

There is also provided in accordance with another preferred embodiment of the present invention a computerized method for classifying facial images of persons including assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and classifying the facial image in accordance with the plural ones of the discrete facial attributes.

In accordance with a preferred embodiment of the present invention, each of the facial images has at least some of the plurality of discrete facial attributes and at least some of the discrete facial attributes have the values, represented by adjectives, associated therewith. Preferably, the method also includes providing statistical information derived from the multiplicity of stored values.

Preferably, each of the stored facial images has at least some of the plurality of discrete facial attributes, and at least some of the discrete facial attributes have the values, represented by adjectives, associated therewith, and the method preferably also includes comparing a facial image with a multiplicity of stored facial images by comparing the plurality of discrete facial attributes of the facial image, attribute- and adjective-wise with the multiplicity of stored facial images. Preferably, the comparing queries a database in an adjective-wise manner.

Preferably, the method also includes identifying at least one stored facial image corresponding to an output of the classifying. Preferably, the identifying is operative for generating a ranked list of stored facial images corresponding to the output. Preferably, the method also includes making available information from a social network to the computerized facial image attribute-wise evaluator. Preferably, the method also includes face model generation operative to generate a face model corresponding to the facial image. Preferably, the identifying employs the face model.

There is further provided in accordance with yet another preferred embodiment of the present invention a system for registration of persons in a place including a facial image/person identification acquisition subsystem acquiring at least one facial image and at least one item of personal identification of a person, and a computerized subsystem receiving the at least one facial image and the at least one item of personal identification of the person, the computerized subsystem including face model generation functionality operative to generate a face model corresponding to the at least one facial image and image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of the facial image, and a database which stores information and the values of facial attributes for a plurality of the persons.

Preferably, the system also includes attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image and thereby to identify a particular individual utilizing the face model. Preferably, the computerized subsystem also includes a value combiner operative to combine the face model and the collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Preferably, the system also includes a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to the computerized subsystem, and the computerized subsystem is preferably operative to create a face model corresponding to the subsequent facial image, assign values represented by adjectives to a plurality of facial attributes of the subsequent facial image, and identify a corresponding stored facial image and thereby the subsequent facial image as a particular individual, at least one item of personal identification relating to whom is stored in the database.

Preferably, the value combiner is employed to combine the face model and the collection of values corresponding to the subsequent facial image and thereby to identify the particular individual. Preferably, the at least one item of personal identification of the person is obtained from pre-registration data.

Preferably, the system also includes a social network interface for making available information from a social network to the computerized subsystem. Preferably, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with the subsystem. Additionally or alternatively, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with the subsystem.

Preferably, the system is embodied in a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

There is further provided in accordance with yet another preferred embodiment of the present invention a system for recognizing repeated presence of persons in a place including a facial image/person identification acquisition subsystem acquiring at least one facial image of a person, and a computerized subsystem receiving the at least one facial image, the computerized subsystem including face model generation functionality operative to generate a face model corresponding to the at least one facial image, and image-to-attributes mapping functionality operative to assign values represented by adjectives to a plurality of facial attributes of the facial image, and a database which stores information and the values of facial attributes for a plurality of the persons.

Preferably, the computerized subsystem also includes attributes-to-image mapping functionality operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image associated with a particular individual, utilizing the face model. Preferably, the computerized subsystem also includes a value combiner operative to combine the face model and the collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values.

Preferably, the system also includes a subsequent facial image acquisition subsystem acquiring at least one facial image and supplying it to the computerized subsystem, and the computerized subsystem is preferably operative to create a face model corresponding to the subsequent facial image, assign values represented by adjectives to a plurality of facial attributes of the subsequent facial image, and identify a corresponding stored facial image and thereby the subsequent facial image as being that of a particular individual, for recognizing repeated presence of that particular person.

Preferably, the value combiner is employed to combine the face model and the collection of values corresponding to the subsequent facial image thereby to recognize repeated presence of a person. Preferably, the system also includes a repeat presence statistics generator employing the face models and the collections of values to generate attribute-wise statistics regarding persons repeatedly present at a place. Preferably, the system also includes a social network interface for making available information from a social network to the computerized subsystem.

Preferably, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image and at least one item of personal identification of a person other than a person interacting with the subsystem. Additionally or alternatively, the facial image/person identification acquisition subsystem is operative for acquiring at least one facial image of an otherwise unidentified person other than a person interacting with the subsystem.

Preferably, the system is embodied in a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

There is yet further provided in accordance with yet still another preferred embodiment of the present invention a method for generating a computerized facial image attribute-wise evaluator, capable of assigning values, each represented by an adjective, to plural ones of discrete facial attributes of a facial image, the method including gathering a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith, and generating a function operative to receive a facial image to be evaluated and to utilize results of the gathering for assigning values to plural ones of discrete facial attributes of the facial image to be evaluated, the values being represented by adjectives.

Preferably, the gathering includes collecting a multiplicity of facial images, each having at least one facial image attribute, characterized by an adjective, associated therewith from publicly available sources, and employing crowdsourcing to enhance correspondence between adjectives and facial attributes appearing in the multiplicity of facial images. Preferably, the crowdsourcing includes employing multiple persons who view ones of the multiplicity of facial images and the adjectives and indicate their views as to the degree of correspondence between the adjectives and the facial attributes in the ones of the multiplicity of images. Preferably, the values are numerical values.
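
Purely as an illustration of the kind of function this passage contemplates, the sketch below builds one nearest-centroid scorer per (attribute, adjective) pair from crowd-rated examples and discards low-agreement labels. The feature vectors, the agreement threshold and every name in it are assumptions of this sketch, not details taken from the application.

```python
from collections import defaultdict
from typing import Callable, Dict, List, Tuple

Feature = List[float]
# A crowd-labeled example: (feature vector, attribute, adjective, agreement).
LabeledImage = Tuple[Feature, str, str, float]

def generate_evaluator(corpus: List[LabeledImage],
                       min_agreement: float = 0.7
                       ) -> Callable[[Feature], Dict[str, str]]:
    """Build a function that assigns an adjective to each facial attribute
    of a new image, via nearest-centroid over crowd-vetted examples."""
    sums: Dict[Tuple[str, str], List[float]] = {}
    counts: Dict[Tuple[str, str], int] = defaultdict(int)
    for features, attribute, adjective, agreement in corpus:
        if agreement < min_agreement:   # crowdsourcing filter (cf. claim 43)
            continue
        key = (attribute, adjective)
        acc = sums.setdefault(key, [0.0] * len(features))
        for i, f in enumerate(features):
            acc[i] += f
        counts[key] += 1
    centroids = {k: [s / counts[k] for s in v] for k, v in sums.items()}

    def evaluate(features: Feature) -> Dict[str, str]:
        # For each attribute, pick the adjective whose centroid is nearest.
        best: Dict[str, Tuple[float, str]] = {}
        for (attribute, adjective), centroid in centroids.items():
            dist = sum((f - c) ** 2 for f, c in zip(features, centroid))
            if attribute not in best or dist < best[attribute][0]:
                best[attribute] = (dist, adjective)
        return {attr: adj for attr, (_, adj) in best.items()}

    return evaluate

# Toy corpus with two-dimensional features; the third label is filtered out.
corpus = [([0.9, 0.1], "hair color", "dark", 0.9),
          ([0.1, 0.8], "hair color", "blond", 0.8),
          ([0.2, 0.9], "hair color", "blond", 0.4)]
evaluator = generate_evaluator(corpus)
print(evaluator([0.85, 0.2]))   # {'hair color': 'dark'}
```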

There is also provided in accordance with another preferred embodiment of the present invention a system for recognizing user reaction to at least one stimulus including a computerized facial image attribute-wise evaluator, assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

Preferably, the system also includes a computerized attribute comparator comparing the plural ones of the discrete facial attributes prior to and following application of the at least one stimulus.

There is further provided in accordance with yet another preferred embodiment of the present invention a method for recognizing user reaction to at least one stimulus including assigning values representing a facial image obtained at a time corresponding to user reaction to a stimulus to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and classifying the facial image in accordance with the plural ones of the discrete facial attributes.

Preferably, the method also includes comparing the plural ones of the discrete facial attributes prior to and following application of the at least one stimulus.

There is further provided in accordance with yet another preferred embodiment of the present invention a computerized system for classifying persons including a relationship coefficient generator which generates relationship coefficients representing the probability of a person being in a particular context at a particular time, and a computerized classifier which classifies the person in accordance with the plural ones of the relationship coefficients.

Preferably, the context is one of a geographic location and an event. Preferably, the relationship coefficients include a value and a decay function. Preferably, the decay function is a linear function. Alternatively, the decay function is an exponential function.

Preferably, the context is one of a hierarchy of hierarchical contexts. Preferably, relationship coefficients of contexts of a hierarchy of contexts are interdependent. Preferably, the relationship coefficient generator is operative in a case where multiple persons have been together in at least a first context to generate interdependent relationship coefficients between the multiple persons in a second context.

Preferably, the system also includes a computerized classifier which classifies facial images in accordance with plural ones of discrete facial attributes.

BRIEF DESCRIPTION OF THE DRAWINGS

The present invention will be understood and appreciated more fully from the following detailed description, taken in conjunction with the drawings in which:

Figs. 1A, 1B and 1C are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with a preferred embodiment of the present invention;

Figs. 2A and 2B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with another preferred embodiment of the present invention;

Figs. 3A and 3B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Figs. 4A, 4B and 4C are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Figs. 5A and 5B are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention;

Fig. 6 is a simplified illustration of a user satisfaction monitoring system employing image-to-text association in accordance with yet another preferred embodiment of the present invention;

Fig. 7 is a simplified illustration of an image/text/image database generation methodology useful in building a database employed in the systems of Figs. 1A - 6;

Fig. 8 is a simplified flow chart illustrating a training process for associating adjectives with images;

Fig. 9 is a simplified flow chart illustrating the process of training a visual classifier;

Fig. 10 is a simplified flow chart illustrating a process for retrieving adjectives associated with an image;

Fig. 11 is a simplified flow chart illustrating a process for retrieving images associated with one or more adjectives; and

Fig. 12 is a simplified flow chart illustrating a process for retrieving facial images similar to a first image.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

Reference is now made to Figs. 1A, 1B and 1C, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with a preferred embodiment of the present invention. The system of Figs. 1A - 1C preferably includes a computerized facial image attribute-wise evaluator, assigning values representing a facial image to plural ones of discrete facial attributes of the facial image, the values being represented by adjectives, and a computerized classifier which classifies the facial image in accordance with the plural ones of the discrete facial attributes.

As seen in Fig. 1A, on January 1, Mr. Jones, a customer of the AAA Department Store, enters the store and registers as a valued customer of the store at a registration stand 100. The registration stand preferably includes a computer 102 connected to a store computer network, and a digital camera 104 connected to computer 102. The valued customer registration process includes entering personal identification details of the customer, such as his full name, and capturing a facial image 108 of the customer by digital camera 104. Alternatively, personal identification details of the customer may be retrieved, for example, from a pre-existing personal social network account of the customer. Alternatively, the customer may register as a valued customer over the internet from a remote location.

The personal identification details and facial image 108 are transmitted to a computerized person identification system 110 which preferably includes face model generation functionality 112, image-to-attributes mapping functionality 114, attributes-to-image mapping functionality 116 and a value combiner 117. Computerized person identification system 110 also preferably includes a valued customer database 118 which stores registration details and values of facial attributes of all registered customers. It is appreciated that database 118 may be any suitable computerized information store.

Face model generation functionality 112 is operative to generate a face model 120 which corresponds to facial image 108. It is appreciated that face model generation functionality 112 may employ any suitable method of face model generation known in the art. As seen in Fig. 1A, face model 120 generated by face model generation functionality 112 and corresponding to facial image 108 is stored in database 118 as one of the attributes of Mr. Jones.
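
The application deliberately leaves the face model representation open ("any suitable method ... known in the art"). As a non-authoritative sketch only, a face model can be pictured as a fixed-length numeric embedding behind a small interface; the class names and the hash-based placeholder features below are inventions of this sketch, standing in for a real face-embedding method.

```python
import hashlib
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class FaceModel:
    """A fixed-length numeric embedding standing in for face model 120."""
    vector: Tuple[float, ...]

class FaceModelGenerator:
    """Illustrative stand-in for face model generation functionality 112;
    a real system would wrap any suitable face-embedding method here."""
    DIM = 128

    def generate(self, image_bytes: bytes) -> FaceModel:
        # Placeholder: derive DIM pseudo-features from a hash of the image
        # so the sketch runs without any computer-vision dependency.
        digest = (hashlib.sha512(image_bytes).digest() * 2)[: self.DIM]
        return FaceModel(vector=tuple(b / 255.0 for b in digest))

model = FaceModelGenerator().generate(b"raw bytes of facial image 108")
print(len(model.vector))   # 128
```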

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 114 is operative to assign values represented by adjectives 122 to a plurality of facial attributes of facial image 108. The adjectives 122 representing the facial attributes may include, for example, adjectives describing hair color, nose shape, skin color, face shape, type and presence of or absence of facial hair. As seen in Fig. 1A, adjectives generated by attributes mapping functionality 114 which correspond to facial image 108 are stored in database 118 as values of attributes of Mr. Jones.
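
A minimal sketch of what such image-to-attributes mapping could look like, assuming each discrete attribute is scored by its own classifier and the best-scoring adjective is kept. The scorers below return canned scores so the sketch runs without a vision library; all names and values are hypothetical, not taken from the application.

```python
from typing import Callable, Dict

# Hypothetical per-attribute classifiers, each returning adjective -> score.
# A real system would back these with trained visual classifiers.
AttributeScorer = Callable[[bytes], Dict[str, float]]

def hair_color_scorer(image_bytes: bytes) -> Dict[str, float]:
    # Canned scores so the sketch runs without any vision dependency.
    return {"dark": 0.8, "blond": 0.1, "gray": 0.1}

def nose_shape_scorer(image_bytes: bytes) -> Dict[str, float]:
    return {"straight": 0.6, "aquiline": 0.3, "snub": 0.1}

SCORERS: Dict[str, AttributeScorer] = {
    "hair color": hair_color_scorer,
    "nose shape": nose_shape_scorer,
}

def image_to_attributes(image_bytes: bytes) -> Dict[str, str]:
    """Assign the best-scoring adjective to each discrete facial attribute."""
    result = {}
    for attribute, scorer in SCORERS.items():
        scores = scorer(image_bytes)
        result[attribute] = max(scores, key=scores.get)
    return result

print(image_to_attributes(b"raw bytes of facial image 108"))
# {'hair color': 'dark', 'nose shape': 'straight'}
```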

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 116 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 117 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.
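
One plausible reading of the value combiner, sketched below: concatenate the face-model vector with a numeric encoding of the adjective values, then match the combined vector against stored collections by distance. The one-hot encoding and the distance threshold are assumptions of this sketch, not disclosed details.

```python
from math import sqrt
from typing import Dict, List, Optional

def combine(face_model: List[float], adjectives: Dict[str, str],
            vocab: Dict[str, List[str]]) -> List[float]:
    """Concatenate the face model with a one-hot encoding of each
    attribute's adjective, in a fixed attribute order."""
    combined = list(face_model)
    for attribute in sorted(vocab):
        combined.extend(1.0 if adjectives.get(attribute) == adj else 0.0
                        for adj in vocab[attribute])
    return combined

def best_match(query: List[float], records: Dict[str, List[float]],
               max_distance: float = 1.0) -> Optional[str]:
    """Return the stored identity whose combined values lie closest to the
    query, or None if nothing falls within max_distance."""
    best_name, best_dist = None, max_distance
    for name, stored in records.items():
        dist = sqrt(sum((q - s) ** 2 for q, s in zip(query, stored)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name

# Toy usage: a two-dimensional "face model" plus a single attribute.
vocab = {"hair color": ["dark", "blond"]}
stored = {"Mr. Jones": combine([0.2, 0.9], {"hair color": "dark"}, vocab)}
query = combine([0.25, 0.85], {"hair color": "dark"}, vocab)
print(best_match(query, stored))   # Mr. Jones
```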

Turning now to Fig. 1B, it is seen that on a later date, such as on January 17, a customer enters the AAA Department Store and a digital camera 150, mounted at the entrance to the store, captures a facial image 152 of the customer. Facial image 152 is transmitted to computerized person identification system 110 where a face model 160 corresponding to facial image 152 is preferably generated by face model generation functionality 112. Additionally, values 162 represented by adjectives are preferably assigned to a plurality of facial attributes of facial image 152 by image-to-attributes mapping functionality 114.

As shown in Fig. 1B, face model 160 and adjectives 162 are preferably combined by value combiner 117 into a combined collection of values, which is compared to the collections of values stored in database 118, and are found to match the face model and adjectives assigned to Mr. Jones, thereby identifying the person portrayed in facial image 152 captured by camera 150 as being Mr. Jones. It is appreciated that the collection of values combined by value combiner 117 and which are compared to the collections of values stored in database 118 may be any subset of face model 160 and adjectives 162.

Turning now to Fig. 1C, it is shown that for example, upon identifying the customer who has entered the store as Mr. Jones, who is a registered valued customer, the manager is notified by system 110 that a valued customer has entered the store, and the manager therefore approaches Mr. Jones to offer him a new product at a discount.

Reference is now made to Figs. 2A and 2B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with another preferred embodiment of the present invention. As seen in Fig. 2A, on a particular day such as January 1, a customer of the AAA Department Store enters the store and a digital camera 200 mounted at the entrance to the store captures a facial image 202 of the customer. Facial image 202 is transmitted to a computerized person identification system 210 which preferably includes face model generation functionality 212, image-to-attributes mapping functionality 214, attributes-to-image mapping functionality 216 and a value combiner 217. Computerized person identification system 210 also preferably includes a customer database 218, which preferably stores values of facial attributes of all customers who have ever entered the store, and a visit counter 219 which preferably tracks the number of accumulated visits that each particular customer has made to the store. It is appreciated that database 218 may be any suitable computerized information store.

Face model generation functionality 212 is operative to generate a face model 220, which corresponds to facial image 202. It is appreciated that face model generation functionality 212 may employ any suitable method of face model generation known in the art. As seen in Fig. 2A, face model 220 generated by face model generation functionality 212 and corresponding to facial image 202 is stored in database 218 as one of the attributes of the customer of facial image 202.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 214 is operative to assign values represented by adjectives 222 to a plurality of facial attributes of facial image 202. The adjectives 222 representing the facial attributes may include, for example, adjectives describing age group, gender, ethnicity, face shape, mood and general appearance.

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 216 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual. It is appreciated that the collection of values may also include non-physical characteristics of the customer's appearance such as clothing type and color which may be used to identify an individual within a short period of time in a case where current values of facial attributes are not available.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 217 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

As seen in Fig. 2A, face model 220 and adjectives 222 are preferably combined by value combiner 217 into a combined collection of values, which is compared to the collections of values stored in database 218, and are found to match the face model and adjectives corresponding to a returning customer. Therefore, the visit counter 219 of the customer is incremented. It is appreciated that the collection of values combined by value combiner 217 and which are compared to the collections of values stored in database 218 may be any subset of face model 220 and adjectives 222.

Alternatively, if the combined collection of values generated by value combiner 217 is not found to match any of the collections of values stored in database 218, the combined collection of values generated by value combiner 217 and facial image 202 are preferably stored in database 218 as representing a new customer, and the counter 219 of the new customer is initialized to 1.
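
A sketch of this match-or-create flow, assuming a distance threshold decides whether a combined collection of values matches a stored customer; the record layout, the threshold and the class name are illustrative only.

```python
from math import sqrt
from typing import Dict, List

class CustomerDatabase:
    """Minimal stand-in for customer database 218 and visit counter 219."""

    def __init__(self, match_threshold: float = 0.5):
        self.records: Dict[int, List[float]] = {}  # customer id -> values
        self.visits: Dict[int, int] = {}           # customer id -> count
        self.threshold = match_threshold
        self._next_id = 1

    def record_visit(self, combined_values: List[float]) -> int:
        """Match the combined values against stored customers; increment the
        matching customer's counter, or create a new record set to 1."""
        for cid, stored in self.records.items():
            dist = sqrt(sum((a - b) ** 2
                            for a, b in zip(combined_values, stored)))
            if dist < self.threshold:
                self.visits[cid] += 1              # returning customer
                return cid
        cid = self._next_id                        # new customer
        self._next_id += 1
        self.records[cid] = list(combined_values)
        self.visits[cid] = 1
        return cid

db = CustomerDatabase()
first = db.record_visit([0.2, 0.9, 1.0])
again = db.record_visit([0.22, 0.88, 1.0])  # close enough to match
print(first == again, db.visits[first])     # True 2
```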

Turning now to Fig. 2B, it is shown that at closing time, such as at 5:00 PM on January 1, the manager of the store preferably receives a first report 230 from system 210 which includes a segmentation of customers who have entered the store over the course of January 1. The segmentation may be according to any of the adjectives stored in database 218, such as gender, age group, ethnicity and mood. Report 230 also preferably includes information regarding the number of previous visits that were made to the store by the customers of January 1.

Additionally, the manager of the store may also receive a second report 234 from system 210 which includes a segmentation of returning customers who have entered the store over the course of January 1. The segmentation may be according to any of the adjectives stored in database 218, such as gender, age group, ethnicity and mood. It is appreciated that reports 230 and 234 may be useful, for example, for planning targeted marketing campaigns, or for evaluating the success of previously executed marketing campaigns.
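
Segmentation reports of this kind reduce to simple grouping over the stored adjectives; a minimal sketch, with illustrative field names.

```python
from collections import Counter
from typing import Dict, List

def segment(customers: List[Dict[str, str]], adjective: str) -> Counter:
    """Count the day's customers by one stored adjective, as in the
    segmentations of reports 230 and 234."""
    return Counter(c.get(adjective, "unknown") for c in customers)

# A toy day's worth of stored adjective values (fields are illustrative).
day = [
    {"gender": "female", "age group": "adult", "mood": "happy"},
    {"gender": "male", "age group": "senior", "mood": "neutral"},
    {"gender": "female", "age group": "adult", "mood": "happy"},
]
print(segment(day, "gender"))     # Counter({'female': 2, 'male': 1})
print(segment(day, "age group"))  # Counter({'adult': 2, 'senior': 1})
```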

Reference is now made to Figs. 3A and 3B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. As seen in Fig. 3A, on a particular day such as January 1, a customer of the AAA Department Store enters the store and browses merchandise in the store's toy department. A digital camera 250 mounted in the toy department captures a facial image 252 of the customer. As shown in Fig. 3A, additional digital cameras are preferably mounted throughout the various departments of the store.

Facial image 252 is transmitted to a computerized person identification system 260 which includes face model generation functionality 262, image-to-attributes mapping functionality 264, attributes-to-image mapping functionality 266 and a value combiner 267. Computerized person identification system 260 also preferably includes a customer database 268, which preferably stores values of facial attributes of all customers who have entered the store during the day, and information indicating which of the store's departments each customer visited. It is appreciated that database 268 may be any suitable computerized information store.

Face model generation functionality 262 is operative to generate a face model 270, which corresponds to facial image 252. It is appreciated that face model generation functionality 262 may employ any suitable method of face model generation known in the art. As seen in Fig. 3A, face model 270 generated by face model generation functionality 262 and corresponding to facial image 252 is stored in database 268 as one of the attributes of the customer of facial image 252.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 264 is operative to assign values represented by adjectives 272 to a plurality of facial attributes of facial image 252. The adjectives 272 representing the facial attributes may include, for example, adjectives describing age group, gender, ethnicity, face shape, mood and general appearance. As seen in Fig. 3A, adjectives generated by attributes mapping functionality 264 which correspond to facial image 252 are stored in database 268 as values of attributes of the customer of facial image 252.

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 266 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual. It is appreciated that the collection of values may also include non-physical characteristics of the customer's appearance such as clothing type and color which may be used to identify an individual within a short period of time in a case where current values of facial attributes are not available.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 267 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Additionally, system 260 records the department which the customer has visited in database 268 as being the toy department.

Turning now to Fig. 3B, it is shown that at closing time, such as at 5:00 PM on January 1, the manager of the store preferably receives a report 280 from system 260 which includes a segmentation of customers who have entered the store's toy department over the course of January 1. The segmentation may be according to any of the adjectives stored in database 268, such as gender, age group, ethnicity and mood. It is appreciated that report 280 may be useful, for example, for planning targeted marketing campaigns, or for evaluating the success of previously executed marketing campaigns.

Reference is now made to Figs. 4A, 4B and 4C, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. As shown in Fig. 4A, on January 1, a potential attendee registers to attend the florists' annual conference, preferably via a computer 300. As part of the registration process, the potential attendee is preferably prompted to enter personal identification details, such as his full name, and to upload at least one facial image 302 of himself. Alternatively, the potential attendee may choose to import personal identification details and one or more facial images, for example, from a pre-existing personal social network account.

The personal identification details and facial image 302 are transmitted to a computerized conference registration system 310 which preferably includes face model generation functionality 312, image-to-attributes mapping functionality 314, attributes-to-image mapping functionality 316 and a value combiner 317. Computerized conference registration system 310 also preferably includes a database 318 which stores registration details and values of facial attributes of all registered attendees. It is appreciated that database 318 may be any suitable computerized information store.

Face model generation functionality 312 is operative to generate a face model 320, which corresponds to facial image 302. It is appreciated that face model generation functionality 312 may employ any suitable method of face model generation known in the art. As seen in Fig. 4A, face model 320 generated by face model generation functionality 312 and corresponding to facial image 302 is stored in database 318 as one of the attributes of potential attendee Mr. Jones.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 314 is operative to assign values represented by adjectives 322 to a plurality of facial attributes of facial image 302. The adjectives representing the facial attributes may include, for example, adjectives describing hair color, nose shape, skin color, face shape, type and presence of or absence of facial hair. As seen in Fig. 4A, adjectives generated by attributes mapping functionality 314 which correspond to facial image 302 are stored in database 318 as values of attributes of potential attendee Mr. Jones.

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 316 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 317 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Turning now to Fig. 4B, it is seen that on a later date, such as on January 17, an attendee enters the florists' annual conference and approaches a registration booth 330 on the conference floor. Registration booth 330 includes a digital camera 332 which captures a facial image 334 of the attendee. Facial image 334 is transmitted to computerized conference registration system 310 where a face model 340 corresponding to facial image 334 is preferably generated by face model generation functionality 312. Additionally, values 342, represented by adjectives, are preferably assigned to a plurality of facial attributes of facial image 334 by image-to-attributes mapping functionality 314.

As shown in Fig. 4B, face model 340 and values 342 are preferably combined by value combiner 317 into a combined collection of values, which is compared to the collections of values stored in database 318 and is found to match the face model and values assigned to Mr. Jones, thereby identifying the person portrayed in facial image 334 captured by camera 332 as Mr. Jones. It is appreciated that the collection of values combined by value combiner 317 and compared to the collections of values stored in database 318 may be any subset of face model 340 and values 342. Upon being identified as Mr. Jones, the attendee's registration is completed and the attendee is welcomed by the conference staff.
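One possible way to compare a combined collection against the stored collections, matching on whichever subset of values is available, is sketched below; the scoring rule and data structures are assumptions offered for illustration, not the patented implementation:

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face model vectors (one assumed choice of metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_attendee(combined, database):
    """Return the registered attendee whose stored collection of values
    best matches the combined collection; any subset of values, e.g.
    the face model alone or the adjectives alone, may participate."""
    best_name, best_score = None, 0.0
    for name, stored in database.items():
        score = 0.0
        for key, value in combined.items():
            if key == "face_model":
                score += cosine_similarity(value, stored["face_model"])
            elif stored.get(key) == value:   # adjective-wise agreement
                score += 1.0
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```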

Turning now to Fig. 4C, it is shown that while attending the conference, attendees who wish to be introduced to other attendees allow other attendees to capture a facial image 350 of them, using, for example, a digital camera embedded in a mobile communicator device 352. Mobile communicator devices 352 of conference attendees are granted access to computerized conference registration system 310 via a computer network. It is appreciated that the computer network may be, for example, a local computer network or the internet. Additionally or alternatively, an attendee may access computerized conference registration system 310 to register new, currently unregistered attendees to the conference, by capturing a facial image of the new attendee and transmitting the facial image, preferably together with associated personal identification information, to registration system 310.

Upon capturing image 350 of a conference attendee, mobile communicator device 352 transmits image 350 over the computer network to computerized conference registration system 310 where a face model 360 corresponding to facial image 350 is preferably generated by face model generation functionality 312. Additionally, values 362 represented by adjectives are preferably assigned to a plurality of facial attributes of facial image 350 by image-to-attributes mapping functionality 314.

As shown in Fig. 4C, face model 360 and values 362 are combined by value combiner 317 into a combined collection of values, which is compared to the collections of values stored in database 318 and is found to match the face model and values assigned to Mr. Jones, thereby identifying the person portrayed in facial image 350 captured by mobile communicator device 352 as Mr. Jones. It is appreciated that the collection of values combined by value combiner 317 and compared to the collections of values stored in database 318 may be any subset of face model 360 and values 362. Notification of the identification of the attendee portrayed in image 350 as Mr. Jones is transmitted by computerized conference registration system 310 back to mobile communicator device 352, thereby enabling the operator of mobile communicator device 352 to know that he is approaching Mr. Jones.

Reference is now made to Figs. 5A and 5B, which are simplified illustrations of an identification system employing image-to-text and text-to-image association in accordance with yet another preferred embodiment of the present invention. In the embodiment of Figs. 5A and 5B, a relationship coefficient which measures the relationship between a person and a context is employed. The context may be, for example, a geographic location or an event, and the relationship coefficient comprises a value and a predefined decay function. A single person may simultaneously have relationship coefficients with multiple contexts. The relationship coefficient can be used, for example, to predict the probability of a person being at a given location at a particular time.

The decay function may be any mathematical function. For example, the decay function for a geographical location may be a linear function, representing the tendency of a person to gradually and linearly distance himself from the location over time. The decay function for a one-time event may be, for example, an exponential decay function.

While a person is within a particular context, the current value of the relationship coefficient between the person and the context is set to be high. Each additional time the person is sighted within the context, the value of the relationship coefficient is increased, potentially in an exponential manner.
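By way of non-limiting illustration, such a relationship coefficient might be modeled as a value paired with a predefined decay function and reinforced on each repeated sighting. The constants and functional forms in the following sketch are illustrative assumptions only:

```python
import math

class RelationshipCoefficient:
    """A value plus a predefined decay function relating a person to a context."""

    def __init__(self, decay, value=1.0):
        self.value = value   # set high when the person is first sighted in the context
        self.decay = decay   # decay function chosen per context type

    def current(self, days_elapsed):
        """Current strength of the relationship after a given elapsed time."""
        return self.value * self.decay(days_elapsed)

    def register_sighting(self):
        """Repeated sightings increase the value, potentially exponentially."""
        self.value *= 2.0    # assumed growth factor

# Illustrative decay functions:
linear_decay = lambda t: max(0.0, 1.0 - 0.01 * t)   # geographic location
exponential_decay = lambda t: math.exp(-0.1 * t)    # one-time event
```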

It is appreciated that contexts may be hierarchical. For example, a geographic location may lie within a larger geographical area such as a city or a country. Therefore, a person who has a relationship coefficient with a particular geographic location will also have a lower relationship coefficient with all geographical locations hierarchically related thereto, which decreases as a function of the distance between the particular geographic location and the related hierarchical geographic locations.

It is also appreciated that the relationship coefficients of different people may be at least partially interdependent. For example, a first person who has been sighted together with a second person at multiple locations and at multiple times will be assigned a relatively high relationship coefficient with a new location where the second person has been sighted.

As seen in Fig. 5A, on a particular day such as January 1, 2011, a diner dines at Cafe Jaques, which is in close proximity to the Eiffel Tower in Paris, France. A friend of the diner captures a facial image 400 of the diner using a digital camera which is part of a handheld mobile device 402 and registers the sighting of the diner by transmitting facial image 400, together with an associated time and location, over the internet to a computerized person identification system 410. The location may be provided, for example, by a GPS module provided with device 402. Alternatively, the location may be retrieved, for example, from a social network. Using the associated time and location, a relationship coefficient which relates the diner to the location is generated as described hereinabove. Computerized person identification system 410 includes face model generation functionality 412, image-to-attributes mapping functionality 414, attributes-to-image mapping functionality 416 and a value combiner 417. Computerized person identification system 410 also preferably includes a sightings database 418 which preferably stores values of facial attributes of all persons who have been sighted and registered, together with an associated time and location. It is appreciated that database 418 may be any suitable computerized information store.

Face model generation functionality 412 is operative to generate a face model 420, which corresponds to facial image 400. It is appreciated that face model generation functionality 412 may employ any suitable method of face model generation known in the art. As seen in Fig. 5A, face model 420 generated by face model generation functionality 412 and corresponding to facial image 400 is stored in database 418 as one of the attributes of the individual of facial image 400.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 414 is operative to assign values represented by adjectives 422 to a plurality of facial attributes of facial image 400. The adjectives 422 representing the facial attributes may include, for example, adjectives describing age group, gender, ethnicity, face shape, mood and general appearance. As seen in Fig. 5A, adjectives generated by image-to-attributes mapping functionality 414, which correspond to facial image 400, are stored in database 418 as values of attributes of the individual of facial image 400. Additionally, the time and location associated with facial image 400 are also stored in database 418.

Further in accordance with a preferred embodiment of the present invention, attributes-to-image mapping functionality 416 is operative to utilize a collection of values of facial attributes to identify a corresponding stored facial image, and thereby to identify a particular individual. It is appreciated that the collection of values may also include non-physical characteristics of the individual's appearance, such as clothing type and color, which may be used to identify an individual within a short period of time in a case where current values of facial attributes are not available.

Yet further in accordance with a preferred embodiment of the present invention, value combiner 417 preferably is operative to combine a face model and a collection of values of facial attributes into a combined collection of values which can be matched to a corresponding stored collection of values, and thereby to identify a particular individual.

Turning now to Fig. 5B, it is shown that on a later date, such as on February 1, 2011, a diner dines at Cafe Jaques, which is in close proximity to the Eiffel Tower in Paris, France. A bystander captures a facial image 450 of the diner using a digital camera which is part of a handheld mobile device 452 and registers the sighting of the diner by transmitting facial image 450, together with an associated time and location, over the internet to computerized person identification system 410, where a face model 460, corresponding to facial image 450, is preferably generated by face model generation functionality 412. Additionally, values 462 represented by adjectives are preferably assigned to a plurality of facial attributes of facial image 450 by image-to-attributes mapping functionality 414.

As shown in Fig. 5B, face model 460, values 462 and the time and location associated with facial image 450 are preferably combined by value combiner 417 into a combined collection of values, which is compared to the collections of values stored in database 418 and is found to match the combined values assigned to the diner who was last seen at the Eiffel Tower on January 1, 2011. It is appreciated that the collection of values combined by value combiner 417 and compared to the collections of values stored in database 418 may be any subset of face model 460 and values 462. Notification of the identification of the diner portrayed in image 450 is transmitted over the internet by computerized person identification system 410 back to handheld mobile device 452.

It is a particular feature of this embodiment of the present invention that the relationship coefficient which relates the diner to the location may also be used as an attribute value which increases the reliability of the identification of the diner.
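By way of non-limiting illustration, the relationship coefficient might enter the comparison as a multiplicative weight on the visual match score, so that a candidate frequently sighted near the location is preferred. The scoring rule and the `person.coefficients` structure below are assumptions, not part of the present description:

```python
def weighted_match_score(visual_score, person, context, days_since_last_sighting):
    """Fold the person-to-context relationship coefficient into the
    identification score as an additional attribute value."""
    coefficient = person.coefficients[context].current(days_since_last_sighting)
    return visual_score * (1.0 + coefficient)
```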

It is a particular feature of the present embodiment that the combination of the values of the facial attributes associated with a facial image, together with additional information such as a particular location frequented by an individual, is operative to more effectively identify individuals at the particular location or at related locations, such as other locations which are in close proximity to the particular location. It is another particular feature of the present embodiment that identification of individuals is not limited to precise identification of particular individuals based on personal identification information such as first and last name, but rather also includes identification of individuals according to facial attributes and the aggregation of behavioral information pertaining to the individuals.

Reference is now made to Fig. 6, which is a simplified illustration of a user satisfaction monitoring system employing image-to-text association in accordance with yet another preferred embodiment of the present invention. As shown in Fig. 6, a viewer uses a multimedia viewing device 480 to view computerized content 482. It is appreciated that device 480 may be, for example, a television device or a computer. Content 482 may be, for example, a video clip, a movie or an advertisement.

A digital camera 484 connected to multimedia viewing device 480 preferably captures a facial image 486 of the viewer at predefined intervals such as, for example, every few seconds, and preferably transmits images 486 over the internet to an online computerized content satisfaction monitoring system 490. Alternatively, images 486 may be monitored, stored and analyzed by suitable functionality embedded in device 480.

Preferably, system 490 includes image-to-attributes mapping functionality 492 and a viewer expressions database 494. It is appreciated that database 494 may be any suitable computerized information store.

In accordance with a preferred embodiment of the present invention, image-to-attributes mapping functionality 492 is operative to assign a value represented by an adjective 496 to the expression of the viewer as captured in facial images 486, and to store adjectives 496 in database 494. Adjectives 496 may include, for example, "happy", "sad", "angry", "content" and "indifferent". It is appreciated that adjectives 496 stored in database 494 may be useful, for example, for evaluating the effectiveness of content 482.
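One simple realization of such expression mapping would pick the most probable label from a per-adjective expression classifier; the classifier interface assumed in the following sketch is hypothetical:

```python
EXPRESSION_ADJECTIVES = ["happy", "sad", "angry", "content", "indifferent"]

def expression_adjective(face_image, expression_scores):
    """Assign the single best-matching expression adjective to a viewer image.
    `expression_scores` is a hypothetical function returning one score per
    adjective in EXPRESSION_ADJECTIVES for the given image."""
    scores = expression_scores(face_image)
    return max(zip(EXPRESSION_ADJECTIVES, scores), key=lambda pair: pair[1])[0]
```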

Reference is now made to Fig. 7, which is a simplified illustration of an image/text/image database generation methodology useful in building a database employed in the systems of Figs. 1A - 6. As shown in Fig. 7, a plurality of images 500 are collected from an image repository 502 which is publicly available on the internet, by a computerized person identification training system 510. Image repository 502 may be, for example, a publicly available social network or textual search engine which associates text with images appearing on the same page as the images or on one or more nearby pages. Preferably, one or more associated characteristics are provided by the image repository with each of images 500. The characteristics may include, for example, a name, age or age group, gender, general appearance and mood, and are generally subjective and are associated with the images by the individuals who have publicized the images or by individuals who have tagged the publicized images with comments which may include such characteristics.

Computerized person identification training system 510 first analyzes each of the characteristics associated with each of images 500 and translates each such suitable characteristic to an attribute value. For each such value, system 510 then sends each of images 500 and its associated attribute value to a crowdsourcing provider, such as Amazon Mechanical Turk, where a plurality of individuals voice their opinion as to the level of correspondence of each image with its associated attribute value. Upon receiving the crowdsourcing results for each image-attribute value pair, system 510 stores those attribute values which received a generally high correspondence level with their associated image in a database 520.
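The retention rule might look like the following, where the agreement threshold is an assumed parameter rather than one specified in the present description:

```python
def filter_by_crowd_agreement(image_attribute_pairs, votes, threshold=0.8):
    """Keep only attribute values whose image received a generally high
    correspondence level from the crowdsourcing workers."""
    retained = []
    for image, attribute_value in image_attribute_pairs:
        ballots = votes[(image, attribute_value)]  # e.g. list of True/False opinions
        agreement = sum(ballots) / len(ballots)
        if agreement >= threshold:
            retained.append((image, attribute_value))
    return retained
```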

Reference is now made to Fig. 8, which is a simplified flow chart illustrating a training process for associating adjectives with images. As seen in Fig. 8, an adjective defining a facial attribute is chosen by the system from a list of adjectives to be trained, and one or more publicly available textual search engines are preferably employed to retrieve images which are associated with the adjective. Additionally, one or more publicly available textual search engines are preferably employed to retrieve images which are associated with one or more translations of the adjective in various languages. The list of adjectives may be compiled, for example, by collecting adjectives from a dictionary.

A visual face detector is employed to identify those retrieved images which include a facial image. Crowdsourcing is then preferably employed to ascertain, based on a majority vote, which of the facial images correspond to the adjective. The adjective and corresponding facial images are then used to train a visual classifier, as described hereinbelow with regard to Fig. 9. The visual classifier is then employed to associate the adjective with an additional set of facial images, and crowdsourcing is further employed to ascertain the level of correspondence of each of the additional set of facial images with the adjective, the results of which are used to further train the visual classifier. It is appreciated that additional cycles of crowdsourcing and training of the visual classifier may be employed to further refine the accuracy of the visual classifier, until a desired level of accuracy is reached. After the training of the visual classifier, the classifier is added to a bank of attribute functions which can later be used by the system to classify facial images by adjectives defining facial attributes.
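The refinement cycle of Fig. 8 can be summarized as a loop alternating crowd verification with retraining. In the sketch below, every helper named (crowd_verify, fit_visual_classifier, estimate_accuracy, fetch_candidate_images) is a hypothetical stand-in for a step of Fig. 8:

```python
def train_adjective_classifier(adjective, seed_images, target_accuracy):
    """Alternate crowd verification and retraining until the visual
    classifier for `adjective` reaches the desired level of accuracy."""
    positives, negatives = crowd_verify(adjective, seed_images)
    classifier = fit_visual_classifier(positives, negatives)
    while estimate_accuracy(classifier) < target_accuracy:
        # Use the current classifier to propose new associations,
        # then let the crowd confirm or reject them.
        candidates = [img for img in fetch_candidate_images()
                      if classifier.predict(img)]
        new_pos, new_neg = crowd_verify(adjective, candidates)
        positives += new_pos
        negatives += new_neg
        classifier = fit_visual_classifier(positives, negatives)
    return classifier  # added to the bank of attribute functions
```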

Reference is now made to Fig. 9, which is a simplified flow chart illustrating the process of training a visual classifier. As shown in Fig. 9, for each adjective, the results of the crowdsourcing process described hereinabove with regard to Fig. 8 are employed to generate two collections of images. A first, "positive" collection includes images which have been ascertained to correspond to the adjective, and a second, "negative" collection includes images which have not been ascertained to correspond to the adjective.

The images of both the positive and the negative collection are then normalized to compensate for varying 2-dimensional and 3-dimensional alignment and differing illumination, thereby transforming each of the images into a canonical image. The canonical images are then converted into canonical numerical vectors, and a classifier is learned from a training set comprising the positive and negative numerical vectors, using a supervised learning technique such as a Support Vector Machine (SVM).
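Using scikit-learn, which provides a standard SVM implementation, the classifier-learning step might be sketched as follows; the normalization step producing the canonical vectors is assumed to have already run:

```python
import numpy as np
from sklearn.svm import SVC

def learn_attribute_classifier(positive_vectors, negative_vectors):
    """Learn a supervised classifier from the canonical numerical vectors
    of the positive and negative image collections."""
    X = np.vstack([positive_vectors, negative_vectors])
    y = np.concatenate([np.ones(len(positive_vectors)),
                        np.zeros(len(negative_vectors))])
    classifier = SVC(kernel="linear", probability=True)
    classifier.fit(X, y)
    return classifier  # one attribute function in the bank
```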

Reference is now made to Fig. 10, which is a simplified flow chart illustrating a process for retrieving adjectives associated with an image. As shown in Fig. 10, the image is first analyzed to detect and crop a facial image which is a part of the image. The facial image is then converted to a canonical numerical vector by normalizing the image to compensate for varying 2-dimensional and 3-dimensional pose-alignment and differing illumination. The bank of attribute functions described hereinabove with regard to Fig. 8 is then applied to the numerical vector, and the value returned from each attribute function is recorded in a numerical vector which represents the adjectives associated with the facial image.

Reference is now made to Fig. 11, which is a simplified flow chart illustrating a process for retrieving images, from a pre-indexed database of images, which are associated with one or more adjectives. As shown in Fig. 11, a textual query for images having adjectives associated therewith is first composed. Using Natural Language Processing (NLP), adjectives are extracted from the textual query. The system then retrieves images from a previously processed database of facial images which are best matched to the adjectives extracted from the query, preferably by using Latent Dirichlet Allocation (LDA). The retrieved facial images are ordered by the level of correlation of their associated numerical vectors to the adjectives extracted from the query, and the resulting ordered facial images are provided as output of the system.
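A simplified version of the Fig. 11 retrieval is sketched below, extracting adjectives by matching query tokens against the trained adjective vocabulary and ranking stored vectors by a plain correlation score; this replaces the LDA-based matching of the description with an assumed, simpler ranking, purely for illustration:

```python
import numpy as np

def retrieve_by_adjectives(query, adjective_index, database_vectors):
    """Return stored facial image identifiers ordered by the correlation of
    their attribute vectors with the adjectives extracted from the query.
    `adjective_index` maps each trained adjective to its position in the
    attribute vector; both arguments are assumed structures."""
    tokens = query.lower().split()
    wanted = [adjective_index[t] for t in tokens if t in adjective_index]
    target = np.zeros(len(adjective_index))
    target[wanted] = 1.0
    scores = {image_id: float(np.dot(vec, target))
              for image_id, vec in database_vectors.items()}
    return sorted(scores, key=scores.get, reverse=True)
```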

Reference is now made to Fig. 12, which is a simplified flow chart illustrating a process for retrieving facial images which are similar to a first image. As shown in Fig. 12, the first image is first analyzed to detect and crop a facial image which is a part of the image. The facial image is then converted to a canonical numerical vector by normalizing the image to compensate for varying 2-dimensional and 3-dimensional pose-alignment and differing illumination. The bank of attribute functions described hereinabove with regard to Fig. 8 is then applied to the numerical vector, and the value returned from each attribute function is recorded in a numerical vector which represents the adjectives associated with the facial image.

A previously indexed database comprising numerical vectors of images, such as a KD tree, is searched using a similarity function, such as Euclidean distance, to find a collection of numerical vectors which represent images closely matching the numerical vector of the first image.
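With SciPy's cKDTree, which uses Euclidean distance by default, the similarity search could be sketched as follows; in practice the tree would be built once over the indexed database rather than per query:

```python
import numpy as np
from scipy.spatial import cKDTree

def find_similar_faces(query_vector, indexed_vectors, image_ids, k=5):
    """Search a KD tree of attribute vectors for the k images whose
    vectors most closely match the query image's vector."""
    tree = cKDTree(np.asarray(indexed_vectors))  # built once offline in practice
    distances, indices = tree.query(np.asarray(query_vector), k=k)
    return [(image_ids[i], float(d)) for i, d in zip(indices, distances)]
```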

It will be appreciated by persons skilled in the art that the present invention is not limited by what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and subcombinations of the various features described hereinabove, as well as modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not in the prior art.