

Title:
MEDIA OBJECT SELECTION
Document Type and Number:
WIPO Patent Application WO/2015/022689
Kind Code:
A1
Abstract:
A computerized method of selecting a group of media objects. The method comprises analyzing a plurality of profiling media objects, each one of the plurality of profiling media objects associated with a target user, identifying, using a processor, a prevalence of each of a plurality of characterizing properties in each one of the plurality of profiling media objects, selecting at least one of the plurality of characterizing properties based on the respective prevalence, identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having the at least one characterizing property, and outputting an indication of the group.

Inventors:
ECKHOUSE BARZILAI ADI (IL)
Application Number:
PCT/IL2014/050727
Publication Date:
February 19, 2015
Filing Date:
August 13, 2014
Assignee:
PIC DJ LTD (IL)
International Classes:
G06F17/30
Foreign References:
US20130051670A12013-02-28
US20120014560A12012-01-19
US20130148864A12013-06-13
US20110029562A12011-02-03
Attorney, Agent or Firm:
G.E. EHRLICH (1995) LTD. et al. (04 Ramat Gan, IL)
Claims:
WHAT IS CLAIMED IS:

1. A computerized method of selecting a group of media objects, comprising: providing a plurality of profiling media objects, each one of said plurality of profiling media objects associated with a target user;

processing, using a processor, said plurality of profiling media objects to identify a prevalence of each of a plurality of characterizing properties among said plurality of profiling media objects;

selecting at least some of said plurality of characterizing properties based on said respective prevalence;

identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of said group having at least one of said at least some characterizing properties; and outputting an indication of said group.

2. The computerized method of claim 1, wherein said outputting comprises using said indication to manage the distribution of said plurality of profiling media objects among a plurality of storage locations.

3. The computerized method of claim 1, wherein said plurality of profiling media objects are published in at least one web document accessible via a network to a plurality of other users.

4. The computerized method of claim 3, wherein said providing comprises analyzing at least one web accessible document to identify automatically said plurality of profiling media objects.

5. The computerized method of claim 4, wherein said at least one web accessible document includes at least one social network profile webpage of said target user.

6. The computerized method of claim 1, further comprising selecting by an operator at least one dataset storing said plurality of additional media objects.

7. The computerized method of claim 1, wherein said providing comprises analyzing a plurality of messages sent by said target user to identify automatically said plurality of profiling media objects.

8. The computerized method of claim 1, wherein said providing comprises analyzing text associated with each one of a plurality of probed media objects associated with said target user to identify positive or negative context and selecting said profiling media objects from said plurality of probed media objects accordingly.

9. The computerized method of claim 8, wherein said text is text extracted from at least one of a comment given by a user which is socially connected to said target user in at least one web accessible document and a text provided with a message sent by said target user.

10. The computerized method of claim 1, wherein said providing comprises crawling a plurality of web accessible documents to extract automatically said plurality of profiling media objects.

11. The computerized method of claim 1, wherein at least some of said plurality of profiling media objects are visual media objects uploaded by said target user to a web accessible page.

12. The computerized method of claim 1, wherein at least some of said plurality of profiling media objects are media objects added to one of a plurality of messages sent by said target user.

13. The computerized method of claim 12, wherein said plurality of messages are selected from a group consisting of instant messaging messages, cellular messages, and electronic mail messages.

14. The computerized method of claim 1, wherein said plurality of profiling media objects includes a plurality of still images imaging said target user.

15. The computerized method of claim 1, wherein said plurality of profiling media objects are images tagged with a like tag or shared in a social network by said target user.

16. The computerized method of claim 1, wherein said plurality of profiling media objects are images tagged with a like tag or shared in a social network by friends of said target user in said social network.

17. The computerized method of claim 1, wherein at least some of said plurality of profiling media objects are automatically selected based on the number of tags set with regard thereto by a plurality of users.

18. The computerized method of claim 1, wherein said at least some characterizing properties include a characterizing facial feature of a figure that appears in at least some of said plurality of profiling media objects.

19. The computerized method of claim 18, wherein said at least some characterizing properties are selected from a group consisting of a prominent depicted side of the face, an eye state, a depicted facial area of a figure, a depicted facial area of one figure in relation to another figure, a tilting angle of an imaged head of a figure, a curvature level of a mouth of a figure, an exposure level of an organ of a figure, and a symmetry level of an imaged face.

20. The computerized method of claim 1, wherein said at least some characterizing properties include a capturing location indicative of a location in which at least some of said plurality of profiling media objects have been taken.

21. The computerized method of claim 1, wherein said at least some characterizing properties include a capturing location indicative of a location selected from a geographic information based dataset defining a plurality of areas.

22. The computerized method of claim 1, wherein said at least some characterizing properties include a capturing time indicative of an event during which numerous profiling media objects are captured.

23. The computerized method of claim 1, wherein said at least some characterizing properties include a quality measure indicative of an automatically deduced quality of at least some of said plurality of profiling media objects.

24. The computerized method of claim 1, wherein said at least some characterizing properties include a composition of at least one object and figure in at least some of said plurality of profiling media objects.

25. The computerized method of claim 1, wherein said at least some characterizing properties include a capturing time indicative of a time during which at least some of said plurality of profiling media objects have been taken.

26. The computerized method of claim 1, wherein at least some of said plurality of profiling media objects are manually selected by a human operator.

27. A computer readable medium comprising computer executable instructions adapted to perform the method of claim 1.

28. A system of selecting a group of media objects, comprising: a processor;

a profiling module which processes, using said processor, a plurality of profiling media objects, each one of said plurality of profiling media objects associated with a target user, to identify and to select a plurality of characterizing properties, each of said plurality of characterizing properties is selected based on a prevalence thereof among said plurality of profiling media objects; a clustering module which identifies automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of said group having at least one of said plurality of characterizing properties; and a user interface module which outputs an indication of said group.

29. A computerized method of selecting a group of media objects, comprising:

receiving a public profile of a target user that includes a plurality of profiling media objects, each one of said plurality of profiling media objects associated with said target user and accessible via a network to a plurality of other users;

analyzing said public profile, using a processor, to identify a plurality of characterizing properties in each one of said plurality of profiling media objects;

identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of said group having at least one of said plurality of characterizing properties; and outputting an indication of said group.

30. A computerized method of selecting a group of media objects, comprising: dividing a pool of a plurality of media objects into a plurality of groups, wherein a first group of media objects from said pool comprises media objects which document at least one target user and a second group of media objects from said pool comprises media objects which do not document said at least one target user;

identifying at least one characterizing property which characterizes a property of media objects preferred by said at least one target user;

applying a first image processing analysis to select a first subgroup of said first group such that each member of said first subgroup having said at least one characterizing property;

applying a second image processing analysis to select a second subgroup of said second group based on at least one image quality parameter; and outputting an indication of said first group and said second group;

wherein the ratio between the number of members in said first subgroup and the number of members in said second subgroup complies with a desired ratio.

31. The computerized method of claim 30, further comprising managing storage of said pool among a plurality of storage locations based on said indication.

32. The computerized method of claim 30, further comprising selecting said pool such that each one of said plurality of media objects is captured in a common geographical area or during a period associated with an event.

33. The computerized method of claim 30, wherein said desired ratio is the ratio between the number of members in said first group and the number of members in said second group.

34. The computerized method of claim 30, wherein said outputting comprises automatically generating an album which includes members of said first and second subgroups.

Description:
MEDIA OBJECT SELECTION

BACKGROUND

The present invention, in some embodiments thereof, relates to media object selection and, more specifically, but not exclusively, to methods and systems of selecting a group of media objects having common subject matter out of one or more datasets.

In recent years, since the advent of digital photography and in particular with the rise of social networks, a flood of images and videos has been uploaded to network accessible documents, such as webpages and shared folders, and to user computers and data storage. People generally look at only a small fraction of the images and/or videos that their contacts in a social network publish, and people do not access their own data because of the sheer amount of existing data. In an effort to address the problem, some people manually create albums with selected media objects, such as images and/or video files, summarize their videos when sharing them, and share only a small portion of their images.

SUMMARY

According to some embodiments of the present invention, there is provided a computerized method of selecting a group of media objects. The computerized method comprises providing a plurality of profiling media objects, each one of the plurality of profiling media objects associated with a target user, processing, using a processor, the plurality of profiling media objects to identify a prevalence of each of a plurality of characterizing properties among the plurality of profiling media objects, selecting at least some of the plurality of characterizing properties based on the respective prevalence, identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the at least some characterizing properties, and outputting an indication of the group.
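The prevalence-based selection described above can be sketched in code. This is an illustrative sketch only, not the disclosed implementation: each media object is modeled here as a set of characterizing-property labels (in a real system these would be produced by image analysis), and the 0.5 prevalence threshold is an assumed parameter.

```python
from collections import Counter

def select_matching_media(profiling_objects, additional_objects, min_prevalence=0.5):
    """Select media objects that share prevalent characterizing properties.

    profiling_objects: list of sets of property labels, one set per
    profiling media object associated with the target user.
    additional_objects: list of sets of property labels for the
    candidate media objects to be matched.
    """
    # Count in how many profiling objects each property appears.
    counts = Counter(prop for obj in profiling_objects for prop in obj)
    total = len(profiling_objects)

    # Select the properties whose prevalence meets the threshold.
    selected = {p for p, c in counts.items() if c / total >= min_prevalence}

    # A candidate matches if it has at least one selected property.
    group = [obj for obj in additional_objects if obj & selected]
    return selected, group
```

For instance, if two out of three profiling images are labeled "smiling", that property is selected and any additional image carrying it joins the output group.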

Optionally, the plurality of profiling media objects are published in at least one web document accessible via a network to a plurality of other users. More optionally, the providing comprises analyzing at least one web accessible document to identify automatically the plurality of profiling media objects.

More optionally, the at least one web accessible document includes at least one social network profile webpage of the target user.

Optionally, the computerized method further comprises selecting by an operator at least one dataset storing the plurality of additional media objects.

Optionally, the providing comprises analyzing a plurality of messages sent by the target user to identify automatically the plurality of profiling media objects.

Optionally, the providing comprises analyzing text associated with each one of a plurality of probed media objects associated with the target user to identify positive or negative context and selecting the profiling media objects from the plurality of probed media objects accordingly.

More optionally, the text is text extracted from at least one of a comment given by a user which is socially connected to the target user in at least one web accessible document and a text provided with a message sent by the target user.

Optionally, the providing comprises crawling a plurality of web accessible documents to extract automatically the plurality of profiling media objects.

Optionally, at least some of the plurality of profiling media objects are visual media objects uploaded by the target user to a web accessible page.

Optionally, at least some of the plurality of profiling media objects are media objects added to one of a plurality of messages sent by the target user.

More optionally, the plurality of messages are selected from a group consisting of instant messaging messages, cellular messages, and electronic mail messages.

Optionally, the plurality of profiling media objects includes a plurality of still images imaging the target user.

Optionally, the plurality of profiling media objects are images tagged with a like tag or shared in a social network by the target user.

Optionally, at least some of the plurality of profiling media objects are automatically selected based on the number of tags set with regard thereto by a plurality of users.

Optionally, the at least some characterizing properties include a characterizing facial feature of a figure that appears in at least some of the plurality of profiling media objects. More optionally, the at least some characterizing properties are selected from a group consisting of a prominent depicted side of the face, a depicted facial area of a figure, a depicted facial area of one figure in relation to another figure, a tilting angle of an imaged head of a figure, a curvature level of a mouth of a figure, an exposure level of an organ of a figure, and a symmetry level of an imaged face.

Optionally, the at least some characterizing properties include a capturing location indicative of a location in which at least some of the plurality of profiling media objects have been taken.

Optionally, the at least some characterizing properties include a capturing location indicative of a location selected from a geographic information based dataset defining a plurality of areas.

Optionally, the at least some characterizing properties include a capturing time indicative of an event during which numerous profiling media objects are captured.

Optionally, the at least some characterizing properties include a quality measure indicative of an automatically deduced quality of at least some of the plurality of profiling media objects.

Optionally, the at least some characterizing properties include a composition of at least one object and figure in at least some of the plurality of profiling media objects.

Optionally, the at least some characterizing properties include a capturing time indicative of a time during which at least some of the plurality of profiling media objects have been taken.

Optionally, at least some of the plurality of profiling media objects are manually selected by a human operator.

According to some embodiments of the present invention, there is provided a system of selecting a group of media objects. The system comprises a processor, a profiling module which processes, using the processor, a plurality of profiling media objects, each one of the plurality of profiling media objects associated with a target user, to identify and to select a plurality of characterizing properties, each of the plurality of characterizing properties being selected based on a prevalence thereof among the plurality of profiling media objects, a clustering module which identifies automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the plurality of characterizing properties, and a user interface module which outputs an indication of the group.

According to some embodiments of the present invention, there is provided a computerized method of selecting a group of media objects. The method comprises receiving a public profile of a target user that includes a plurality of profiling media objects, each one of the plurality of profiling media objects associated with the target user and accessible via a network to a plurality of other users, analyzing the public profile, using a processor, to identify a plurality of characterizing properties in each one of the plurality of profiling media objects, identifying automatically a group of a plurality of matching media objects from a plurality of additional media objects, each member of the group having at least one of the plurality of characterizing properties, and outputting an indication of the group.

According to some embodiments of the present invention, there is provided a computerized method of selecting a group of media objects. The method comprises dividing a pool of a plurality of media objects into a plurality of groups, wherein a first group of media objects from the pool comprises media objects which document at least one target user and a second group of media objects from the pool comprises media objects which do not document the at least one target user, identifying at least one characterizing property which characterizes a property of media objects preferred by the at least one target user, applying a first image processing analysis to select a first subgroup of the first group such that each member of the first subgroup has the at least one characterizing property, applying a second image processing analysis to select a second subgroup of the second group based on at least one image quality parameter, and outputting an indication of the first group and the second group. The ratio between the number of members in the first subgroup and the number of members in the second subgroup complies with a desired ratio.
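The pool-division method above can be sketched as follows. The callables `documents_target`, `has_property`, and `quality` are hypothetical stand-ins for the two image processing analyses the method describes, and trimming the second subgroup by quality rank is one assumed way to satisfy the desired ratio, not the only one.

```python
def select_album(pool, documents_target, has_property, quality, desired_ratio=1.0):
    """Divide a pool of media objects and select two subgroups.

    documents_target(obj) -> bool: does the object depict the target user?
    has_property(obj)     -> bool: first analysis (preferred property).
    quality(obj)          -> float: second analysis (image quality score).
    """
    # Split the pool into objects that document the target user and
    # objects that do not.
    first_group = [o for o in pool if documents_target(o)]
    second_group = [o for o in pool if not documents_target(o)]

    # First subgroup: objects depicting the user that carry the
    # characterizing property preferred by the user.
    first_sub = [o for o in first_group if has_property(o)]

    # Enforce the desired ratio between subgroup sizes by keeping only
    # the highest-quality objects from the second group.
    max_second = int(len(first_sub) / desired_ratio) if desired_ratio else len(second_group)
    second_sub = sorted(second_group, key=quality, reverse=True)[:max_second]
    return first_sub, second_sub
```

With `desired_ratio=1.0`, the album ends up with equal numbers of user-depicting and scenery objects, echoing claim 33's variant where the ratio mirrors the original group sizes.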

Optionally, the computerized method further comprises managing storage of the pool among a plurality of storage locations based on the indication.

Optionally, the computerized method further comprises selecting the pool such that each one of the plurality of media objects is captured in a common geographical area or during a period associated with an event. Optionally, the desired ratio is the ratio between the number of members in the first group and the number of members in the second group.

Optionally, the outputting comprises automatically generating an album which includes members of the first and second subgroups.

Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.

In the drawings:

FIG. 1 is a schematic illustration of an exemplary system for selection of visual media objects based on identification of characterizing properties of profiling media objects, according to some embodiments of the present invention; and

FIG. 2 is a flowchart of a method of identifying a group of media objects from user selected datasets based on one or more characterizing properties, according to some embodiments of the present invention; and

FIG. 3 is a flowchart of a method for automatically computing instructions to select a group of images from a pool of images, according to some embodiments of the present invention.

DETAILED DESCRIPTION

The present invention, in some embodiments thereof, relates to media object selection and, more specifically, but not exclusively, to methods and systems of selecting a group of media objects having common subject matter out of one or more datasets.

According to some embodiments of the present invention, there are provided methods and systems of selecting a set of media objects, such as images and video files, from media object datasets based on characterizing properties of profiling media objects, such as media objects which are tagged by a target user and/or with a tag indicative of the target user, set either manually or automatically, for instance images tagged and identified via facial recognition. In such methods and systems, knowledge based automatic selection of media objects may be made based on the public profile of a target user and/or manual selection of media objects, for instance photos selected by a user (e.g. uploaded and/or marked for access).

Optionally, the characterizing properties are properties found in a set of user associated images and/or video files with relatively high prevalence. The set of user associated images and/or video files may be extracted from webpages which are associated with the target user, for example a social network profile, messages sent by a target user, items stored in a folder of a messaging service or application, a user media object folder, and/or the like. The set of user associated images and/or video files may be selected by a user using a user interface, for example from folders stored in the user's personal computer and/or smartphone and/or from designated folders which are available online.

The characterizing properties may be features of figures which appear in the images and/or video files, for instance facial features, image quality parameters, composition, capturing time, capturing location, and/or the like. The set of user associated images and/or video files may be selected based on analysis of related text and/or tags provided by socially connected peers. For instance, the textual content of related comments or messages and/or the number of social tags which have been provided to a probed image is taken into account while selecting the set of user associated images and/or video files. In another instance, the characterizing properties are facial features which are extracted from the faces imaged in the images and/or video files, for instance a face size ratio, a prominent side of the face of a figure, for example a left side or a right side, and/or the like.
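The text-and-tag based selection of profiling media objects might look like the following sketch. The sentiment word lists, the dict keys `comments` and `tag_count`, and the minimum-tag threshold are all illustrative assumptions; a real system would use a proper sentiment analyzer on the comments of socially connected peers.

```python
# Toy sentiment lexicons; real systems would use a trained classifier.
POSITIVE = {"love", "great", "beautiful", "awesome"}
NEGATIVE = {"ugly", "bad", "awful", "delete"}

def select_profiling_objects(probed, min_tags=2):
    """Keep probed media objects with positive comments and enough tags.

    probed: list of dicts with hypothetical keys "comments" (list of
    comment strings) and "tag_count" (number of social tags set by users).
    """
    selected = []
    for obj in probed:
        # Collect the normalized words appearing in related comments.
        words = {w.strip(".,!").lower() for c in obj["comments"] for w in c.split()}
        positive = len(words & POSITIVE) > len(words & NEGATIVE)
        if positive and obj["tag_count"] >= min_tags:
            selected.append(obj)
    return selected
```

An image with the comment "Love this!" and three social tags would pass, while a well-tagged image whose comments read negatively would be filtered out.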

Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.

As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module" or "system." Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.

Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD- ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.

A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.

Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.

Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

Reference is now made to FIG. 1, which is a schematic illustration of an exemplary system 100 for selection of visual media objects, such as images or image sequences, for instance video files, from one or more datasets based on identification of characterizing properties in profiling media objects, for instance media objects posted by a user to a social network page, according to some embodiments of the present invention.

The exemplary system 100 may be implemented by one or more network nodes, for example one or more processor based servers 101, connected to a network 106, which host image processing modules 111, 112, for example a profiling module 111 and a clustering module 112, as described below. The modules 111, 112 are executed using one or more processors 113, for example central processing units, microprocessors, distributed computing units, and/or the like. The network is optionally a communication network, such as the Internet and/or an Ethernet.

The system 100 includes one or more user interface modules 103A/B which generate and manage user interface(s) that allow users to input selections, for instance a selection of profiling media objects and/or references to datasets 104 of visual media objects, and to be presented with matched visual media objects, for example as described below. As shown at 103A, the interface modules may be managed at the network connected server(s) 101, set to instruct a browser hosted in a client terminal 105, such as a laptop, a desktop, a Smartphone, Smart glasses, such as Google Glass™, or a tablet, to present a graphical user interface (GUI) that allows presenting visual media objects to a user and/or receiving user selections therefrom. The user GUI may be a widget, a Flash™ component, an extensible markup language (XML) component, and/or any hypertext markup language (HTML) component. The interface module may be part of an existing web service or site, automatically or manually providing an image and/or video selection service. An interface module 103B may be locally installed in the client terminal 105, for instance as an application downloaded from an application store, a browser add-on, and/or a plug-in.

Reference is now also made to FIG. 2, which is a flowchart 200 of a method of identifying a group of media objects from user selected datasets based on one or more characterizing properties, according to some embodiments of the present invention. The method is optionally implemented using the system 100 depicted in FIG. 1 for automatic selection of media objects for each one of a plurality of target users, for example subscribers of a service and/or social network peers.

First, as shown at 201, media objects which are associated with a target user are provided, for example automatically identified or manually received. These media objects, referred to herein as profiling media objects, are optionally media objects automatically found as part of the public profile of the user and/or media objects manually selected by the user, for instance using a designated user interface. As used herein, a public profile of a target user means a set of data, including media data, that is available to users who are not the target user by browsing to network accessible documents, such as webpages. These users may receive access based on a social connection to the target user, for instance friendship in a social network, by membership in an organization, by receiving messages, and/or without any connection to the target user, for example by using a search engine and a browser. Optionally, one or more web accessible documents and/or user generated messages which are related to a target user are analyzed, for instance by the profiling module 111, to identify the profiling media objects. As used herein, a web accessible document means a webpage, such as a social network profile page or wall, a media file, for instance a clip or any video file, a feed, a word-processing document, a portable document format (PDF) document, a presentation, or a page of a media posting service, such as Instagram™. A user generated message means an email, an SMS, an instant messaging (IM) message, and/or the like. A visual media object, for brevity referred to herein as a media object, includes an image, a video file, a selected background, and/or the like. The profiling media objects are optionally images and/or video clips tagged in a social network, such as Facebook™, by the target user and/or his socially connected friends.
Tagging of a visual media object is indicative that the target user appears in the visual media object, was found to be related to the visual media object by a social network user, for example indicating that he is related to a scene imaged in the visual media object, and/or marked the visual media object with a Like tag. Social network tags, for instance name tags or Like tags, are user controlled tags that may be removed by the tagged user, for instance the target user, even if he did not add the tag. Therefore, such tagging is usually indicative that the tagged user has a positive appreciation of the tagged image, as he at least did not remove the tag. When the profiling media objects are images and/or video files (e.g. clips) tagged by the user with a Like tag, the tag is indicative that the target user wants other people to know that the visual media object has a positive or substantial effect. Optionally, a media object is selected based on the number of Like tags it received in a social network, the number of positive comments it received in a social network, the number of sharing actions of the image in the social network, and/or the number of users who shared the image. Positive comments may be identified using textual analysis, for instance sentiment analysis as described below. The above allows maintaining a direct relationship between the scope of social network appreciation and positive reaction and the weight given to the image or video.
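Purely as a non-limiting illustration, the weighting of a media object by the scope of its social network appreciation may be sketched as follows; the function name and weight values are illustrative assumptions and do not correspond to any real social network API:

```python
def social_weight(likes, positive_comments, shares,
                  w_like=1.0, w_comment=2.0, w_share=3.0):
    """Weight a profiling media object by its social appreciation signals.

    Like tags, positive comments, and sharing actions each contribute to
    the weight, so a broader positive reaction yields a larger weight.
    """
    return likes * w_like + positive_comments * w_comment + shares * w_share
```

In such a sketch, an image with ten Like tags, two positive comments, and one sharing action would receive a weight of 17.0, preserving the direct relationship between appreciation and weight described above.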

Additionally or alternatively, the profiling media objects are visual media objects published by the user, for example uploaded to a public space, for instance uploaded to a social network album and/or wall.

Additionally or alternatively, the profiling media objects are images and/or video clips sent or forwarded by the target user to a contact via a messaging service such as iMessage™, Whatsapp™, and Line™. Additionally or alternatively, the profiling media objects are visual media objects sent or forwarded to a contact via a messaging service such as an email manager and/or a short message service (SMS) and/or multimedia messaging service (MMS) editor.

Optionally, comments related to one or more of the images are analyzed, for example filtered based on textual analysis, identifying a positive or negative context of the image. Optionally, the analyzed comments are of the target user; the comments may be comments on an image tagged in a social network, textual content in a message sent with the image, and/or text that appears with the image after identifying a copy of the image in a third party document, for example using image matching algorithms. Optionally, the textual analysis includes sentiment analysis, for instance as described in Turney (2002), "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews", Proceedings of the Association for Computational Linguistics, pp. 417-424, and in Benjamin Snyder and Regina Barzilay (2007), "Multiple Aspect Ranking using the Good Grief Algorithm", Proceedings of the Joint Human Language Technology/North American Chapter of the ACL Conference (HLT-NAACL), pp. 300-307.
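By way of a simplified, non-limiting sketch, a lexicon based polarity check may stand in for the sentiment analysis methods cited above; the word lists and function name below are illustrative assumptions only, and real implementations would use the cited semantic-orientation techniques:

```python
# Tiny illustrative sentiment lexicons; real systems learn these.
POSITIVE = {"great", "beautiful", "love", "awesome", "nice"}
NEGATIVE = {"ugly", "bad", "awful", "hate", "blurry"}

def comment_polarity(comment):
    """Classify a comment as positive, negative, or neutral by counting
    lexicon hits; used to filter comments related to an image."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```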

The one or more web accessible documents may be identified using web crawler(s) which search for publicly available content pertaining to the target user. The one or more web accessible documents may be manually designated by a user. The one or more web accessible documents may be identified based on predefined settings, for example a profile of the target user in social network websites.

The selection of one or more web accessible documents which include profiling media objects associated with the user spares a human operator, for instance the user, the time and effort of selecting the profiling media objects. The fact that these visual media objects have tags or other web associations to the user, for example they appear in one of his public folders or pages, are tagged by the user, are tagged by users who are socially connected to the user, are selected as profile visual media objects and/or the like, is indicative that the user positively evaluates these visual media objects.

Alternatively, the profiling media objects are manually selected by the target user or any other user, for example marked and/or uploaded using a user interface. For example, the target user or any other user may grant the system access to a selection of media objects stored in his client terminal and/or in a third party storage.

Now, as shown at 202, each one of the profiling media objects is processed to identify one or more characterizing properties. Optionally, each image is processed to determine a compliance with a plurality of characterizing property rules and/or filters.

The output of the analysis of each image is a per-image list of one or more characterizing properties which are found and/or not found in the image. For example, a matrix documenting the compliance of all images is generated. This allows identifying which characterizing properties prevail in the user associated images and which characterizing properties do not.
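The compliance matrix and the prevalence derived from it may be sketched as follows, purely as a non-limiting illustration; the dictionary-of-rules interface is an illustrative assumption:

```python
def compliance_matrix(images, rules):
    """Document which characterizing-property rules each image complies
    with, as {property_name: [True/False per image]}."""
    return {name: [rule(img) for img in images] for name, rule in rules.items()}

def prevalence(matrix):
    """Fraction of the profiling images in which each property was found."""
    return {name: sum(hits) / len(hits) for name, hits in matrix.items()}
```

Under this sketch, a rule is any callable that inspects an image representation and returns whether the characterizing property is found in it.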

Optionally, one or more of the characterizing properties are visual. Optionally, a characterizing property is a facial property that prevails among figures imaged in the media objects. For example, the facial property may be the area depicting a face of an imaged human, for example the target user, in relation to the total size of the image and/or in relation to faces of other humans in the image. The face size ratio may be calculated by a simple pixel count calculation. In another example, the facial property is a prominently depicted side of the face of a figure, for instance the side of a face imaged in the media object, for example a left side or a right side. The preferred side may be identified by image processing, for example identification of the location of an imaged eye in relation to another imaged facial feature, for example the location and/or size of an imaged ear, the angle of the nose and/or the like. In another example, the facial property is the tilting angle of an imaged head, for example a left, right, up, and/or down tilt. In another example, the facial property is a fixed facial expression, for example a lips facial expression such as a smile, pursed lips, lip biting, and/or a covered mouth.

Optionally, the facial property is a curvature level of a mouth, for instance a curvature of a smile and/or a turn of the lips (e.g. downturn or upturn); see for example Yu-Hao Huan, Face Detection and Smile Detection, Dept. of Computer Science and Information Engineering, National Taiwan University. Such a facial property may be identified by smile detection algorithms. Optionally, the facial property is an exposure and/or lack of exposure of an organ of imaged figures, for instance the teeth, the nose, the ears and/or the like. The facial property may be the teeth brightness and/or whiteness level. The teeth brightness and/or whiteness level may be identified by respective pixel value analysis. Optionally, the facial property is a symmetry level of an imaged face. The symmetry may be determined using various facial symmetry level detection processes; see for example Fred Stentiford, Attention Based Facial Symmetry Detection, UCL Adastral Park Campus, Martlesham Heath, Ipswich, UK. In another example, the facial property is the eye state (e.g. opening level, opening curve) of the eyes of the imaged face. The eye states may be obtained from eye features such as the inner corner of the eye, the outer corner of the eye, the iris, and the eyelid. The eye state may be detected as described in Qiong Wang, Eye Location and Eye State Detection in Facial Images with Unconstrained Background, School of Computer, Nanjing University of Science & Technology, Nanjing 210094, China, which is incorporated herein by reference.
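As a minimal, non-limiting sketch of the simple pixel count calculation mentioned above for the face size ratio, assuming a face bounding box has already been obtained from a face detector:

```python
def face_size_ratio(face_box, image_size):
    """Ratio of a detected face area to the total image area, by pixel count.

    face_box is an (x, y, width, height) bounding box in pixels;
    image_size is the (width, height) of the whole image.
    """
    _, _, fw, fh = face_box
    iw, ih = image_size
    return (fw * fh) / (iw * ih)
```

A face may analogously be compared to the faces of other imaged humans by dividing its area by theirs instead of by the total image area.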

According to some embodiments of the present invention, the characterizing property is a locational property that prevails in the media objects, for example a location stamp found in the metadata of an image. In such embodiments, the locational property may be indicative of an image taken in certain scenery, bar, restaurant, house and/or the like. The locational property is optionally a locational stamp, such as a global positioning system (GPS) stamp that is found in the metadata of the image.

According to some embodiments of the present invention, the characterizing properties include capturing time that prevails in the media objects. This allows identifying a set of media objects captured at the same event and/or area, an indication of the importance of the event and the documentation thereof to the user. Additionally or alternatively, the capturing time is matched with a list of dates that includes general dates, such as holidays and/or weekends, and personal dates, such as birthdays, wedding days, friends or family events and/or the like. Personal dates may be extracted from a calendar and/or deduced from a social network page associated with the target user. Friends or family events may be deduced from social network pages of peers who are socially connected to the target user. Additionally or alternatively, the capturing location is matched with a dataset of locations, for example a geographic information based dataset indicative of touristic or historical locations. This allows identifying a set of media objects captured at leisure locations, an indication of the importance and uniqueness of the image and the documentation thereof to the user. The locations may be identified by matching geographical coordinates.
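A non-limiting sketch of matching a capturing time against such a list of general and personal dates, assuming the dates are available as Python `date` objects:

```python
from datetime import date

def matches_dates(capture_time, special_dates, include_weekends=True):
    """True if an image's capture date falls on a listed general or
    personal date, or (optionally) on a weekend."""
    d = capture_time if isinstance(capture_time, date) else capture_time.date()
    if d in special_dates:
        return True
    return include_weekends and d.weekday() >= 5  # Saturday=5, Sunday=6
```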

According to some embodiments of the present invention, the characterizing properties include one or more automatically deduced quality measures that prevail in the media objects, for example sharpness, noise, dynamic range (or exposure range), tone reproduction, image brightness, contrast (also known as gamma), color accuracy, distortion, vignetting (light falloff), exposure accuracy, lateral chromatic aberration (LCA), lens flare, color moire, and software induced artifacts. Other quality measures may be defocus blur level, motion blur level, off-angle level, occlusion level, specular reflection, lighting, and pixel count. Other quality measures may be bivariate measures such as average difference, structural content, normalized cross correlation, correlation quality, maximum difference, image fidelity, weighted distance, Laplacian mean square error, peak mean square error, normalized absolute error, and normalized mean square error. Each quality measure may be evaluated by a designated filter or algorithm.
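As one non-limiting example of a designated sharpness filter, the variance of a Laplacian response is a common sharpness proxy; the sketch below assumes a grayscale image given as a list of equal-length pixel rows, which is an illustrative representation rather than the claimed one:

```python
def laplacian_sharpness(gray):
    """Variance of a 4-neighbour Laplacian over a grayscale image;
    higher values suggest a sharper (less blurred) image."""
    h, w = len(gray), len(gray[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Discrete Laplacian: sum of 4 neighbours minus 4x the centre.
            lap = (gray[y - 1][x] + gray[y + 1][x]
                   + gray[y][x - 1] + gray[y][x + 1]
                   - 4 * gray[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A perfectly flat image yields zero variance, while high-frequency detail yields a large variance, so a threshold on this value may serve as a generic sharpness filter.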

In such embodiments, quality measures which prevail in the visual media objects from the web accessible documents and/or user generated messages are used to select more images, based on the assumption that these quality measures had a positive effect on the user who selected to place these images or selected not to remove these images.

According to some embodiments of the present invention, the characterizing properties include one or more compositions that prevail in the media objects. The composition may be determined by analyzing each image in accordance with one or more predefined composition rules. The analysis may include identifying regions of compositional significance within the image and applying the composition rules to those regions, identifying whether an image complies with the composition rule(s) or not. The compositional rules may include well known photographic composition rules such as the rule of thirds, the golden ratio, and/or the diagonal method. Composition detection algorithms may be used.
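A rule of thirds check may be sketched as follows, assuming a region of compositional significance has already been reduced to a centre point; the tolerance value is an illustrative assumption:

```python
def near_thirds(center, image_size, tolerance=0.07):
    """True if a subject centre lies within `tolerance` (as a fraction of
    the image dimension) of one of the rule-of-thirds lines."""
    cx, cy = center
    w, h = image_size
    lines_x = (w / 3, 2 * w / 3)
    lines_y = (h / 3, 2 * h / 3)
    return (any(abs(cx - lx) <= tolerance * w for lx in lines_x) or
            any(abs(cy - ly) <= tolerance * h for ly in lines_y))
```

An image complies with the rule when its significant region sits near a third line; a centred subject does not.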

According to some embodiments of the present invention, the characterizing properties include the identity of one or more people imaged in the analyzed images. For example, the figures imaged in the analyzed image are identified and their identity is used as a characterizing property. The identity may be of the target user, the target user's close family, the target user's remote family, and/or people who are socially connected to the target user. The identity of imaged people may be determined using facial recognition techniques and based on a map that associates between users, for example a social network connection map and/or a family network, for instance a myHeritage family tree and/or the like. This allows identifying more images of these people.

According to some embodiments of the present invention, the characterizing properties include the number of people who are imaged in the user preferred media objects, for example how many people are imaged in each image. The number of people may be determined by face detection algorithm(s). It should be noted that in the above examples, extraction of characterizing properties from images is described. Similar methods may be used to extract the characterizing properties from video files, for example by analysis of frame(s) and/or using video processing methods. Similar methods may be used to extract the characterizing properties from image mosaics and other forms of visual media objects.

As shown at 203, characterizing properties having a prevalence rate of more than a predefined threshold among the profiling media objects are selected. The predefined threshold may be 20%, 30%, 40%, 50%, 60%, 70%, 80%, 90%, or any intermediate or larger prevalence rate.
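The threshold based selection shown at 203 may be sketched, purely as a non-limiting illustration, as:

```python
def select_properties(prevalence_rates, threshold=0.5):
    """Keep the characterizing properties whose prevalence rate among the
    profiling media objects exceeds the predefined threshold."""
    return {name for name, rate in prevalence_rates.items() if rate > threshold}
```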

As shown at 204, one or more datasets of visual media objects are selected by the operator, for example the user associated with the profiling media objects. The datasets may be of visual media objects stored locally on one or more client terminals of one or more users, visual media objects remotely stored on a web connected database, and/or visual media objects which are published by users who are socially connected to the operator. The dataset may be an aggregation from a plurality of sources, for example from folders or devices of a plurality of users. Optionally, the datasets include images taken during a user defined period, for example images taken during the last week, month and/or year. Optionally, the datasets include images taken during an event held at a certain time and/or location. For example, images taken in a class or in school during a year of education (e.g. fifth grade) are selected as a dataset. In another example, images taken during a vacation, a wedding, a tour or a household event are selected. In another example, images taken during a weekly course are selected, for example images taken during a class held between 16:00 and 18:00 in a certain area of a university or school.

As shown at 205, the visual media objects from one or more datasets are analyzed, for instance by the clustering module, to identify automatically a set of one or more matching media objects. The set of matching media objects may include 5, 10, 100, 1000 or any intermediate or larger amount of media objects. The amount may be determined manually by an operator and/or automatically, for example by a requirement from an album generation application. The size may be determined based on a template that is filled with the matching media objects, for instance a webpage or a widget that is set to present dynamic or static information about the target user. The dataset(s) may include 10, 100, 1000, 10,000, 100,000, 1,000,000 or any intermediate or larger amount of media objects. Each matching media object has one or more of the selected characterizing properties. Optionally, the one or more matching media objects are the matching media objects having more selected characterizing properties than other media objects. In such embodiments, a media object may be scored, for instance based on the number of characterizing properties which appear in the media object and/or type of characterizing properties where different characterizing properties receive a different weight.
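The optional scoring described above may be sketched as follows; the property weights and the dictionary interface are illustrative assumptions:

```python
def score(media_properties, weights):
    """Score a media object by the weighted characterizing properties it
    has; unlisted properties default to a weight of 1.0."""
    return sum(weights.get(p, 1.0) for p in media_properties)

def top_matches(candidates, weights, n):
    """Rank candidate media objects and keep the n best scoring ones.

    candidates maps an object identifier to the set of selected
    characterizing properties found in that object.
    """
    ranked = sorted(candidates,
                    key=lambda oid: score(candidates[oid], weights),
                    reverse=True)
    return ranked[:n]
```

In this sketch, a media object having more (or more heavily weighted) selected characterizing properties is ranked above other media objects, matching the selection criterion described above.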

Optionally, the matching media objects may be filtered or otherwise selected by taking into account one or more image selection properties, for example type, capturing location, capturing time, and/or the like. An image selection property may be any of the above characterizing properties. For example, the image selection property is a locational property, for example a location stamp found in the metadata of an image. In such embodiments, the locational property may be indicative of an image taken in certain scenery, a bar, a restaurant, a house and/or the like. The locational property is optionally a locational stamp, such as a GPS stamp that is found in the metadata of the image. For example, matching media objects are filtered so that more media objects are selected from touristic, leisure or historical locations. The locations may be identified by matching geographical coordinates. This allows selecting images taken in places that have been acknowledged as visually interesting by many users.

Now, as shown at 206, an indication of the matching media object(s) is outputted, for example by automatically presenting some or all of the matching media objects on the client terminal of the operator, uploading some or all of the matching media objects to a social network page and/or a public folder, printing some or all of the matching media objects, creating a sequence of images, for instance a video file that includes some or all of the matching media objects, and/or creating a mosaic of the matching media objects.

Optionally, a media object album is automatically created and/or updated based on the process depicted in FIG. 2. In such embodiments, the media object album may be uploaded to a social network page, posted in a blog, added to a memory of a mobile device, for instance a Smartphone, forwarded by an electronic message such as an email, and/or the like. Optionally, as shown at 207, the process is performed iteratively, for example every day, week, month, and/or any intermediate time, allowing updating the matching media object(s). Optionally, the updating is performed by replacing some or all of the matching media object(s) with new matching media object(s), refreshing the images which are presented or uploaded based on exposure time, matching rank and/or the like.

According to some embodiments of the present invention, the process depicted in FIG. 2 is used to create an album for summarizing a period, an event, and/or to create a visual documentation of the target user. In such embodiments, a user interface may guide the user to create the album and optionally to send it to print.

Optionally, the matching media object(s) are published in a folder and/or set to be presented in a presentation upon request. In such embodiments, the above methods and systems are used as a summarization tool, allowing an operator to save time and effort in creating an album documenting the user.

According to some embodiments of the present invention, the method for selecting a group of media objects depicted in FIG. 2 is used for selecting a group of media objects, such as images and video clips, which document the target user, while images of landscape or other people are selected by other image selection methods. In such a manner, the characterizing properties of media objects wherein the target user is imaged are characterizing properties which are found in media objects liked by the target user or found with relatively high prevalence in media objects selected by the target user for publication, for example in media objects posted by the target user to social network page(s) or sent or forwarded by the target user to contact(s) via a messaging service.

For example, reference is now made to FIG. 3, which depicts a flowchart of a method 300 for automatically computing instructions to select a group of images from a pool of images, according to some embodiments of the present invention. The method may be a method for automatically computing instructions to generate an album from a pool of images or for selecting media objects for a certain memory space. First, as shown at 301, a pool of images is designated, for example selected by a target user or any other operator. For example, the pool may be images in a selected folder, images taken in a certain period, images posted to social network(s) and/or the like. For example, a set of images captured in a certain period (or in repetitive time slots, such as a course) and/or location (or a geographic area) is identified, for example images captured during a vacation, a course, an event and/or the like. The pool may be provided similarly to the above described process with reference to 204.

Optionally, as shown at 302, images are classified to two classes, for example images with faces and images without faces. Alternatively, in another example, images are classified to two classes, images documenting the target user(s) therein and images which do not document the target user(s).

As shown at 303, images wherein one or more target user(s) are imaged (e.g. all facial images or images having the faces of the target user(s) therein) are processed to facilitate a selection of images which are preferred and/or liked by the target user(s), for example as described above with reference to 205. For example, facial media objects are processed to identify and to select media objects with characterizing properties which have been liked by the target user or media objects with characterizing properties found with relatively high prevalence in media objects selected by the target user for publication. In such a manner, only images and video clips wherein the target user is in a subjectively attractive pose, angle, and/or facial exposure are selected while other images remain unselected.

As shown at 304, images wherein one or more target user(s) are not imaged may be classified based on one or more image quality parameters. The image quality parameters may be outcomes of applying one or more generic image quality filters where classification is determined with reference to a quality threshold. For example, images may be selected based on composition analysis such as rule of thirds analysis, golden ratio analysis, and/or diagonal method analysis. Additionally or alternatively, images may be selected based on image quality estimation, for example sharpness estimation, noise estimation, dynamic range (or exposure range) estimation, tone reproduction estimation, image brightness level, contrast level (also known as gamma), color accuracy, distortion presence or absence, vignetting (light falloff), exposure accuracy, lateral chromatic aberration (LCA), lens flare presence or absence, color moire presence or absence, and/or any artifact analysis.

Optionally, as shown at 305, images from the two groups are selected to maintain a predefined ratio, optionally a user defined ratio. For brevity, media objects selected from a group may be referred to as a sub group. The words first and second are used as preambles to differentiate between different sub groups, groups, and/or image processing analysis methods. In such embodiments, the image selection process may be repeated until a desired suitable ratio of images is achieved. Optionally, the suitable ratio is automatically deduced from an analysis of the albums uploaded by the target user(s). Optionally, the suitable ratio is automatically deduced from an analysis of the social web pages of the target user(s). Optionally, the suitable ratio is automatically deduced from the pool such that the ratio in the pool is maintained in the generated album. Optionally, the suitable ratio is manually defined by the target user(s). Optionally, a suitable ratio of images to video files is also maintained based on the same principles.
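Maintaining a predefined ratio between the two sub groups may be sketched as a simple interleaving, purely as a non-limiting illustration; the default 2:1 ratio is an illustrative assumption:

```python
def interleave_ratio(first_group, second_group, ratio=(2, 1)):
    """Select images from the two sub groups in alternating blocks so
    that the predefined ratio is maintained as far as possible."""
    a, b = ratio
    out, i, j = [], 0, 0
    while i < len(first_group) or j < len(second_group):
        out.extend(first_group[i:i + a])
        out.extend(second_group[j:j + b])
        i = min(i + a, len(first_group))
        j = min(j + b, len(second_group))
    return out
```

When one sub group is exhausted, the remainder of the other is appended, so every selected image still appears in the output.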

Now, as shown at 306, a collection of images is selected, optionally with the predefined ratio. The selected images may be images which are set to be stored in a certain location and/or a basis for an album uploaded to a webpage or account and/or made available via a social web page.

The above process may be continuously or iteratively performed to update the group of selected images.

According to some embodiments of the present invention, the methods described above with reference to FIG. 2 and FIG. 3 are used to manage the storage of media objects in the memory of a client terminal, for example in the memory of a Smartphone, a tablet, a camera and/or the like. In such embodiments, while the pool may be automatically uploaded to a cloud storage, the selected images remain in the storage of the client terminal and/or are otherwise made available for immediate usage.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

It is expected that during the life of a patent maturing from this application many relevant methods and systems will be developed and the scope of the terms unit, network, and module is intended to include all such new technologies a priori.

As used herein the term "about" refers to ± 10 %.

The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".

The phrase "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.

As used herein, the singular forms "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.

The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.

Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.

Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicate number and a second indicate number and "ranging/ranges from" a first indicate number "to" a second indicate number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.

It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.

Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims. All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.