


Title:
SYSTEMS AND METHODS FOR CREATION OF MULTI-MEDIA CONTENT OBJECTS
Document Type and Number:
WIPO Patent Application WO/2017/163238
Kind Code:
A1
Abstract:
There is provided a method for dynamically creating a multi-media content object, comprising: gathering user session data indicative of user behavior using a webpage and/or application, analyzing the user session data to extract a context-information that includes at least a user selection, determining a multi-media content template according to the context-information, transmitting instructions to a client terminal to present a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template, receiving at least one user provided content item using the interactive tool according to instructions defined by the determined multi-media content template, automatically creating a multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and outputting the multi-media content object for presentation.

Inventors:
SEGEV DORON (IL)
ATAD EFRAIM (IL)
AFEK TOMER (IL)
BIRNBOIM MICHAEL (IL)
WAXMAN YARON (IL)
Application Number:
PCT/IL2017/050347
Publication Date:
September 28, 2017
Filing Date:
March 20, 2017
Assignee:
SHOWBOX LTD (IL)
International Classes:
G06Q30/02; G06F17/30; H04N21/80
Domestic Patent References:
WO2013063270A12013-05-02
Foreign References:
US20140365887A12014-12-11
US20130294746A12013-11-07
Attorney, Agent or Firm:
EHRLICH, Gal et al. (IL)
Claims:
WHAT IS CLAIMED IS:

1. A computer-implemented method for dynamically creating an interactive tool for creating a multi-media content object based on a determined multi-media content template, the method performed by a server in network communication with at least one client terminal, the method comprising:

gathering user session data from at least one of a webpage and an application being accessed by a certain client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application;

analyzing the user session data to extract a context-information, wherein the context-information includes at least a user selection made when accessing the at least one of webpage and application;

determining a multi-media content template according to the context-information;

transmitting instructions to the certain client terminal to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template;

receiving at least one user provided content item from the certain client terminal using the interactive tool according to instructions defined by the determined multi-media content template;

automatically creating a multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and

outputting the multi-media content object for presentation.

2. The computer-implemented method of claim 1, wherein the gathered data is based on content of at least one of the web page and the application.

3. The computer-implemented method of claim 1, wherein the gathered data includes at least one of: manually entered user data provided to the at least one of web page and application, and multi-media content items uploaded by the user to the at least one of web page and application.

4. The computer-implemented method of claim 2, further comprising automatically linking the multi-media content item to the at least one of web page and application to allow other users accessing the at least one of web page and application to view the multi-media content object on respective displays of client terminals used by the other users.

5. The computer-implemented method of claim 1, wherein the context-information is a member selected from the group consisting of: a testimonial, a property for rent or sale, a product for sale, a documentary, a television series, a service offering, geographical location, and dating.

6. The computer-implemented method of claim 1, wherein the context-information is extracted from data manually entered by a user using the interactive tool in response to presented requests for the data.

7. The computer-implemented method of claim 1, wherein the user session data includes at least one of: a historical webpage accessed previously to the currently accessed webpage, and a historical application accessed previously to the currently accessed application.

8. The computer-implemented method of claim 1, further comprising rendering a data structure storing a representation of the multi-media content object, the data structure including instructions for rendering the multi-media content object from the at least one user provided content item according to the determined multi-media content template.

9. The computer-implemented method of claim 8, wherein automatically creating the multi-media content object comprises compiling the data structure by rendering instruction code into the multi-media content object for at least one of: storage, transmission over a network, and presentation on a display.

10. The computer-implemented method of claim 9, wherein the compilation comprises: parsing the data structure to identify instructions, collecting the at least one user provided content item according to the instructions, rendering a plurality of multi-media fragments according to the instructions, and assembling the rendered multi-media fragments into the multi-media content object.

11. The computer-implemented method of claim 1, further comprising presenting instructions within the interactive tool for providing the at least one user provided content item, wherein the instructions are stored in association with at least one of the determined multi-media content template and application business logic.

12. The computer-implemented method of claim 1, wherein automatically creating the multi-media content object comprises automatically formatting and editing the at least one user provided content item according to instruction code stored in association with the determined multi-media content template.

13. The computer-implemented method of claim 1, wherein the determined multi-media content template includes at least one of: overall style, animation, show opener, show closure, title style, layout of the at least one user provided content item, sound track, sound effects, instructions to change voice recording provided by the user, overlay widgets, transition style, special effects, overall script, and background.

14. The computer-implemented method of claim 1, wherein the user session data includes one or more members selected from the group consisting of: social graph of the user, application usage history of the user, web sites visited by the user, and geographical location of the user.

15. The computer-implemented method of claim 1, wherein the at least one user provided content item includes one or more members selected from the group consisting of: manually entered text, images captured by a webcam or mobile phone camera, videos captured by a webcam or mobile phone camera, a link to content items stored on a web server, and a manual selection within a GUI.

16. The computer-implemented method of claim 1, wherein determining a multi-media content template comprises determining a plurality of template fragments and automatically assembling the template fragments to create the multi-media content template.

17. The computer-implemented method of claim 16, wherein the template fragments are selected from the group consisting of: instructions for rendering of at least one user provided content item into a portion of the multi-media content object, canned templates defining how a plurality of user provided content items are composed together into at least a portion of the multi-media content object, transitional templates defining transitions between rendering of the multi-media content object based on other rendered template fragments, and motion templates including instructions defining rendering of motion objects within rendered template fragments of the multi-media content object.

18. The computer-implemented method of claim 17, wherein at least one of the template fragments is implemented as code that when executed by a processor performs the instructions of the template fragments and the definition of the template fragments.

19. The computer-implemented method of claim 16, wherein the determined multi-media content template includes instructions for applying formatting and editing based on a common theme to the plurality of template fragments assembled into the multi-media content object.

20. The computer-implemented method of claim 1, wherein different multi-media content objects created by different users based on the same or similar determined multi-media content template are based on common attributes including at least one member selected from the group consisting of: design, structure, composition, look, and sound.

21. A system for dynamically creating an interactive tool for creating multi-media content objects based on a determined multi-media content template, comprising: a computing unit comprising:

a program store storing code; and a processor coupled to the program store for implementing the stored code, the code comprising:

code to gather user session data from at least one of a webpage and an application being accessed by a client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application, analyze the user session data to extract a context-information, and determine a multi-media content template according to the context-information;

code to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template, and to receive at least one user provided content item from the client terminal using the interactive tool according to instructions defined by the determined multi-media content template;

code to automatically create the multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and

code to output the multi-media content object for presentation.

22. The system of claim 21, further comprising a server network interface that provides communication with a web server hosting the web site, and wherein the created multi-media content object is automatically linked to the web site for presentation on respective displays of other users using other client terminals.

23. A computer program product comprising a non-transitory computer readable storage medium storing program code thereon for dynamically creating an interactive tool for creating a multi-media content object based on a determined multi-media content template, for implementation by a processor in network communication with at least one client terminal, the program code comprising instructions to:

gather user session data from at least one of a webpage and an application being accessed by a certain client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application;

analyze the user session data to extract a context-information;

determine a multi-media content template according to the context-information; transmit instructions to the certain client terminal to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template;

receive at least one user provided content item from the certain client terminal using the interactive tool according to instructions defined by the determined multi-media content template;

automatically create the multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and

output the multi-media content object for presentation.

Description:
SYSTEMS AND METHODS FOR CREATION OF MULTI-MEDIA CONTENT OBJECTS

BACKGROUND

The present invention, in some embodiments thereof, relates to multi-media content objects and, more specifically, but not exclusively, to systems and methods for automatic creation of multi-media content objects.

Users post individual content items on web sites for access by other users. For example, users may post images and videos on a social network site for viewing by other users accessing the content items on the web site. Software packages allow users to manually edit the content items, apply special effects (e.g., change the colors, apply special filters), and assemble the content items into a sequence (e.g., slide show). The software package may output a new file that may be stored on the web site and accessed by other users.

SUMMARY

According to an aspect of some embodiments of the present invention there is provided a computer-implemented method for dynamically creating an interactive tool for creating a multi-media content object based on a determined multi-media content template, the method performed by a server in network communication with at least one client terminal, the method comprising: gathering user session data from at least one of a webpage and an application being accessed by a certain client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application; analyzing the user session data to extract a context-information, wherein the context-information includes at least a user selection made when accessing the at least one of webpage and application; determining a multi-media content template according to the context-information; transmitting instructions to the certain client terminal to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template; receiving at least one user provided content item from the certain client terminal using the interactive tool according to instructions defined by the determined multi-media content template; automatically creating a multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and outputting the multi-media content object for presentation.

Optionally, the gathered data is based on content of at least one of the web page and the application.

Optionally, the gathered data includes at least one of: manually entered user data provided to the at least one of web page and application, and multi-media content items uploaded by the user to the at least one of web page and application.

Optionally, the method further comprises automatically linking the multi-media content item to the at least one of web page and application to allow other users accessing the at least one of web page and application to view the multi-media content object on respective displays of client terminals used by the other users.

Optionally, the context-information is a member selected from the group consisting of: a testimonial, a property for rent or sale, a product for sale, a documentary, a television series, a service offering, geographical location, and dating.

Optionally, the context-information is extracted from data manually entered by a user using the interactive tool in response to presented requests for the data.

Optionally, the user session data includes at least one of: a historical webpage accessed previously to the currently accessed webpage, and a historical application accessed previously to the currently accessed application.

Optionally, the method further comprises rendering a data structure storing a representation of the multi-media content object, the data structure including instructions for rendering the multi-media content object from the at least one user provided content item according to the determined multi-media content template. Optionally, automatically creating the multi-media content object comprises compiling the data structure by rendering instruction code into the multi-media content object for at least one of: storage, transmission over a network, and presentation on a display.

Optionally, the compilation comprises: parsing the data structure to identify instructions, collecting the at least one user provided content item according to the instructions, rendering a plurality of multi-media fragments according to the instructions, and assembling the rendered multi-media fragments into the multi-media content object.

Optionally, the method further comprises presenting instructions within the interactive tool for providing the at least one user provided content item, wherein the instructions are stored in association with at least one of the determined multi-media content template and application business logic.

Optionally, automatically creating the multi-media content object comprises automatically formatting and editing the at least one user provided content item according to instruction code stored in association with the determined multi-media content template.

Optionally, the determined multi-media content template includes at least one of: overall style, animation, show opener, show closure, title style, layout of the at least one user provided content item, sound track, sound effects, instructions to change voice recording provided by the user, overlay widgets, transition style, special effects, overall script, and background.

Optionally, the user session data includes one or more members selected from the group consisting of: social graph of the user, application usage history of the user, web sites visited by the user, and geographical location of the user.

Optionally, the at least one user provided content item includes one or more members selected from the group consisting of: manually entered text, images captured by a webcam or mobile phone camera, videos captured by a webcam or mobile phone camera, a link to content items stored on a web server, and a manual selection within a GUI.

Optionally, determining a multi-media content template comprises determining a plurality of template fragments and automatically assembling the template fragments to create the multi-media content template. Optionally, the template fragments are selected from the group consisting of: instructions for rendering of at least one user provided content item into a portion of the multi-media content object, canned templates defining how a plurality of user provided content items are composed together into at least a portion of the multi-media content object, transitional templates defining transitions between rendering of the multi-media content object based on other rendered template fragments, and motion templates including instructions defining rendering of motion objects within rendered template fragments of the multi-media content object.

Optionally, at least one of the template fragments is implemented as code that when executed by a processor performs the instructions of the template fragments and the definition of the template fragments. Optionally, the determined multi-media content template includes instructions for applying formatting and editing based on a common theme to the plurality of template fragments assembled into the multi-media content object.

Optionally, different multi-media content objects created by different users based on the same or similar determined multi-media content template are based on common attributes including at least one member selected from the group consisting of: design, structure, composition, look, and sound.

According to an aspect of some embodiments of the present invention there is provided a system for dynamically creating an interactive tool for creating multi-media content objects based on a determined multi-media content template, comprising: a computing unit comprising: a program store storing code; and a processor coupled to the program store for implementing the stored code, the code comprising: code to gather user session data from at least one of a webpage and an application being accessed by a client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application, analyze the user session data to extract a context-information, and determine a multi-media content template according to the context-information; code to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template, and to receive at least one user provided content item from the client terminal using the interactive tool according to instructions defined by the determined multi-media content template; code to automatically create the multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and code to output the multi-media content object for presentation.

Optionally, the system further comprises a server network interface that provides communication with a web server hosting the web site, and wherein the created multi-media content object is automatically linked to the web site for presentation on respective displays of other users using other client terminals.

According to an aspect of some embodiments of the present invention there is provided a computer program product comprising a non-transitory computer readable storage medium storing program code thereon for dynamically creating an interactive tool for creating a multi-media content object based on a determined multi-media content template, for implementation by a processor in network communication with at least one client terminal, the program code comprising instructions to: gather user session data from at least one of a webpage and an application being accessed by a certain client terminal, wherein the user session data is at least indicative of user behavior using the at least one of webpage and application; analyze the user session data to extract a context-information; determine a multi-media content template according to the context-information; transmit instructions to the certain client terminal to present on a display associated with the certain client terminal, a graphical user interface (GUI) of an interactive tool for creating a personalized multi-media content object based on the determined multi-media content template; receive at least one user provided content item from the certain client terminal using the interactive tool according to instructions defined by the determined multi-media content template; automatically create the multi-media content object by processing the at least one user provided content item according to the determined multi-media content template; and output the multi-media content object for presentation.

Unless otherwise defined, all technical and/or scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the invention pertains. Although methods and materials similar or equivalent to those described herein can be used in the practice or testing of embodiments of the invention, exemplary methods and/or materials are described below. In case of conflict, the patent specification, including definitions, will control. In addition, the materials, methods, and examples are illustrative only and are not intended to be necessarily limiting.

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

Some embodiments of the invention are herein described, by way of example only, with reference to the accompanying drawings. With specific reference now to the drawings in detail, it is stressed that the particulars shown are by way of example and for purposes of illustrative discussion of embodiments of the invention. In this regard, the description taken with the drawings makes apparent to those skilled in the art how embodiments of the invention may be practiced.

In the drawings:

FIG. 1 is a flowchart of a method for dynamically creating an interactive tool for creating multi-media content objects based on a determined multi-media template according to a context-information, in accordance with some embodiments of the present invention;

FIG. 2 is a block diagram of a system that provides a user with a dynamically created interactive tool to create multi-media content objects using a multi-media template selected according to a context-information, in accordance with some embodiments of the present invention;

FIG. 3 is a dataflow diagram conceptually depicting dataflow for dynamically creating an interactive tool for creating multi-media content object, in accordance with some embodiments of the invention;

FIGs. 4A-4D are exemplary screen shots of a GUI implementing the dynamically created interactive tool for creation of multi-media content objects using the multi-media template, in accordance with some embodiments of the present invention.

DETAILED DESCRIPTION

The present invention, in some embodiments thereof, relates to multi-media content objects and, more specifically, but not exclusively, to systems and methods for automatic creation of multi-media content objects.

An aspect of some embodiments of the present invention relates to systems (e.g., code stored in a program store executed by a processor, optionally a server in communication with client terminal(s)) and/or methods (e.g., implemented by the processor) that dynamically create an interactive tool for creating multi-media content objects (e.g., videos) based on a determined multi-media content template.

The processor determines a multi-media content template according to a context-information obtained by an analysis of user session data gathered from web page(s) visited by a client terminal and/or based on windows of applications accessed by the client terminal (which may be locally stored on the client terminal and/or remotely accessed from a server). The application may be, for example, a virtual-book reading application, a social-networking application, a camera (e.g., taking self-images of the user), and/or a geographic location application (e.g., traffic navigation application, map application) which may be based on the actual current geographic location of the user (e.g., obtained using a global positioning device). The user session data is indicative of behavior of the user, for example, the particular web pages navigated by the user, and/or selections made by the user, and/or geographic location (e.g., movement patterns) of the user. The user session data may further include data extracted from content of the web site and/or from the window of the application.

The context-information may be determined based on additional data, for example, data manually entered by the user, data collected from the web site by the processor related to the user (e.g., friends of the user, personal details of the user), and/or third party data (e.g., weather, day of the week, current events, geographic location). The context-information includes at least a selection made by the user when accessing the webpage and/or application, for example, when the user selects an option to list a property for rent, the context-information includes a property for rent. For example, when the user (who normally lives in the United States) takes a self-image at a museum in Paris, the context-information may include "my vacation".

The context-information represents the purpose of the web page and/or web site and/or application, for example, selling a property, selling an item, sending personalized messages to others, and looking for a date. Content items (e.g., still images, videos, audio records, animations, and/or pictures) are supplied by the user, optionally according to the multi-media content template (using the interactive tool requesting the content items according to instructions defined by the determined template). For example, the interactive tool may display instructions to the user on what content item to provide, and/or how to acquire the content item according to the determined multi-media content template.

The processor creates the multi-media content object by automatically formatting and/or editing the content items according to instructions stored in association with the determined multi-media content template. The multi-media object may be stored on a network server for presentation to other users, for example, the multi-media object may be stored on the website accessed by the user using the interactive tool that provided the context-information based on which the multi-media content template was determined. For example, a user may visit a website that lists real estate properties for sale or rent. The processor analyzes data gathered from the website and/or application window to determine that the context-information is listing a small office located downtown for rental. The processor determines the template based on the context-information, for example, by selecting the template from a set of available templates, and/or dynamically creating the template by assembling template fragments.

The multi-media content object, which may be a video advertising the user's office for rental, is stored in association with the property listing of the user. Other users visiting the site may view the created multi-media content object. The systems and/or methods described herein allow users to easily and quickly create professional-looking and customized multi-media content objects using commonly available cameras and content items.
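
To make the end-to-end flow concrete, the following is a minimal Python sketch of the pipeline described above (gather session data, extract a context-information, determine a template, and create the object). The function names, the keyword rule, and the data layout are illustrative assumptions, not the patented implementation.

def gather_session_data(request):
    # Placeholder: in the described system this data comes from the webpage or
    # application being accessed, the web server, and/or third-party servers.
    return {"page": request.get("page", ""), "selection": request.get("selection", "")}

def extract_context_information(session):
    # Placeholder mapping; the description mentions rules, classifiers, and look-up tables.
    return "a property for rent" if "rent" in session.get("selection", "") else "generic promotion"

def determine_template(context_information):
    # Placeholder template: which items to request and the overall style to enforce.
    return {"context": context_information, "requested_items": ["title", "photo"], "style": "real-estate"}

def create_multimedia_object(request):
    session = gather_session_data(request)              # gather user session data
    context = extract_context_information(session)      # extract the context-information
    template = determine_template(context)              # determine the template
    user_items = request.get("user_items", {})          # items collected via the interactive tool
    return {"template": template, "items": user_items}  # placeholder for the rendered object

print(create_multimedia_object({"selection": "offices for rent", "user_items": {"title": "Downtown office"}}))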

Optionally, the processor automatically renders a data structure (e.g., text file, script, code) that stores a representation of the multi-media content object. The data structure includes instructions (or may be interpreted by a compiler as instructions) to automatically create the multi-media content object by processing (e.g., editing, formatting, and/or adding additional content items) the user provided content items according to the determined multi-media content template. Rendering instruction code executed by the processor may create the multi-media content object by following the instructions stored in the data structure.
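
As a rough illustration of such a data structure and of rendering code that compiles it, the Python sketch below stores the representation as a list of instructions and then parses the instructions, collects the user provided items, renders fragments, and assembles them in order. The schema, operation names, and asset paths are assumptions made for illustration only.

# Hypothetical representation of a multi-media content object as instructions.
representation = {
    "template": "property-for-rent",
    "instructions": [
        {"op": "asset", "source": "stock/opener.mp4"},
        {"op": "user_item", "item_id": "photo_1", "effect": "warm_filter"},
        {"op": "title", "text_from": "listing_title", "style": "bold_overlay"},
        {"op": "asset", "source": "stock/closer.mp4"},
    ],
}

def compile_representation(representation, user_items):
    fragments = []
    for instruction in representation["instructions"]:  # parse the stored instructions
        if instruction["op"] == "user_item":
            fragments.append(("render", user_items[instruction["item_id"]], instruction["effect"]))
        elif instruction["op"] == "title":
            fragments.append(("text", user_items[instruction["text_from"]], instruction["style"]))
        else:
            fragments.append(("asset", instruction["source"]))
    return fragments  # the assembled fragments stand in for the final content object

print(compile_representation(representation, {"photo_1": "office.jpg", "listing_title": "Bright downtown office"}))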

The processor may determine the multi-media content template based on multiple determined template fragments. The template fragments may be automatically assembled by the processor to create the multi-media content template.
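
A simple way to picture the assembly of template fragments is shown in the Python sketch below, which interleaves a transition fragment between content fragments to form one template. The fragment kinds and the interleaving rule are assumptions for illustration.

OPENER = {"kind": "opener", "duration_s": 3}
CANNED_BODY = {"kind": "canned", "slots": ["photo_1", "photo_2"]}
MOTION = {"kind": "motion", "effect": "slow_pan", "slots": ["photo_3"]}
TRANSITION = {"kind": "transition", "style": "crossfade"}

def assemble_template(fragments):
    # Insert a transition fragment between every pair of consecutive fragments.
    ordered = []
    for index, fragment in enumerate(fragments):
        ordered.append(fragment)
        if index < len(fragments) - 1:
            ordered.append(TRANSITION)
    return {"fragments": ordered}

print(assemble_template([OPENER, CANNED_BODY, MOTION]))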

The systems and/or methods (e.g., implemented by the processor executing code instructions) described herein provide a technical solution to the technical problem of improving the process of creating multi-media content objects from individual multi-media content items. The multi-media content template (determined according to context-information) provides instructions for customization (optionally automatically) of user provided multi-media content items to create professional-looking multi-media content objects.

The systems and/or methods (e.g., implemented by the processor executing code instructions) described herein improve performance of computers (e.g., client terminal, server(s)) and/or a network. The improvement may result from a server that stores code that automatically creates multi-media content objects (e.g., videos) based on user context-information and/or individual content items (e.g., still images, audio, videos, animations, and text). The server centrally processes and automatically creates the multi-media content objects (i.e., instead of the processing being locally performed at each client terminal), which reduces the processing and/or memory requirements of the client terminals.

The systems and/or methods (e.g., implemented by the processor executing code instructions) described herein create new data in the form of the multi-media content object and/or multi-media content template, which may be stored on a network connected server and linked to a web site (or other user accessible application) hosted on a web server (or other network connected server).

Accordingly, the systems and/or methods described herein are necessarily rooted in computer technology to overcome an actual technical problem arising in multi-media content object processing, and/or network communication.

Before explaining at least one embodiment of the invention in detail, it is to be understood that the invention is not necessarily limited in its application to the details of construction and the arrangement of the components and/or methods set forth in the following description and/or illustrated in the drawings and/or the Examples. The invention is capable of other embodiments or of being practiced or carried out in various ways.

The present invention may be a system, a method, and/or a computer program product. The computer program product may include a computer readable storage medium (or media) having computer readable program instructions thereon for causing a processor to carry out aspects of the present invention.

The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, a floppy disk, and any suitable combination of the foregoing. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.

Computer readable program instructions described herein can be downloaded to respective computing/processing devices from a computer readable storage medium or to an external computer or external storage device via a network, for example, the Internet, a local area network, a wide area network and/or a wireless network. The network may comprise copper transmission cables, optical transmission fibers, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. A network adapter card or network interface in each computing/processing device receives computer readable program instructions from the network and forwards the computer readable program instructions for storage in a computer readable storage medium within the respective computing/processing device.

Computer readable program instructions for carrying out operations of the present invention may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present invention.

Aspects of the present invention are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer readable program instructions.

These computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.

The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other device to cause a series of operational steps to be performed on the computer, other programmable apparatus or other device to produce a computer implemented process, such that the instructions which execute on the computer, other programmable apparatus, or other device implement the functions/acts specified in the flowchart and/or block diagram block or blocks.

The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.

As used herein, the term web site may sometimes be interchanged with the term application or a window of an application. For example, references made to a user using client terminal 208 to access website 228 hosted by web server 226 may be interchanged with user using client terminal 208 to access an application locally stored on client terminal 208 (e.g., stored by data repository 216) and/or accessing an application stored on another network connected server.

Reference is now made to FIG. 1, which is a flowchart of a method for dynamically creating an interactive tool for creating multi-media content objects (e.g., videos) based on a determined multi-media template according to a context-information, in accordance with some embodiments of the present invention. Reference is also made to FIG. 2, which is a block diagram of a system 200 that provides a user with a dynamically created interactive tool to create multi-media content objects based on a multi-media template selected according to a context-information, in accordance with some embodiments of the present invention. System 200 may implement the acts of the method of FIG. 1, for example, by processing unit 202 of server 204 executing code instructions stored in a program store 206. It is noted that in another implementation, one or more functions performed by server 204 may be stored in data repository 216 for execution by processing unit 212 of client terminal 208, for example, as a user application. System 200 includes server 204 which may be implemented, for example, as a central server, a computing cloud, a network server, a web server, as a stand-alone unit, as code installed on an existing computer, as a hardware card inserted into an existing computer, or other implementations. Server 204 may be implemented as a hardware component (e.g., standalone computing unit), as a software component (e.g., implemented within an existing computing unit), and/or as a hardware component inserted into an existing computing unit (e.g., plug-in card, attachable unit). Server 204 may provide services to client terminals 208 by providing software as a service (SAAS), providing an application that may be installed on client terminal 208 that communicates with server 204, and/or providing functions using remote access sessions (e.g., web server accessed by a web browser installed on client terminal 208).

Server 204 is in communication with multiple client terminals 208 over a network 210 (using respective client network interface 211 and server network interface 215), for example, the internet, a private network, a local area network, and/or a cellular network, using wireless and/or wired connections.

Exemplary client terminals 208 include: a mobile device, a desktop computer, a thin client, a Smartphone, a Tablet computer, a laptop computer, a server, a web server, a wearable computer, glasses computer, and a watch computer.

Client terminal 208 includes a processing unit 212 and a program store 214 storing code instructions for execution by processing unit 212.

Processing units 202, and/or 212 may be implemented, for example, as a central processing unit(s) (CPU), a graphics processing unit(s) (GPU), field programmable gate array(s) (FPGA), digital signal processor(s) (DSP), and application specific integrated circuit(s) (ASIC). Processing unit(s) 202, and/or 212 may include one or more processors (homogenous or heterogeneous), which may be arranged for parallel processing, as clusters and/or as one or more multi core processing units, for example, distributed across multiple virtual and/or physical servers, for example, located within a computing cloud and/or at multiple network connected processing nodes.

Program stores 206, and/or 214 store code instructions implementable by respective processing units 202, and/or 212, for example, a random access memory (RAM), read-only memory (ROM), and/or a storage device, for example, non-volatile memory, magnetic media, semiconductor memory devices, hard drive, removable storage, and optical media (e.g., DVD, CD-ROM).

Client terminal(s) 208, and/or server(s) 204, may include respective data repositories 216 and 218 (e.g., memory, hard drive, optical disc, storage device, remote storage server, cloud server). Data repository 216 of client terminal 208 may store a GUI application and/or web browser for accessing server 204. Data repository 216 may store interactive tool 216A. Data repository 218 of server 204 may store a template repository 218A (that stores multi-media content templates for creation of the multi-media content object), a template fragment repository 218B (that stores multi-media fragments which may be arranged to create the multi-media content template), interactive tool 218C (which provides a GUI to users to create the multi-media content object), a multi-media object repository 218D (which stores the created multi-media content objects), rendering code 218E (which compiles a data structure to create the multi-media content object), and/or other data as described herein.
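
For orientation, the Python sketch below models the server-side repositories listed above as a simple in-memory structure; the field names mirror the reference numerals, but the layout itself is an assumption rather than the described storage design.

from dataclasses import dataclass, field

@dataclass
class ServerRepositories:
    templates: dict = field(default_factory=dict)           # 218A: multi-media content templates
    template_fragments: dict = field(default_factory=dict)  # 218B: fragments assembled into templates
    multimedia_objects: dict = field(default_factory=dict)  # 218D: created multi-media content objects
    # 218C (the interactive tool) and 218E (rendering code) are code rather than stored data.

repositories = ServerRepositories()
repositories.templates["property-for-rent"] = {"style": "real-estate", "slots": ["photo", "title"]}
print(repositories.templates)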

Client terminal 208 includes or is in communication with a user interface 220 (which may be integrated with a display 222, or be implemented as a separate device), for example, a touchscreen, a keyboard, a mouse, and voice activated software using speakers and microphone.

Client terminal 208 may include or be in communication with a camera 224, for example, a web cam, a camera built into a Smartphone, an external still camera, a video camera, and/or a virtual reality camera (e.g., 180 or 360 degree spherical camera) (which may be connected using an interface, for example, a USB (universal serial bus) cable, an HDMI (High-Definition Multimedia Interface) cable, and/or a wireless link).

Optionally, a web server (or other server) 226 hosting a website 228 storing web pages (or other application such as a social network) accesses server 204 to provide interactive tool 218C to a user using client terminal 208 accessing a webpage on website 228. Multi-media content objects created by users may be stored on server 204 (e.g., in multi-media object repository 218D) and linked to website 228 (or stored locally at web server 226).

Web server 226 communicates with server 204 and client terminal(s) 208 over network 210 using web server interface 230. Web server 226 includes processing unit 232, program store 234, and includes or is in communication with a data repository 236. Processing unit 232, program store 234, and data repository 236 may be implemented, for example, as described with reference to server 204.

The acts of the method of FIG. 1 are described with reference to a certain client terminal 208. It is understood that multiple client terminals 208 may access server 204 (or otherwise execute code instructions implementing interactive tool 216A). The description with reference to one of the client terminals 208 is for clarity and simplicity, and is not meant to be limited to the described client terminal 208.

The acts of the method of FIG. 1 are described with reference to server 204 indirectly accessed by client terminal 208 via web server 226. It is understood that other implementations may be used, for example, user of client terminal 208 directly accessing server 204 (e.g., using a web browser), user of client terminal 208 running locally stored interactive tool 216A which accesses server 204, and/or user of client terminal 208 locally accessing interactive tool 216A which locally implements the functions described with reference to server 204.

At 102, a user uses client terminal 208 to access a web page of web site 228 or an application stored on web server 226 (or locally stored on client terminal 208, or stored on another server, for example, a social network server), for example, using a web browser application, or another application stored on client terminal 208. Alternatively or additionally, the user uses client terminal 208 to access interactive tool 218C stored in server 204 (i.e., directly instead of via web site 228). In either case, the created multi-media content object may be linked to web site 228, and/or stored by client terminal 208.

Web server 226 may implement interactive tool 218C, for example, as a script executing within the code of website 228, a plug-in, an external file or applet, or accessed using a link (which may be manually selected by the user).

For example, the user may access a buy-and-sell web site or application featuring real-estate, physical items (e.g., cars, furniture, collectible items), and/or services (e.g., vacations, car wash, dentist, haircuts, and lawyers). In another example the user accesses a social network site or application. In yet another example, the user accesses a dating site or application. In yet another example, the user accesses server 204 directly (or via web server 226 or via an application locally stored on client terminal 208) to create, for example, an online television show (or series), a documentary, a music video, and/or other videos. In yet another example, the user accesses a traffic navigation application to receive instructions for driving from a hotel to a museum in a city the user is visiting. In yet another example, the user accesses a photo or video capturing application to capture images or videos at an event (e.g., wedding) the user is attending.

At 104, server 204 transmits instructions to client terminal 208 to present on display 222 a graphical user interface (GUI) of interactive tool 216A for creating a personalized multi-media content object. GUI implementing interactive tool 216A may be implemented, for example, as a separate window, within webpage of website 228, and/or other implementations.

Interactive tool 216A may be activated, for example, automatically when the user browses and/or enters data into website 228, for example, when the user selects an option to enter a listing for selling a home. Interactive tool 216A may be activated, for example, by the user manually selecting activation, for example, by accessing server 204 and/or locally executing interactive tool 216A.

Alternatively, interactive tool 216A is presented to the user after block 106 is executed, and/or after block 108 is executed. In such a case, interactive tool 216A is presented on the display of the client terminal of the user once the multi-media content template is determined according to the context-information based on an analysis of the gathered user session data.

At 106, server 204 receives user session data associated with a context-information for the multi-media content object. The user session data is indicative at least of the behavior of the user navigating the web site and/or selections made by the user when accessing the website and/or application, for example, the user typing a happy birthday message to another user (e.g., context-information includes sending a happy birthday message to the other user), the user browsing dating profiles on a dating website (e.g., context-information includes a dating profile for the user), the user taking pictures at a wedding (e.g., using a camera built into the client terminal), the user driving to a location (e.g., analyzed using a global positioning system and map application), and the user selecting offices for rent on a real estate website (e.g., context-information includes a promotion of the office available for rent by the user). The user session data for the context-information may be gathered automatically from web server 226 according to website 228 (and/or webpage) being navigated by client terminal 208, manually entered by the user (e.g., using the GUI of interactive tool 216A), automatically accessed from one or more third party servers (e.g., by crawling code or other code that accesses different servers such as weather servers, event servers, news servers, and/or other servers), automatically received based on feeds or other network messages transmitted by web server 226 or other servers to server 204, based on past user behavior and/or current user behavior patterns (e.g., according to an analysis of user social network web site data, a history of websites the user visited, and/or a history of previously created multi-media content objects by the same user or another user).

The user session data may include one or more of: the currently accessed webpage (e.g., the webpage open in the browser running on the client terminal of the user), the currently accessed application (e.g., the application running on the client terminal of the user), one or more historical web pages accessed previously to the currently accessed webpage, and one or more historical applications accessed previously to the currently accessed application.

The user session data for the context-information and/or the context-information itself may be represented by text, one or more words, and/or other data structures, for example, a single word, a hierarchy of words, a record, values for parameters, and/or other data structures. For example, data for the context-information may be automatically extracted from content of the website (or other data), from tags associated with the website (or other data), from user entered data, from metadata associated with the website (or other data), and/or by other methods. The context-information may be determined (e.g., by server 204) based on the data, for example, using a set-of-rules, a statistical classifier, a look-up-table, a hash function, or other mapping methods.
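
The sketch below illustrates one of these mapping methods, a simple set-of-rules (look-up) that maps keywords found in the session data to a context-information label; the keywords and labels are assumptions for illustration and not an exhaustive or specified rule set.

# Hypothetical keyword-to-context rules; a statistical classifier could replace this table.
RULES = [
    ("for rent", "a property for rent"),
    ("for sale", "a property for sale"),
    ("happy birthday", "a birthday message"),
    ("dating", "a dating profile"),
]

def determine_context_information(session_text):
    text = session_text.lower()
    for keyword, context_label in RULES:
        if keyword in text:
            return context_label
    return "generic promotion"  # fallback when no rule matches

print(determine_context_information("User selected: small offices for rent, downtown"))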

The context-information may represent the goal, purpose, and/or type of the created multi-media content object. The context-information may be based on the goal, purpose, and/or type of web site, web page, or application being accessed by the user using the client terminal. Exemplary context-information includes: a testimonial (e.g., about a product or service being offered by web site 228), a property for rent or sale (e.g., based on the user visiting a buy and sell website), a product for sale, a documentary, a television series, a video message about a certain social theme (e.g., birthday, wedding, anniversary, and college graduation, for example, based on the user visiting a social networking site), a service offering, and a dating video of the user promoting him/herself (e.g., based on the user accessing a dating website).

The context-information may be hierarchical and/or multi-dimensional, and/or narrower, for example (from general or single level, to more specific and/or multi-dimensional or multi-level): a property for sale, a house for sale, a house with a garden for sale in a community with young families, a house with a garden for sale in a community with young families when 67% of the residents have academic degrees and 82% of households own two cars. The context-information may represent, for example, increased user engagement (e.g., additional dates), content enrichment (e.g., of a social network site, such as by presenting a happy birthday video), and/or commercial benefits using ads (e.g., to try and sell a product and/or service).

Optionally, the context-information is automatically determined according to content of the website 228 accessed by client terminal 208. For example, the user of client terminal 208 accesses a website for posting properties for sale, and fills in a questionnaire about the type of property being sold. Alternatively or additionally, the context-information is extracted from data manually entered by the user using interactive tool 216A in response to presented requests for the data, and/or from data and/or content items the user provided (e.g., manually entered and/or uploaded) to the web site, for example, the GUI on the website presents a questionnaire about the property being sold for the user to fill out. Interactive tool 216A may request additional detail from the user using the GUI and/or trigger code (or access a database) to obtain additional data, for example, crime statistics of the street on which the house is being sold (e.g., from a data server), nearby amenities, socioeconomic ranking of the neighborhood, and traffic patterns.

Optionally, the context-information is automatically extracted from user session data. The user session data may store user behavior, contextual data about the user, user links, user preferences, and friends of the user. The user session data may be stored by server 204 in data repository 218 (e.g., in a database, for example, tracking user behavior associated with different websites), locally by client terminal 208 (e.g., in data repository 216, for example, tracking user behavior in using client terminal 208), and/or by website 228 (e.g., in data repository 236, for example, tracking user behavior on website 228).

Exemplary user session data may include one or more of: a social graph of the user (e.g., created by server 204 based on an analysis of user related data and/or extracted from a social network server), application usage history of the user (e.g., tracked by code executed locally on client terminal 208), web sites visited by the user (e.g., tracked by code executed locally on client terminal 208 and/or by web server 226 and/or by server 204), and geographical location of the user (e.g., extracted from a geographical positioning element locally located on client terminal 208 (e.g., GPS), derived based on access details of client terminal 208 (e.g., what cellular site is being used), and/or based on manually entered user data).
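
For illustration, the gathered user session data might be represented as a record such as the following sketch; the keys and example values are assumptions rather than the actual data layout used by server 204.

```python
# Hypothetical example of gathered user session data (keys are illustrative).
user_session_data = {
    "social_graph": ["friend_a", "friend_b"],           # e.g., extracted from a social network server
    "application_usage_history": ["listing_app"],       # e.g., tracked locally on client terminal 208
    "visited_websites": ["realestate.example.com"],     # e.g., tracked locally and/or by web server 226
    "geographical_location": {"lat": 51.5, "lon": -0.12},  # e.g., from GPS or cell-site data
    "page_tags": ["real-estate", "for-sale"],
}
```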

At 108, server 204 determines a multi-media content template according to an analysis of the data associated with the context-information. The multi-media content template may be determined according to the determined context-information. The multi-media content template includes instructions for creating a customized multi-media content object in accordance with the context-information. The instructions provided by the multi-media content template are designed to provide a professional-looking object (e.g., video), even when created by amateur users using a webcam.

The multi-media content template may be stored in template repository 218A, for example, as scripts, as records, as values defined for parameters, as code, as database entries, or other implementations. For example, the template may be implemented as a GUI page with fields that are filled in by the user, for example, a title and image. For example, the template is implemented as code that enforces a common parameter on one or more objects, for example, a common look, feel, and/or structure, for example, a common color, a vintage look, or a common font. In another example, the template and/or template fragment is implemented as code instructions that when executed by a processor perform functions as a generalized template, for example, code to request a content item from a user using a pop-up user interface widget. The code performs the instructions defined by the templates and/or template fragments, or implements the definitions of the templates and/or template fragments described herein. Alternatively or additionally, the multi-media content templates may be dynamically created according to the context-information, optionally assembled from template fragments (which may be stored in template fragment repository 218B).
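
A minimal sketch, assuming a template stored as values defined for parameters together with code that enforces a common parameter (here, a shared title font and color); all identifiers are hypothetical.

```python
# Hypothetical template stored as parameter values, plus code that enforces a
# common parameter on every title it formats.
template = {
    "template_id": "house_for_sale_v1",
    "theme": "vintage",
    "title_font": "Serif-Bold",
    "title_color": "#5b4636",
    "fields": ["title", "main_image", "presenter_clip"],
}

def apply_common_title_style(title_text: str, tmpl: dict) -> dict:
    """Return a title element formatted with the template's common parameters."""
    return {
        "text": title_text,
        "font": tmpl["title_font"],
        "color": tmpl["title_color"],
    }
```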

The multi-media content template may be selected and/or dynamically created, for example, based on a statistical classifier that maps the context-information and/or data associated with the context-information to a certain multi-media content template or a set of template fragments (which are assembled into the template), by a mapping function, based on manual user selection (e.g., presenting different templates, such as a set of initially determined templates for the user to select from), by code analyzing the context-information and/or data, by a look-up table, and/or other methods.
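
The look-up table option might, for example, resemble the following sketch, in which context-information maps either to a single template identifier or to a set of template fragments; all identifiers are hypothetical.

```python
# Hypothetical look-up-table selection of a template (or template fragments)
# from context-information.
TEMPLATE_LOOKUP = {
    "property_for_sale": "real_estate_template_v2",
    "testimonial": "testimonial_template_v1",
    "birthday_greeting": ["opener_confetti", "photo_montage", "closer_wishes"],
}

def select_template(context: str, default: str = "generic_template_v1"):
    return TEMPLATE_LOOKUP.get(context, default)
```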

The multi-media content template defines automatic instructions for implementation by code to edit user provided multi-media content items, for example, to apply special effects (e.g., applying different filters, adjusting colors, applying formatting, adding transitions, adjusting lighting, and applying a background). Alternatively or additionally, the multi-media content template defines insertion of additional stored content provided by other users and/or by entities other than the user providing the user content items (e.g., locally stored on data repository 218, accessed from another remote server such as content repository server 240), for example, images, sound effects, videos, opening scenes, and closing scenes. Alternatively or additionally, the multi-media content template defines manual instructions to the user. The manual instructions may include, for example, instructions on what to film, how to film (e.g., angle, lighting), where to film, and/or what to say (e.g., presenting text to say on the screen, printing the text to read). The manual instructions may be presented on display 222 as text, printed out, or as audio instructions (e.g., using speakers).

Exemplary instructions defined by the determined multi-media content template include one or more of: overall style, animation, show opener, show closure, title style, composition layout of user provided content item(s), sound track, sound effects, mixing, instructions to change voice recording provided by the user, overlay widgets, transition style, special effects, overall script, and background. Instructions may apply design themes to provide an overall (or for certain portions) look, for example, a television-like overlay may look different in a comic book theme in comparison to a vintage theme. The design theme may be applied to the same user content items, providing a different look to the same content items. Different multi-media content objects created by different users based on the same or similar determined multi-media content template are based on exemplary common attributes including one or more of: design, structure, composition, look, and sound.

Optionally, multiple template fragments are determined and automatically assembled to create the multi-media content template. Each template fragment may include its own set of instructions (e.g., stored as code implementable by the processing unit of the server) to apply to the user content items associated with the respective template fragment. Sub-sets of template fragments, or the entire set of fragments assembled to create the multi-media content template, may include instructions to apply to all user content items associated with the sub-set and/or globally to all user content items, for example, to provide an overall theme or look, and/or a theme or look to certain portions of the created object. Formatting instructions define how the different template fragments (and/or the objects created based on the instructions of the template fragments) are assembled and/or formatted, for example, a design theme, insertion of additional content items (e.g., from storage and/or from external servers other than the client terminal of the user).

Each template fragment may represent, for example, a time portion of the created multi-media content object, a scene, a single instruction to apply to a single user provided content item, a set of instructions to apply to a single user provided content item, and/or a single or set of instructions to apply to multiple user provided content items.

The determined multi-media content template includes instructions for applying formatting and/or editing based on a common theme to the determined template fragments assembled to create the multi-media content object.

Exemplary template fragments include: instructions for rendering one or more user provided content item(s) into a portion of the multi-media content object (e.g., a scene, a range of time), canned template fragments defining how multiple user provided content items are composed together into at least a portion of the multi-media content object (e.g., arrangement of order of scenes, special arrangement of content items appearing simultaneously on the display. In another example, a fragment of a video in which a person is speaking is segmented from the original background and placed on a different background), transitional templates defining transitions between rendering of portions of the multi-media content object (e.g., scene transitions using split screens and overlays) based on other rendered template fragments, and motion templates including instructions defining rendering of motion objects (and/or animations) within rendered template fragments of the multi-media content object (e.g., used for clip opening/closing, animation of titles, and animation of other widgets and/or content items).
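
For illustration, the following sketch assembles hypothetical template fragments into a multi-media content template while applying a common design theme to all fragments; the fragment kinds and identifiers are assumptions.

```python
# Hypothetical fragments and assembly of a template with a common theme.
fragments = [
    {"kind": "opener", "duration_s": 3, "animation": "title_slide_in"},
    {"kind": "scene", "source": "user_clip_kitchen", "effect": "color_grade"},
    {"kind": "transition", "style": "split_screen"},
    {"kind": "scene", "source": "user_clip_garden", "effect": "color_grade"},
    {"kind": "closer", "duration_s": 2, "animation": "logo_fade_out"},
]

def assemble_template(frags: list, theme: str) -> dict:
    """Combine fragments and apply a common design theme to all of them."""
    return {"theme": theme, "fragments": [dict(f, theme=theme) for f in frags]}

assembled_template = assemble_template(fragments, theme="vintage")
```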

At 110, one or more user provided content items are received by server 204.

The user provided content items are based on the instructions associated with the determined multi-media content template. Exemplary user provided content items include: still images captured by the user (e.g., using camera 224), videos (e.g., using camera 224), audio recordings (e.g., using microphone user interface 220), text (e.g., typed using keyboard or touch screen user interface 220), uploads of previously created stored content items (e.g., stored in data repository), links to previously created content items stored online (e.g., in a computing cloud, on web server 226, on a social network server, and/or on an external storage device), and a selection made in a GUI (e.g., checking boxes of lists of items, and clicking on icons).

The user provided content items may be transmitted from client terminal 208 using interactive tool 216A, for example, recording using the GUI based on the instructions provided by the designated multi-media content template and transmitted to server 204. The user provided content items may be pre-created items already stored on server 204 (e.g., in data repository 218). Links to externally stored content items may be transmitted from client terminal 208 (or from an external server) to server 204. The content items may be downloaded from the external server to server 204.

Instructions for capturing or providing the content items may be presented within the GUI of interactive tool 216A before or simultaneously with creation of the content items. For example, text may appear within the GUI while the webcam is recording, to guide what the user should say and how the user should face the camera. For example, the text may present on the screen the instructions "The camera will turn on in 10 seconds. Face straight on and read the upcoming text". The following text then appears for the user to read while the camera is recording "Hello my name is {say your name}. I am selling my home. Let me show you around".

At 112, server 204 automatically creates the multi-media content object by processing the user provided content item(s) according to the determined multi-media content template. Server 204 automatically formats, edits, and/or adds additional content to the user provided content item(s) according to instruction code stored in association with the determined multi-media content template.

The content object may include references to external data sources, for example, media files, template fragments, and records. The external data sources may be stored, for example, in an external database stored on an external storage device, which may be located on a network node located in a different geographical location, within a computing cloud, or on a server.

Optionally, server 204 creates a data structure (e.g., script, text file, code, based on JSON (JavaScript Object Notation), and/or XML (Extensible Markup Language)) storing a representation of the multi-media content object based on the instructions associated with the multi-media content template applied to the user provided content item(s). The data structure includes instructions for rendering the multi-media content object from the content item according to the determined multi-media content template.

The data structure may include temporal information (e.g., mark-in, mark-out) of fragments and/or portions of the multi-media content object, composition information (e.g., coordinates, z-index), and special effects to be applied on certain fragments and/or portions (e.g., scenes, range of time) of sets of fragments and/or sets of portions of the multi-media content object.
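
A minimal sketch of such a data structure, here as a Python dictionary that could be serialized to JSON, follows; the keys (e.g., mark_in, z_index, effects) mirror the fields described above, but the exact schema, names, and values are hypothetical.

```python
import json

# Hypothetical clip specification data structure (keys are illustrative) that
# could be serialized to JSON and later compiled by rendering instruction code.
clip_specification = {
    "template_id": "house_for_sale_v1",
    "fragments": [
        {
            "content_item": "user_clip_kitchen.mp4",
            "mark_in": 0.0,          # temporal information, in seconds
            "mark_out": 8.5,
            "composition": {"x": 0, "y": 0, "z_index": 1},
            "effects": ["color_grade", "vintage_filter"],
        },
        {
            "content_item": "title_card",
            "text": "3-bedroom house with garden",
            "composition": {"x": 40, "y": 600, "z_index": 2},
        },
    ],
    "soundtrack": "acoustic_theme.mp3",
}

specification_document = json.dumps(clip_specification, indent=2)
```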

The multi-media content object may be automatically created by compiling the data structure using rendering instruction code 218E into the multi-media content object, optionally a playable file based on one or more standard formats (e.g., in a web browser).

Rendering instruction code 218E may compile the data structure, for example, by parsing the data structure to identify instructions (e.g., based on a recognizable instruction syntax), collecting the user provided content item and/or other content items (e.g., by downloading the content item using a link to a remote server) according to the identified instructions (e.g., matching each content item to one or more instructions), rendering multi-media fragments by processing the user provided content items according to the instruction, and assembling the rendered multi-media fragments into the multi-media content object according to the identified instructions.
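
The compile flow might be sketched as follows, assuming a specification structured like the example above; the helper functions are hypothetical stand-ins for rendering instruction code 218E rather than real library calls.

```python
# Hypothetical compile flow: collect referenced items, render each fragment,
# and assemble the rendered fragments in the order the specification defines.

def collect_item(reference: str) -> str:
    # Placeholder for collecting a content item, e.g., downloading it from a
    # remote server using a link identified in the specification.
    return f"<media:{reference}>"

def render_fragment(fragment: dict, item: str) -> str:
    # Placeholder for rendering one fragment by applying its listed effects.
    effects = ",".join(fragment.get("effects", []))
    return f"rendered({item}|effects={effects})"

def compile_clip(spec: dict) -> list:
    rendered = []
    for fragment in spec["fragments"]:
        item = collect_item(fragment.get("content_item", ""))
        rendered.append(render_fragment(fragment, item))
    return rendered
```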

At 114, the created multi-media content object is outputted by server 204 for presentation. Optionally, the created multi-media content object is a single file (or multiple associated files).

The multi-media content object may be stored in multi-media object repository 217D, transmitted over network 210 for local storage on client terminal 208 and/or storage on web server 226, and/or presented on display 222 associated with client terminal 208 (that created the multi-media content object).

The multi-media content object may be made available for presentation to other client terminals 208. Optionally, the multi-media content object is automatically linked (e.g., by web server 226) to the web page of website 228 (e.g., accessed by the user and from which interactive tool 218C was activated, as described with reference to block 102). For example, other users accessing website 228 to view real-estate listings may view the multi-media content object on their respective displays 222.

The multi-media content object may be, for example, posted online (e.g., in web site 228) as a video post, embedded in feeds, and/or transmitted to external servers (e.g., online video servers) with associated metadata.

It is noted that the order of the actions described with reference to FIG. 1 is exemplary and not necessarily limiting. One or more of blocks 102-114 may be performed as listed in FIG. 1, in parallel, and/or in another sequence, for example, the transmission described with reference to block 104 and the determining described with reference to block 108 may be interchanged (i.e., performing the determining in block 104 and the transmitting in block 108), for example, when the determining of the multi-media content template is performed by the client terminal. In another example, when a common GUI used to guide creation of the multi-media content object (block 112) is used by the multi-media content templates, the common GUI may be presented before the determining and transmitting acts. In such a scenario, receiving the user provided content items (block 110) may be performed before the determining of block 108.

Reference is now made to FIG. 3, which is a dataflow diagram conceptually depicting dataflow for dynamically creating an interactive tool for creating multi-media content object, in accordance with some embodiments of the invention. The dataflow diagram of FIG. 3 relates to flow of data as discussed with reference to the method described with reference to FIG. 1 and/or system 200 described with reference to FIG. 2.

User 302 uses a client terminal to access application 304 (e.g., web page hosted by a web server, a dedicated application for creating multi-media content objects hosted by a server and/or locally stored on the client terminal, and/or other applications). A session 306 of a dynamically created interactive tool for creating multi-media content objects is initiated. Application 304 analyzes user contextual data 308 (e.g., social graph, application navigation details, web browsing history, and geographical location) and/or application contextual data 310 (e.g., commercial offerings, weather, date, and time) to identify a context-information for determination of a multi-media content template. A message including a request for user input and/or user provided content items (e.g., media items) 312 is transmitted from application 304 to user 302. User 302 provides the data and/or content items 314, which may be stored in a user upload repository 316. Clip generation logic code 318 receives clip generation resources 320, including the multi-media content template based on the identified context-information, from a clip generation service 322 (e.g., server). The template includes instructions for formatting, applying design themes, arranging clip template fragments, and/or inserting pre-stored media content items from a repository.

The determined template is provided to clip generation logic 318 (e.g., the template may be accessed using an index 324 from a set of predefined templates). Clip generation logic 318 creates a clip specification document 326 (e.g., file) that includes instructions to apply the determined template to the user provided content items. Clip specification document 326 is provided to a clip renderer 328 (e.g., code) that compiles the multi-media content object (e.g., clip) according to clip specification document 326. The compiled multi-media content object is stored in a compiled clips repository 330 and made available for distribution 332 and access to other users.

An example is now discussed to illustrate the systems and/or methods described herein, for example, system 200 as described with reference to FIG. 2 and/or the method described with reference to FIG. 1.

A user logs into a lodging rental listing marketplace application (e.g., web site) via a mobile device. The user is a seller with a lodging property to list for rent. The marketplace site provides the dynamically created interactive tool as a GUI presented on the mobile device for listing assets. The interactive tool gathers the following data from the user: property location, type of property (e.g., condo, studio apartment, house, townhome) and optionally other characteristics (e.g., size, view type, amenities), photos/clips (e.g., of the kitchen, living room, bedrooms, bathroom), acceptable rent duration, and pets allowed/disallowed.

The application server selects a multi-media content template from different available predefined templates. The application uses business logic code instructions to select the best template according to data provided by the user. For example, a template suitable for houses is selected over a template suitable for condos, a template for a long term rental is selected over a template for a short term rental, and a template is selected according to properties of the provided user media items, such as based on amount, quality, soundtrack or governing color scheme.
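
For illustration, the business logic might score candidate templates against the gathered listing data as in the following sketch; the template identifiers, fields, and scoring rules are hypothetical.

```python
# Hypothetical business-logic scoring to pick the best predefined template
# according to the user-provided listing data.
CANDIDATE_TEMPLATES = ["house_long_term_v1", "condo_short_term_v1", "generic_rental_v1"]

def score_template(template_id: str, listing: dict) -> int:
    score = 0
    if listing["property_type"] == "house" and "house" in template_id:
        score += 2
    if listing["rent_duration_months"] >= 6 and "long_term" in template_id:
        score += 1
    if len(listing.get("photos", [])) >= 5 and "generic" not in template_id:
        score += 1
    return score

def select_best_template(listing: dict) -> str:
    return max(CANDIDATE_TEMPLATES, key=lambda t: score_template(t, listing))

best = select_best_template({
    "property_type": "house",
    "rent_duration_months": 12,
    "photos": ["kitchen.jpg", "living.jpg", "bed1.jpg", "bed2.jpg", "garden.jpg"],
})
# best == "house_long_term_v1"
```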

The template provides instructions defining characteristics of the entire created multi-media content object or a portion thereof, such as soundtrack, duration, opener/ending style and graphics, titles style, transition style, effects to be applied and script.

The application server customizes the determined template by creating an object specification document. Customization may include embedding media from a remote entity related to the location of the property (e.g., photos of Big Ben & the Thames for a condo near Westminster, or the Eiffel Tower & Arc de Triomphe for a Paris property). Content items provided by the user are embedded and/or have special effects applied according to instructions defined by the determined template. The application server may prepare multiple multi-media content object variants, which may be based on multiple different determined templates, for the user to select the final multi-media content object from. The multi-media content object is included in the listing of the property for promotional purposes.
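
A minimal sketch of such location-based customization, assuming a small stock-media table keyed by location; the landmark file names and the effect identifier are hypothetical.

```python
# Hypothetical customization of the specification document with stock media
# related to the property's location.
LOCATION_STOCK_MEDIA = {
    "london_westminster": ["big_ben.jpg", "thames.jpg"],
    "paris": ["eiffel_tower.jpg", "arc_de_triomphe.jpg"],
}

def customize_specification(spec: dict, location: str) -> dict:
    extra = LOCATION_STOCK_MEDIA.get(location, [])
    spec.setdefault("fragments", []).extend(
        {"content_item": item, "effects": ["ken_burns_pan"]} for item in extra
    )
    return spec
```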

In another example, which is a variation of the above described example, the user may access a website hosting the listings. When the user is a past renter, the determined template may relate to creation of a multi-media content object based on the context-information of a testimonial. The user may provide photos of the property during the rental period, and audio or video recording of the user providing a review of the property. A testimonial multi-media content template is selected.

In yet another example, which is a variation of the described example, the application is a social network site that allows users to post content (e.g. messages) for public distribution, personal distribution, and/or group distribution. Suitable templates may be selected according to context-information, for example, to provide a status update, provide a wisdom of the day message, a product review, a top 3 favorite places to visit, gossip about celebrities, and other templates which may be based on user provided context-information data and/or automatically identified context-information data.

Reference is now made to FIGs. 4A-4D, which are exemplary screen shots of a GUI implementing the dynamically created interactive tool for creation of multi-media content objects using the multi-media template, in accordance with some embodiments of the present invention.

FIG. 4A is a screenshot of a GUI interactive tool 400. The user may enter data into fields of tool 400, for example, entering the presenter's name in field 402 and the name of the show in field 404. Tool 400 may dynamically present a preview of the created multi-media content object in a window 406 as the user provides data. Tool 400 may provide additional customization options, such as the option to provide opening sound 408. A progress bar 410 shows progress of the user in creating the multi-media content object.

FIG. 4B is another screenshot 420 of GUI interactive tool 400 of FIG. 4A that directs the user to create the story. GUI 420 follows the GUI of FIG. 4A as indicated by the progress in progress bar 410. The user may provide content items using tool 420, by clicking on image box 422 to add stored images, clicking on text box 424 to add text, and clicking on person silhouette 426 to record him/her using a video camera.

FIG. 4C is another screenshot 430 of GUI interactive tool 400 of FIG. 4A that helps select stored content for insertion into the multi-media content object. The user may type search terms into a search field 432 to search for content items (e.g., videos, animations, rendered images), which may be stored remotely on an external server.

FIG. 4D is another screenshot 440 of GUI interactive tool 400 of FIG. 4A that presents a summary and preview of components of the multi-media content object. Tool 400 may present previews of the user content items provided by the user using GUI 420 of FIG. 4B. Screen preview window 442 presents a preview of the current frame of the created multi-media content object. Screen preview window 444 presents a preview of one or more of the user provided content items. Screen preview window 446 presents a preview of the background. Frame window 448 presents a frame-by-frame preview of components of the created multi-media content object.

The descriptions of the various embodiments of the present invention have been presented for purposes of illustration, but are not intended to be exhaustive or limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments, the practical application or technical improvement over technologies found in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

It is expected that during the life of a patent maturing from this application many relevant multi-media content objects will be developed and the scope of the term multimedia content object is intended to include all such new technologies a priori.

As used herein the term "about" refers to ± 10 %.

The terms "comprises", "comprising", "includes", "including", "having" and their conjugates mean "including but not limited to". This term encompasses the terms "consisting of" and "consisting essentially of".

The phrase "consisting essentially of" means that the composition or method may include additional ingredients and/or steps, but only if the additional ingredients and/or steps do not materially alter the basic and novel characteristics of the claimed composition or method.

As used herein, the singular form "a", "an" and "the" include plural references unless the context clearly dictates otherwise. For example, the term "a compound" or "at least one compound" may include a plurality of compounds, including mixtures thereof.

The word "exemplary" is used herein to mean "serving as an example, instance or illustration". Any embodiment described as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments and/or to exclude the incorporation of features from other embodiments. The word "optionally" is used herein to mean "is provided in some embodiments and not provided in other embodiments". Any particular embodiment of the invention may include a plurality of "optional" features unless such features conflict.

Throughout this application, various embodiments of this invention may be presented in a range format. It should be understood that the description in range format is merely for convenience and brevity and should not be construed as an inflexible limitation on the scope of the invention. Accordingly, the description of a range should be considered to have specifically disclosed all the possible subranges as well as individual numerical values within that range. For example, description of a range such as from 1 to 6 should be considered to have specifically disclosed subranges such as from 1 to 3, from 1 to 4, from 1 to 5, from 2 to 4, from 2 to 6, from 3 to 6 etc., as well as individual numbers within that range, for example, 1, 2, 3, 4, 5, and 6. This applies regardless of the breadth of the range.

Whenever a numerical range is indicated herein, it is meant to include any cited numeral (fractional or integral) within the indicated range. The phrases "ranging/ranges between" a first indicated number and a second indicated number and "ranging/ranges from" a first indicated number "to" a second indicated number are used herein interchangeably and are meant to include the first and second indicated numbers and all the fractional and integral numerals therebetween.

It is appreciated that certain features of the invention, which are, for clarity, described in the context of separate embodiments, may also be provided in combination in a single embodiment. Conversely, various features of the invention, which are, for brevity, described in the context of a single embodiment, may also be provided separately or in any suitable subcombination or as suitable in any other described embodiment of the invention. Certain features described in the context of various embodiments are not to be considered essential features of those embodiments, unless the embodiment is inoperative without those elements.

Although the invention has been described in conjunction with specific embodiments thereof, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, it is intended to embrace all such alternatives, modifications and variations that fall within the spirit and broad scope of the appended claims.

All publications, patents and patent applications mentioned in this specification are herein incorporated in their entirety by reference into the specification, to the same extent as if each individual publication, patent or patent application was specifically and individually indicated to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting.




 