

Title:
GENERATION OF COMBINED VIDEOS
Document Type and Number:
WIPO Patent Application WO/2016/134415
Kind Code:
A1
Abstract:
A method of generating video data on a portable electronic device, the method including steps of: the portable electronic device accessing pre-generated data representing a pre-generated video synchronized with pre-generated audio; the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device; and the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, and the user-generated photo or video.

Inventors:
PALMER DECLAN LEWIS COUSINS (AU)
BERWICK STUART PAUL (AU)
PALMER BARRY JOHN (AU)
Application Number:
PCT/AU2016/050117
Publication Date:
September 01, 2016
Filing Date:
February 22, 2016
Assignee:
ZUMA BEACH IP PTY LTD (AU)
International Classes:
H04N5/262; G11B27/022; G11B27/034; H04N5/272; H04N5/28; H04N21/85
Foreign References:
US20100031149A12010-02-04
US20120308209A12012-12-06
US20140245334A12014-08-28
US20130305287A12013-11-14
Other References:
DEBORD S.: "3 Steps to Professional Quality Listing Videos From Your Smartphone", 17 October 2014 (2014-10-17), Retrieved from the Internet [retrieved on 20160329]
Attorney, Agent or Firm:
DAVIES COLLISON CAVE (Melbourne, Victoria 3000, AU)
Claims:

THE CLAIMS DEFINING THE INVENTION ARE AS FOLLOWS:

1. A method of generating video data on a portable electronic device, the method including steps of:

the portable electronic device accessing pre-generated data representing a pre-generated video synchronized with pre-generated audio;

the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device; and

the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, and the user-generated photo or video.

2. The method of claim 1, including the portable electronic device accessing transition data representing a transition image or transition video, and generating the combined data by including the transition image or transition video in the combined video.

3. The method of claim 2, wherein the generating step includes the portable electronic device generating a transition component from the pre-generated video to the user-generated photo or video, or from the user-generated photo or video to the pre-generated video, based on a transition shape in the transition image, wherein the transition image defines one or more two-dimensional (2D) shapes, wherein the transition shape includes a plurality of regions, wherein the transition image includes pixel values defining masks in the transition image, wherein the generating step includes a step of generating a masking transition between the user-generated video or image and the video based on the image mask, and/or, wherein the portable electronic device accesses the transition data on the remote server system using a telecommunications network.

4. The method of claim 2 or 3, including a step of generating intermediate transition data representing an intermediate transition video including a plurality of video frames based on the transition image, wherein the generating step includes a step of combining the intermediate transition video with at least the portion of each of the pre-generated video and the user-generated image or video.

5. The method of any one of claims 2 to 4, wherein the combined data represent a plurality of seconds of the pre-generated video, a plurality of seconds of the transition image or video, and a plurality of seconds of the user-generated image or video, synchronized with the pre-generated audio.

6. The method of any one of claims 1 - 5, including the portable electronic device generating combined data by synchronizing at least a portion of the pre-generated audio with each of the pre-generated video, and the user-generated photo or video.

7. The method of any one of claims 1 - 5, including the portable electronic device generating combined data by synchronizing at least a portion of user-generated audio, from the UGC, with each of the pre-generated video, and the user-generated photo or video.

8. The method of any one of claims 1 - 7, including the portable electronic device accessing the pre-generated video from a remote server system.

9. The method of any one of claims 1 - 7, including the portable electronic device accessing the pre-generated video on the portable electronic device, wherein the pre-generated video is generated from a user-generated photo or video on the portable electronic device.

10. The method of any one of claims 1 - 9, including a step of the portable electronic device fading in the pre-generated audio over a fade-in duration at a start of the combined video to generate the combined data; and/or including a step of the portable electronic device fading out the pre-generated audio over a fade-out duration at an end of the combined video to generate the combined data.

11. The method of any one of claims 1 - 9, including a step of the portable electronic device fading in UGC audio in the UGC data over a fade-in duration at a start of the combined video to generate the combined data; and/or including a step of the portable electronic device fading out the UGC audio over a fade-out duration at an end of the combined video to generate the combined data.

12. The method of any one of claims 1 to 11, including a step of the portable electronic device cross-fading the pre-generated audio to the user-generated audio, and/or cross-fading the user-generated audio to the pre-generated audio, over at least one cross-fade duration in at least one corresponding intermediate portion of the combined video to generate the combined data.

13. The method of any one of claims 1 to 12, including a step of the portable electronic device accessing, on the portable electronic device or from the remote server system, watermark data representing a watermark image or video, and the generating step including the portable electronic device inserting the watermark image or video into the combined video.

14. The method of claim 13, wherein the watermark is inserted into at least a portion of the pre-generated video, and/or at least a portion of the user-generated image or video, wherein the watermark image or video is placed over the user-generated video or image, wherein the watermark image or video is anywhere on at least one portion of the user-generated image or video and/or on the pre-generated video, and/or wherein the watermark is on the bottom or on the right-hand side or in the bottom right-hand corner of the user-generated video or image.

15. The method of claim 13 or 14, including a step of generating intermediate watermark data representing an intermediate watermark video including a plurality of video frames based on the watermark image, and the generating step including a step of combining the intermediate watermark video with at least the portion of each of the pre-generated video and the user-generated image or video.

16. The method of any one of claims 1 to 15, including a step of generating intermediate UGC data representing an intermediate UGC video including a plurality of video frames based on the user-generated image, and the generating step including a step of combining the intermediate UGC video with at least the portion of the pre-generated video.

17. A method for generating a combined video, the method including steps of:

a portable electronic device accessing, on the portable electronic device, user-generated content (UGC) data that represent a user-generated image or video;

the portable electronic device accessing, from a remote server system, externally generated content (EGC) data that represent a pre-generated video including pre-generated audio;

the portable electronic device accessing, on the portable electronic device or from the remote server system, transition data that represent a transition image or video; and the portable electronic device generating combined data representing a combined video by combining at least a portion of each of the user-generated image or video and the pre-generated video.

18. A method of generating video data, the method including steps of:

a portable electronic device accessing pre-generated data representing a pre- generated video synchronized with pre-generated audio;

the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device;

the portable electronic device accessing transition data representing a transition image; and

the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, the user-generated photo or video, and the transition image, synchronised with at least a portion of the pre-generated audio.

19. An apparatus configured to perform the method of any one of claims 1 - 18.

20. A system configured to perform the method of any one of claims 1 - 18.

21. Machine-readable media including machine-readable instructions that control one or more electronic microprocessors to perform the method of any one of claims 1 - 18.

22. A portable electronic device configured to perform the method of any one of claims 1 - 18.

Description:
GENERATION OF COMBINED VIDEOS

TECHNICAL FIELD

[01] The present invention generally relates to apparatuses, devices, systems, machine-readable media, and methods for generation of combined videos by combining at least portions of a plurality of pre-existing images and/or videos.

BACKGROUND

[02] Existing professional video-editing and generation packages used by movie studios, or video production houses, are generally inefficient and difficult to use for small-scale applications, e.g., users without technical training with hand-held portable electronic devices such as portable smartphones and tablets (which may be iPhones or iPads from Apple Inc, or Galaxy products from Samsung Group, or Lumia products from Microsoft Corporation).

[03] Existing smartphone applications (which may be known as "apps") may allow for combination of video files on the smartphones; however, these applications are limited to simply concatenating existing files on the smartphone, and generation of richer videos incorporating images and/or video from more varied sources is slow and/or laborious and/or impossible.

[04] It is desired to address or ameliorate one or more disadvantages or limitations associated with the prior art, or to at least provide a useful alternative.

SUMMARY

[05] In accordance with the present invention, there is provided a method of generating video data on a portable electronic device, the method including steps of: a. the portable electronic device accessing pre-generated data representing a pre-generated video synchronized with pre-generated audio; b. the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device; and c. the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, and the user-generated photo or video.

[06] The present invention also provides a method for generating a combined video, the method including steps of:

a portable electronic device accessing, on the portable electronic device, user- generated content (UGC) data that represent a user-generated image or video;

the portable electronic device accessing, from a remote server system, externally generated content (EGC) data that represent a pre-generated video including pre-generated audio;

the portable electronic device accessing, on the portable electronic device or from the remote server system, transition data that represent a transition image or video; and

the portable electronic device generating combined data representing a combined video by combining at least a portion of each of the user-generated image or video and the pre-generated video.

[07] The present invention also provides a method of generating video data, the method including steps of:

a portable electronic device accessing pre-generated data representing a pre-generated video synchronized with pre-generated audio;

the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device;

the portable electronic device accessing transition data representing a transition image; and

the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, the user-generated photo or video, and the transition image, synchronised with at least a portion of the pre-generated audio.

[08] The present invention also provides: apparatuses, portable electronic devices, and computer systems configured to perform the above methods; and machine-readable media including machine-readable instructions to control one or more electronic microprocessors to perform the above methods.

BRIEF DESCRIPTION OF THE DRAWINGS

[09] Preferred embodiments of the present invention are hereinafter described, by way of non-limiting example only, with reference to the accompanying drawings, in which:

[10] Figure 1 is a schematic diagram of a system for generating combined videos;

[11] Figure 2 is a block diagram of software modules and data structures in the system;

[12] Figure 3 is a diagram of components in the combined videos;

[13] Figure 4 is a flowchart of a method of video generation performed by the system; and

[14] Figures 5A to 5C are a flowchart of a method of generating a combined video performed by the system.

DETAILED DESCRIPTION

Overview

[15] Described herein is a method for generating a combined video, the method including steps of: a portable electronic device accessing, on the portable electronic device, user-generated content (UGC) data that represent a user-generated image or video; the portable electronic device accessing, from a remote server system, externally generated content (EGC) data that represent a pre-generated video including pre-generated audio; the portable electronic device accessing, on the portable electronic device or from the remote server system, transition data that represent a transition image or video; and the portable electronic device generating combined data representing a combined video by combining at least a portion of each of the user-generated image or video and the pre-generated video.

[16] The generating step may include the portable electronic device synchronizing the user-generated image or video with at least a portion of the pre-generated audio.

[17] The method may include a step of the portable electronic device storing the UGC data on the device using a camera of the device.

[18] The method may include a step of the portable electronic device fading in the pre-generated audio over a fade-in duration at a start of the combined video to generate the combined data. The method may include a step of the portable electronic device fading out the pre-generated audio over a fade-out duration at an end of the combined video to generate the combined data. The method may include a step of the portable electronic device cross-fading the pre-generated audio to the user-generated audio, and/or cross-fading the user-generated audio to the pre-generated audio, over at least one cross-fade duration in at least one corresponding intermediate portion of the combined video to generate the combined data.
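
By way of non-limiting illustration only, the audio fade-in and fade-out described above may be implemented with an audio mix of the kind provided by the AVFoundation framework. The following sketch assumes a 0.2-second fade duration and an audio track and total duration supplied by the caller; it is an illustrative assumption and is not the code of Appendix A.

#import <AVFoundation/AVFoundation.h>

// Builds an audio mix that fades the pre-generated audio in at the start of the
// combined video and out at its end (durations are illustrative assumptions).
static AVMutableAudioMix *audioMixWithFades(AVAssetTrack *audioTrack, CMTime totalDuration) {
    CMTime fade = CMTimeMakeWithSeconds(0.2, 600);

    AVMutableAudioMixInputParameters *params =
        [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:audioTrack];

    // Fade in from silence over the fade-in duration.
    [params setVolumeRampFromStartVolume:0.0
                             toEndVolume:1.0
                               timeRange:CMTimeRangeMake(kCMTimeZero, fade)];

    // Fade out to silence over the fade-out duration at the end.
    [params setVolumeRampFromStartVolume:1.0
                             toEndVolume:0.0
                               timeRange:CMTimeRangeMake(CMTimeSubtract(totalDuration, fade), fade)];

    AVMutableAudioMix *audioMix = [AVMutableAudioMix audioMix];
    audioMix.inputParameters = @[ params ];
    return audioMix; // may be assigned to an AVAssetExportSession or AVPlayerItem
}

A cross-fade between the pre-generated audio and the user-generated audio may be expressed in the same way, as two overlapping volume ramps on the respective tracks.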

[19] The method may include a step of the portable electronic device accessing, on the portable electronic device or from the remote server system, watermark data representing a watermark image or video, and the generating step may include the portable electronic device inserting the watermark image or video into the combined video. The watermark may be inserted into at least a portion of the pre-generated video, and/or at least a portion of the user-generated image or video. The watermark image or video may be placed over the user-generated video or image. The watermark image or video may be anywhere on at least one portion of the user-generated image or video and/or on the pre-generated video. Alternatively, the watermark may be on the bottom or on the right-hand side or in the bottom right-hand corner of the user-generated video or image.

[20] The method may include a step of generating intermediate UGC data representing an intermediate UGC video including a plurality of video frames based on the user-generated image, and the generating step may include a step of combining the intermediate UGC video with at least the portion of the pre-generated video.

[21] The method may include a step of generating intermediate transition data representing an intermediate transition video including a plurality of video frames based on the transition image, and the generating step may include a step of combining the intermediate transition video with at least the portion of each of the pre-generated video and the user-generated image or video.

[22] The method may include a step of generating intermediate watermark data representing an intermediate watermark video including a plurality of video frames based on the watermark image, and the generating step may include a step of combining the intermediate watermark video with at least the portion of each of the pre-generated video and the user-generated image or video.
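
By way of non-limiting illustration only, a watermark image of the kind described in paragraphs [19] and [22] may alternatively be overlaid on the combined video using a Core Animation layer applied to the video composition. In the following sketch the layer frame, corner margin and opacity are illustrative assumptions; it is not the code of Appendix A.

#import <AVFoundation/AVFoundation.h>
#import <UIKit/UIKit.h>

// Overlays a watermark image towards the bottom right-hand corner of the rendered video
// (frame values are illustrative assumptions).
static void addWatermarkLayer(AVMutableVideoComposition *videoComposition,
                              UIImage *watermarkImage, CGSize renderSize) {
    CALayer *videoLayer = [CALayer layer];
    CALayer *parentLayer = [CALayer layer];
    videoLayer.frame = CGRectMake(0, 0, renderSize.width, renderSize.height);
    parentLayer.frame = videoLayer.frame;
    [parentLayer addSublayer:videoLayer];

    CALayer *watermarkLayer = [CALayer layer];
    watermarkLayer.contents = (__bridge id)watermarkImage.CGImage;
    watermarkLayer.frame = CGRectMake(renderSize.width - 110.0, 10.0, 100.0, 40.0);
    watermarkLayer.opacity = 0.8;
    [parentLayer addSublayer:watermarkLayer];

    videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
        videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                                inLayer:parentLayer];
}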

[23] The combined data may represent a plurality of seconds of the pre-generated video, a plurality of seconds of the transition image or video, and a plurality of seconds of the user-generated image or video, synchronized with the pre-generated audio. The UGC data may represent a locally stored video or a locally stored image on the portable electronic device. The UGC data may be an image file or a video file. The UGC image may be a photograph. The transition data may represent a transition video or a transition image. The method may include steps of accessing the EGC data and the transition data on the remote server system using a telecommunications network.

[24] The transition image may define one or more two-dimensional (2D) shapes.

[25] The portable electronic device may be a smartphone or tablet computer with a communications module that communicates over the Internet, e.g., using a WiFi or cellular telephone protocol. The portable electronic device is a form of physical, electronic apparatus, and acts as a component in a computer system that includes other components (including the remote server) in electronic communication with each other. The steps of the methods described herein are performed under the control of one or more electronic microprocessors that follow machine-readable instructions stored on machine-readable media (e.g., hard disc drives).

[26] The remote server system may include a content management system (CMS) that provides access to the stored EGC data.

[27] Also described herein is a method of generating video data, the method including steps of: a portable electronic device accessing pre-generated data representing a pre-generated video synchronized with pre-generated audio; the portable electronic device accessing user-generated content (UGC) data representing a user-generated photo or video generated by a camera of the portable electronic device; the portable electronic device accessing transition data representing a transition image; and the portable electronic device generating combined data representing a combined video that includes a portion of each of the pre-generated video, the user-generated photo or video and the transition image, synchronised with at least a portion of the pre-generated audio.

[28] The generating step may include the portable electronic device generating a transition component from one of the pre-generated video to the user-generated photo or video, or from the user-generated photo or video to the pre-generated video, based on a shape in the transition image. The shape may include a plurality of regions. The transition image may include pixel values defining masks in the transition image. The generating step may include a step of generating a masking transition between the user-generated video or image and the video based on the image mask. In embodiments, the transition is a transparent image (PNG) or video (MP4) that is uploaded to the CMS. This image may be converted into a corresponding video, as described hereinafter, e.g., a 2-second key frame animation.
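
By way of non-limiting illustration only, a masking transition of the kind described above may be generated frame by frame with the standard Core Image "CIBlendWithMask" filter, with the transition image supplying the mask. The following sketch is one assumed implementation and is not the code of Appendix A or Appendix B.

#import <CoreImage/CoreImage.h>

// Blends one frame of the pre-generated (EGC) video over one frame of the user-generated
// (UGC) video or image according to a mask derived from the transition image.
static CIImage *maskedTransitionFrame(CIImage *ugcFrame, CIImage *egcFrame, CIImage *mask) {
    CIFilter *blend = [CIFilter filterWithName:@"CIBlendWithMask"];
    [blend setValue:egcFrame forKey:kCIInputImageKey];           // visible where the mask is white
    [blend setValue:ugcFrame forKey:kCIInputBackgroundImageKey]; // visible where the mask is black
    [blend setValue:mask forKey:kCIInputMaskImageKey];
    return blend.outputImage;
}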

[29] The described methods may allow for one or more of:

[30] combining the two pieces of media content (the UGC and the EGC) directly on the device, where one is delivered to the user and one created by the user, thus giving the user the ability to add their content to content delivered to them (e.g., using Apple's iOS frameworks as described hereinafter, or HTML5 for Web, or Java and C++ for Android operating systems);

[31] combining the two pieces of media content on a server (remote from the device) using Adobe's "Flash" platform, and converting to a device-friendly format (e.g., "MP4" format) prior to transmitting the combined video to another device for sharing;

[32] the transition between the two pieces of content providing a branded media piece using a brand image or a brand video pre-selected and delivered via the remote server system; and

[33] rapid and easy-to-use sharing of pre-existing footage that the user wishes to share, and the footage may be high-quality video having pre-generated high-quality audio that may associate the user with a live event or location.

System 100

[34] As shown in Figure 1, a system 100 for generation of combined videos includes a client side 102 and a server side 104. The client side 102 interacts with at least one user and at least one administrator of the system 100. The server side 104 interacts with the client side 102. The client side 102 sends data to, and receives data from, the server side 104.

[35] The administration portal 106 sends (or uploads) event data and event media data from the client side 102 to the server side 104 based on input from the administrator. The uploaded event data and event media may represent the event name, date and location.

[36] The client side 102 includes an administration portal 106 that receives analytics data and event data from the server side 104 for use by the administrator. The analytics and event data may represent any one or more of: time, date, location, number of pieces of content, number of views, number of shares, networks shared to, number of people 'starring' the event and social profile of this user.

[37] The administration portal 106 allows the administrator to create the events, upload the pre-selected official content, and view analytics based on the analytics data.

[38] The client side 102 includes a portable electronic device 108 (which is a form of portable electronic apparatus) that allows the user to interact with the system 100. The device 108 allows the user to create combined videos, and to share the combined videos. The device 108 sends (or uploads) the combined videos to the server side 104. The device 108 receives event data representing the relevant events from the server side 104. The device 108 receives media data representing externally generated content (EGC) from the server side 104. The device 108 shares the combined videos by sending (or publishing) the combined videos or links (which may be universal resource locators, URLs) to the combined videos to other devices 110 or servers which may be associated with social network systems (which may include systems provided by Facebook Inc, Twitter Inc and/or Instagram Inc).

[39] The server side 104 includes a plurality of remote server systems, including one or more data servers 112 and one or more media content servers 114. The media content servers 114 provide the content management system (CMS).

[40] The data servers 112 may be cloud data servers (e.g., provided by Amazon Inc) that send the data to, and receive the data from, the administration portal 106, and receive the data from, and send non-media content data to, the user device 108. The non-media data represent locations of the EGC data, and the transition data, which are stored in the media servers 114.

[41] The media servers 114, which may also be cloud servers, receive media data (representing images and videos) including the EGC data, and any remote transition data and watermark data, from the data servers 112 for rapid sending (or provisioning) to the user device 108.

[42] On the client side 102, the administration portal 106 may be a Web client implemented in a standard personal computer, such as a commercially available desk-top or laptop computer.

[43] The user device 108 may include the hardware of a commercially available smartphone or tablet computer or laptop computer with Internet connectivity. The user device 108 includes a plurality of standard software modules, including an operating system (e.g., iOS from Apple Inc., or Android OS from Google Inc). The herein-described methods executed and performed by the user device 108 are implemented in the form of machine-readable instructions of one or more software components or modules stored on non-volatile (e.g., hard disk) computer-readable storage in the user device 108. The machine-readable instructions control the user device 108 using operating system commands. The user device 108 includes a data bus, random access memory (RAM), at least one electronic computer processor, and external computer interfaces. The external computer interfaces include user-interface devices, including output devices and input devices. The output devices include a digital display and audio speaker. The input devices include a touch-sensitive screen (e.g., capacitive or resistive), a microphone and at least one camera. The external interfaces include network interface connectors that connect the user device 108 to a data communications network (e.g., a cellular telecommunications network) and the Internet.

[44] The boundaries between the modules and components (which may also be referred to as "classes" or "methods", e.g., depending on which computer language is used) are exemplary, and alternative embodiments may merge modules or impose an alternative decomposition of functionality of modules. For example, the modules discussed herein may be decomposed into submodules to be executed as multiple computer processes, and, optionally, on multiple processors in the user device 108. Moreover, alternative embodiments may combine multiple instances of a particular module or submodule. Furthermore, the operations may be combined or the functionality of the operations may be distributed in additional operations. Alternatively, such actions may be embodied in the structure of circuitry that implements such functionality, such as the micro-code of a complex instruction set computer (CISC), reduced instruction set computer (RISC), firmware programmed into programmable or erasable/programmable devices, the configuration of a field-programmable gate array (FPGA), the design of a gate array or full-custom application-specific integrated circuit (ASIC), or the like.

[45] On the server side 104, the data servers 112 may be Amazon data servers and databases, and the media-data servers 114 may include the Amazon "S3" system that allows rapid download of large files to the user device 108.

[46] As shown in Figure 2, the data servers 112 store and make accessible: the analytics data; the event data; the event media data; and settings data. The settings data represent settings for operation of the system 100: the settings data may be controlled and accessed by the administrator through the administration portal 106.

[47] The media servers 114 store data representing the following: the EGC data, including EGC files, each with the pre-generated video and the pre-generated audio; the transition data; and the watermark data and the generated combined videos (generated by the user device 108). For example, these data can be stored in an MP4 format. The EGC data, the transition data and the watermark data may be uploaded to the media server 114 from the administration portal 106 via the data servers 112. For example, each of these data files may be uploaded by an administrator opening a web browser and navigating to an administrator portal, filling out a form, picking a video from an administrator computer, and clicking a submit button.

[48] The device 108 includes a device client 202. The device client 202 includes a communications module 204 that communicates with the server side 104 by sending and receiving communications data to and from the data servers 112, and receiving media data from the media servers 114. The device client 202 includes a generator module 206 that generates the combined videos. The device client 202 includes a user-interface (UI) module 208 that generates display data for display on the device 108 for the user, and receives user input data from the user-interface devices of the user device 108 to control the system 100.

[49] The device 108 includes preferences data representing the preferences of the device client 202, which may be user-specific preferences, for example: location, social log-ins, phone unique identifier, previous combined videos, other profile data or social data that is available.

[50] The device 108 includes computer-readable storage 210 that stores the UGC data, and that sends the UGC data to the generator module 206 for generating the combined videos. The device 108 includes a camera module 212 that provides an application programming interface (API) allowing the user to capture images or videos and store them in the UGC data in the storage 210. The camera module 212 is configured to allow the device client 202 to capture images and/or videos using the camera of the device 108.
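
By way of non-limiting illustration only, the camera module 212 may expose capture of a photo or video through the operating system's image picker, as in the following sketch; the presenting view controller and delegate are assumptions, and the delegate is assumed to store the captured UGC data in the storage 210.

#import <UIKit/UIKit.h>
#import <MobileCoreServices/MobileCoreServices.h>

// Presents the system camera for capturing a photo or video; the delegate receives the
// captured media and is assumed to store it as UGC data (illustrative sketch only).
static void presentCameraForUGCCapture(UIViewController *presenter,
                                       id<UINavigationControllerDelegate,
                                          UIImagePickerControllerDelegate> delegate) {
    if (![UIImagePickerController isSourceTypeAvailable:UIImagePickerControllerSourceTypeCamera]) {
        return; // no camera available on this device
    }
    UIImagePickerController *picker = [[UIImagePickerController alloc] init];
    picker.sourceType = UIImagePickerControllerSourceTypeCamera;
    picker.mediaTypes = @[ (__bridge NSString *)kUTTypeImage, (__bridge NSString *)kUTTypeMovie ];
    picker.delegate = delegate;
    [presenter presentViewController:picker animated:YES completion:nil];
}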

[51] The device 108 includes a sharing module 214 that provides an API for the device client 202 to send the combined videos, or the references to the combined videos, to the social networking systems.

[52] All of the modules in the device 108, including modules 204, 206 and 208, provide APIs for interfacing with them.

Combined Video

[53] As shown in Figure 3, the combined video 300 includes a plurality of video and audio components, which may be referred to as "tracks". The video components include a first pure component 302 and a last pure component 304. The first pure component 302 may be a pure-UGC component generated from the user-generated image or video, and the last pure component 304 may be a pure-EGC component generated from the pre-generated video, and the combined video 300 may be a "selfie-first" combined video. Alternatively, the first pure component 302 may be the pure-EGC component, and the second pure component 304 may be the pure-UGC component, and the combined video 300 may be a "selfie-last" combined video. Alternatively, the pure-UGC component may be preceded and followed by pure-EGC components, i.e., "bookended" by EGC. Alternatively, the pure-EGC component may be preceded and followed by pure-UGC components, i.e., "bookended" by UGC.

[54] The combined video 300 includes an audio component 306 generated from the pre-generated audio of the EGC data. The first component 302 is synchronized or overlayed with the EGC audio component 306, and the second component 304 is also synchronized or overlayed with the EGC audio component 306, so that the EGC audio plays while both the UGC video and the EGC video are shown. The pure-EGC component and the audio component 306 are synchronized as in the pre-generated video represented by the EGC data in the remote server.

[55] The combined video 300 may include an initial fade-in component 314, in which the video fades from black to display the first pure content 302. The combined video 300 may include a final fade-out component 316 during which the last pure component fades to black. The initial fade-in component 314 may be applied to the audio component 306 such that the volume fades in from zero. Similarly, the final fade-out component 316 may be applied to the audio component 306 so that the audio fades out to zero at the end of the combined video 300.

[56] The combined video 300 includes a transition component 308. The transition component 308 includes a cross-fade component 310 in which the first pure component 302 (which may be the pure-UGC component 302 or the pure-EGC component 304) fades out and the last pure component (which may be the pure-EGC component 304 or the pure-UGC component 302 respectively) fades in. The transition component 308 includes a transition display component 312 in which the transition image or video is displayed in the middle, or at the beginning, or at the end, or elsewhere in the transition component 308. During the transition component 308, the transition display component 312 may be a transparency behind which the first pure component 302 cross-fades to the second pure component 304. The cross-fade may be linear, as defined in the settings data, the preferences data, and/or the generator module 206. Alternatively, the cross-fade may be a gradient-wipe transition based on gradients in the transition image. Alternatively, the cross-fade may be a mask transition based on a mask in the transition image or video.
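
By way of non-limiting illustration only, a linear cross-fade of the kind described above may be expressed as video composition layer instructions in which the opacity of the first track is ramped from 1 to 0 over the transition time range, revealing the second track beneath it. The 1.5-second duration follows the example durations given hereinafter; the sketch is an assumption and is not the code of Appendix A.

#import <AVFoundation/AVFoundation.h>

// Builds layer instructions for a linear cross-fade from the first track to the second track
// over the transition time range (duration value is illustrative).
static NSArray *crossFadeLayerInstructions(AVAssetTrack *firstTrack,
                                           AVAssetTrack *secondTrack,
                                           CMTime transitionStart) {
    CMTime transitionDuration = CMTimeMakeWithSeconds(1.5, 600);
    CMTimeRange transitionRange = CMTimeRangeMake(transitionStart, transitionDuration);

    AVMutableVideoCompositionLayerInstruction *fadeOut =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
    [fadeOut setOpacityRampFromStartOpacity:1.0 toEndOpacity:0.0 timeRange:transitionRange];

    AVMutableVideoCompositionLayerInstruction *fadeIn =
        [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
    // The second track is fully opaque but layered beneath the first, so it appears as the first fades out.
    [fadeIn setOpacity:1.0 atTime:transitionStart];

    return @[ fadeOut, fadeIn ];
}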

[57] Due to the fade-in component 314 and the transition component 308, a first component 318, based on the EGC or UGC data, is at least partially displayed for a greater duration than the first pure component 302. Similarly, due to the fade-out component 316 and the transition component 308, a last component 320, based on the other of the EGC or UGC data, is at least partially displayed for a greater duration than the last pure component 304.

[58] Each of the components is displayed for a pre-selected period of time (referred to as a duration) that is defined in the settings data and accessed by the generator module 206. The initial fade-in component 314 may have a duration of 0.2 seconds and the final fade-out component 316 may have a duration of 0.2 seconds. The first pure component 302 may have a duration of 5 seconds. The transition component 308 may have a duration of 1.5 seconds. The transition display component 312 may have a duration of 0.2 seconds or of 1.0 seconds. The second pure component 304 may have a duration of 7.5 seconds. The total first component 318 may have a total duration of 6.5 seconds. The total last component 320 may have a total duration of 9 seconds. The durations of the first and second components 318, 320 (and thus the durations of the first and second pure components 302, 304), and the duration of the transition component 308, may be selected based on the types of the components 318, 320, 308. The types may be the UGC component and the EGC component: the UGC component may be selected to have a duration of 5 seconds, and the EGC component may have a selected duration of 9 seconds, regardless of which is first. If the UGC data represent a user-generated image only (and not a user-generated video), the duration of the UGC component may be selected to be less than the duration if the UGC data represent a user-generated video.
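
By way of non-limiting illustration only, the example durations above may be held in settings data of the following form; the key names are assumptions and are not necessarily the names used in the settings data of the system 100.

// Illustrative component durations, in seconds (key names are assumptions).
NSDictionary *componentDurations = @{
    @"fadeInDuration"       : @0.2,
    @"fadeOutDuration"      : @0.2,
    @"transitionDuration"   : @1.5,
    @"ugcComponentDuration" : @5.0,  // 3 seconds when the UGC is a still image
    @"egcComponentDuration" : @9.0   // 11 seconds when the UGC is a still image
};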

[59] The UGC component may be generated from a user-generated image (which may be a photo) rather than a user-generated video. The UGC component may show the user-generated image as a static video, or a moving video that zooms and pans across the user-generated image (this may be referred to as a "Ken Burns effect"). The pan and zoom values for the transition may be defined in the setting data, the preferences data and/or the generator module 206. The zoom value may be from 1.0 to 1.4, where "1.0" means not zoomed in or zoomed out (i.e., 100%), "2.0" means zoomed in to the point of not being able to display half of the pixels in the image, "0.0" means zoomed out to where double the amount of pixels in the image are displayed (e.g., the extra area would normally be rendered as black), and the values between 0.0 and 2.0 are related generally linearly to the fraction of displayed pixels.

[60] For a user-generated video, the duration of the UGC component may be 5 seconds, whereas for a user-generated image, the duration of the UGC component may be 3 seconds. When the UGC data is determined to represent only the user-generated image, the total duration of content based on the UGC data may be less (3 seconds pure), and the total duration of the EGC component may be increased by the same amount (to 11 seconds pure) so the total duration of the combined video 300 is the same regardless of the type of the UGC data.
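
By way of non-limiting illustration only, the pan-and-zoom ("Ken Burns") effect described in paragraph [59] may be expressed as a transform ramp on the layer instruction for the intermediate UGC video, scaling from 1.0 to 1.4 about the centre of the frame over the UGC component's time range; the following sketch is an assumption and is not the code of Appendix A.

#import <AVFoundation/AVFoundation.h>
#import <CoreGraphics/CoreGraphics.h>

// Applies a slow zoom from 1.0 to 1.4, centred on the frame, across the UGC time range.
static void applyKenBurnsEffect(AVMutableVideoCompositionLayerInstruction *ugcInstruction,
                                CGSize renderSize, CMTimeRange ugcRange) {
    CGFloat endZoom = 1.4;
    CGAffineTransform startTransform = CGAffineTransformIdentity; // zoom value 1.0

    // Translate so the scaled image stays centred rather than growing away from the origin.
    CGAffineTransform endTransform = CGAffineTransformTranslate(
        CGAffineTransformMakeScale(endZoom, endZoom),
        -renderSize.width  * (endZoom - 1.0) / (2.0 * endZoom),
        -renderSize.height * (endZoom - 1.0) / (2.0 * endZoom));

    [ugcInstruction setTransformRampFromStartTransform:startTransform
                                        toEndTransform:endTransform
                                             timeRange:ugcRange];
}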

[61] The watermark may be applied over the first component 302 and/or the second component 304. Alternatively, application of the watermark may be determined based on the type of component (UGC or EGC) regardless of which is first.

Method 400

[62] The system 100 performs a method 400 of video generation including the following steps, which may be implemented in part using one or more processors executing machine-readable commands:

[63] the user device 108 accessing low-resolution images and/or descriptions (which may be referred to as "thumbnails") of available EGC videos (which may be referred to as "clips") for display on the user device 108 (step 402);

[64] the user device 108 receiving user input to select one of the thumbnails, and to download (from the media servers 114) and play the clip in its totality (step 404);

[65] the user device 108 receiving user input to mark events as favourites, and record these markings in the preferences data and/or the settings data (step 406);

[66] the data servers 112 determining which events are popular when the user uploads the event to the system controlled through an admin rating system (-infinity to infinity) (step 408);

[67] the user device 108 generating display data for the display of the user device 108 to display simultaneously two pre-combined images or pre-combined videos from the UGC data and the EGC data (in different places on the screen) prior to generating the combined video, and allowing selection and previewing of both pre-combined images/videos through the user interface of the user device 108 (which may include the user swiping left to right to select different UGC files or EGC files) (step 410);

[68] the user device 108 adding a user-generated video or photo while executing the method (which may be referred to as being "in the app") by accessing the camera module or the stored photos in the user device 108 using pre-existing modules in the operating system of the user device 108 (step 412);

[69] optionally, the user device 108 receiving a user input to select a transition instance, including a transition style and a transition duration, for the transition component 308;

[70] the user device 108 receiving a single user input (which may be a button press on the user interface) to initiate the generating step once the EGC and UGC clips / images have been selected in the user interface (step 414);

[71] the system 100 generating the combined video by performing the generating method 500 described hereinafter (step 416); and

[72] the user device 108 accepting user input to log into one of the hereinbefore-mentioned social media systems using one of the APIs on the user device 108, and to share the generated combined video using the APIs on the user device 108 (e.g., to Facebook, Twitter, Instagram, etc.), which may be by means of a reference to a location in the data servers 112 (e.g., a Website provided by the system 100), or by means of a media file containing the combined data (step 418).

[73] The method 500 of generating the combined video may be performed at least in part by the generator module 206 which may perform (i.e., implement, execute or carry out), at least in examples, steps defined in objective C commands that are included hereinafter in the computer code Appendix A. In the computer code, the videos and images may be referred to as "assets". The modules may be referred to as "methods" or "classes". The combined video may be referred to as a "Kombie".

[74] Generating the combined video following the method 500 thus may include the following steps (a condensed illustrative sketch of the core composition and export steps is given after this list):

[75] defining hard-coded duration values for the combined video 300 (see code lines 19-28); alternatively, the duration values may be accessed in the settings data (step 502), or determined automatically (e.g., from an analysis of the EGC file duration), or selected by the user using the user interface;

[76] allocating memory in the user device 108 for handling the assets (including the accessed and generated video and audio assets) used in the generating step (code lines 34 to 54) (step 504);

[77] initializing operation of the generator module 206 from the parent module in the user interface of the device client 202 (code lines 56 to 73) (step 506);

[78] setting the values of the variables used by the generator module 206 to 0 or nil, thus clearing the memory (code lines 74 to 97) (step 508);

[79] initializing a function to control a progress bar for display by the user interface module 208 showing progress of the generating step for the user (code lines 98 to 102) (step 510);

[80] accessing a dictionary in the preferences data, which may be referred to as "NS Dictionary", that is an array or list of file locations, including file paths (for local files on the user device 108), and remote locations for files in the media servers 114 (which may include universal resource locators, URLs), and filling the dictionary in the generator module 206 with the file locations in their preferences data (code lines 103 to 149) (step 512);

[81] fetching the assets from the remote storage or the local storage, each in separate operational threads (code lines 147 to 148), including fetching the three audio visual (AV) assets using the three locations (which may be URLs), which is commenced by one method for each asset, the methods running in parallel and on background threads of the operating system (see code lines 152 to 287) (step 514);

[82] accessing and retrieving the user asset, which includes the user-generated image or video, and associated dimensions data and meta data associated with the user-generated image or video (code lines 168 to 208), including counting the number of video assets retrieved by the plurality of parallel threads (code lines 176, 219 and 247) and calling the combined video creation process once all the assets have been retrieved (code lines 179 to 185, 222 to 228, or 257 to 263) (step 516);

[83] if the UGC data represent a user-generated video, saving it to local memory in the user device 108 (code lines 196 to 200) (step 518);

[84] if the UGC data represent a user-generated image, calling a method to generate an AV asset from a local image object (code lines 201 to 204) (step 520);

[85] accessing and retrieving the pre-generated video from the remote server (code lines 211 to 236) (step 522);

[86] accessing and retrieving the transition image or video from the media servers 114, or from the storage of the user device 108 (code lines 239 to 271) (step 524);

[87] in some embodiments (not in the Appendix A), if the transition data represent a transition image, calling a method to convert the transition image to an intermediate transition video, as described hereinafter (conversion method is in code lines 416 and 455) (step 526);

[88] in some embodiments, if the transition data represent a transition video, accessing and downloading the transition video from the determined location (code line 237) (step 528);

[89] after retrieving all AV assets in the background, calling the combination engine (code lines 302 to 342), including passing the created combined video back to the parent module (which may be referred to as a "class"), including passing an asset dictionary that includes the three AV assets (all videos, which may have been converted from images in the asset retrieval step) to the combination engine (step 530);

[90] in some embodiments, retrieving one or more videos from the remote server, and writing them to a local file in memory, which may be called by the asset-retrieval method, and thus included in the asset-retrieval step (code lines 1263 to 1312), which includes accessing and retrieving the remote data based on a location identifier for the remote data, retrieving an AV asset from a local location, retrieving an AV asset from a remote image location (and converting to a video if necessary), and retrieving an AV asset from a local image location (see code lines 968 to 1112) (step 532);

[91] creating the combined video using the asset dictionary (code lines 344 to 966), including accessing the asset dictionary with the three assets, assigning the first video asset to local memory (code line 360), assigning the second video asset to local memory, where the first and second video assets can be the pre-generated video and the user-generated video or an intermediate user-generated video that has been automatically generated based on the user-generated image (code line 361), and assigning the transition video, which may be the original transition video or an intermediate transition video automatically generated based on the transition image, to local memory (code line 362), and in embodiments assigning a fourth AV asset, including only the audio from the pre-generated video in the EGC data, to local memory (code line 363); in alternative embodiments, the audio asset may be accessed from a separate location defined by the dictionary rather than being extracted from the pre-generated video (code lines 366 and 369, which are commented out) (step 534);

[92] creating a digital composition object to hold the assets during the combination process (line 386) (step 536);

[93] generating a first video component by adding only video of the first video asset (lines 394 to 399) (step 538);

[94] setting the first track time range to start at the finish of the first video asset (code lines 402 to 403) (step 540);

[95] creating the second video component from the second video asset to include only video and no audio (code lines 407 to 412) (step 542);

[96] setting the second track time range as start to finish of the second video asset: i.e., using the entire track range of the second video asset (EGC) as a marker for how long the created combined video will be, allowing the method to operate if a file conversion mishap occurs, e.g., if the duration of the EGC gets shortened from 14 seconds to 13 seconds when encoding/decoding/transferring between servers (code lines 415 to 417) (step 544);

[97] creating an audio track from the first video asset (code lines 421 to 431) (step 546);

[98] creating a main composition instruction to hold instructions containing the video tracks, in which the layer instructions denote how and when to present the video tracks (code lines 437 to 493) (step 548);

[99] in embodiments, if an image was used to create the UGC data, applying the Ken Burns effect, or a different effect based on a selected theme setting, to transform the appropriate video asset (code lines 458 to 469) (step 550);

[100] creating a main video composition to hold the main instruction including setting the video dimensions (code lines 499 to 507) (step 552);

[101] creating core animation layers for the transition image asset to be applied, including creating animation instructions to fade the transition image asset in and out, and applying the core animation layers to the main video composition (code lines 515 to 573) (step 554);

[102] in embodiments, combining the pre-generated video with the user-generated image or video without the transition image or video, by appending two video assets and one audio asset to an AV track (step 556);

[103] preparing and exporting the main video composition, including using a temporary file path (code lines 577 to 659) (step 558);

[104] in embodiments, setting fade-in durations and fade-out durations for the three tracks, including the fade-in and fade-out durations pre-set in the settings data, which may be performed by adjusting the opacity of the video tracks from 0 to 1 (for fading in) and from 1 to 0 (for fading out) (step 560);

[105] setting the size for all of the video assets and components in the combined video to be the same (code lines 504 to 506) (step 562);

[106] setting a common frame rate for all of the video components (code line 507) (step 564);

[107] in some embodiments, adding a watermark to one of the components (step 566);

[108] in some embodiments, pulling the audio out of the original EGC data, and inserting it as a track for the entire duration (step 568); and

[109] saving and exporting the created combined video to file, including to a combined album in the computer-readable memory of the user device 108 (step 570).
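
By way of non-limiting illustration only, the condensed sketch referred to at the start of this list shows the core of the composition and export steps: two video tracks and one audio track are inserted into a mutable composition and exported to an MP4 file using the AVFoundation framework. The asset names, export preset and completion handling are illustrative assumptions, and the sketch omits the layer instructions, transition, watermark and fades of Appendix A.

#import <AVFoundation/AVFoundation.h>

// Inserts two video tracks and one audio track into a composition and exports it as an MP4
// (asset names, preset and output handling are illustrative assumptions).
static void exportCombinedVideo(AVAsset *firstAsset, AVAsset *secondAsset, AVAsset *audioAsset,
                                NSURL *outputURL, void (^completion)(BOOL success)) {
    NSError *error = nil;
    AVMutableComposition *composition = [AVMutableComposition composition];

    // First video component (e.g. the UGC video in a "selfie-first" combined video).
    AVMutableCompositionTrack *firstTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] firstObject]
                         atTime:kCMTimeZero
                          error:&error];

    // Second video component (e.g. the EGC video), starting when the first finishes.
    AVMutableCompositionTrack *secondTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    [secondTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration)
                         ofTrack:[[secondAsset tracksWithMediaType:AVMediaTypeVideo] firstObject]
                          atTime:firstAsset.duration
                           error:&error];

    // EGC audio laid under the entire combined video.
    AVMutableCompositionTrack *audioTrack =
        [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    [audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, composition.duration)
                        ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] firstObject]
                         atTime:kCMTimeZero
                          error:&error];

    AVAssetExportSession *exporter =
        [AVAssetExportSession exportSessionWithAsset:composition
                                          presetName:AVAssetExportPresetMediumQuality];
    exporter.outputURL = outputURL;
    exporter.outputFileType = AVFileTypeMPEG4;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        completion(exporter.status == AVAssetExportSessionStatusCompleted);
    }];
}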

[110] In some instances, the step of creating the intermediate transition video from the transition image, or the intermediate user-generated video from the user-generated image, may include converting the static images into a video file using routines from the AV Foundation framework from Apple Inc. This includes ensuring the image corresponds to a pre-defined size in the settings data, e.g., 320 by 320 pixels (code lines 1143 to 1144). A buffer is created and filled with pixels to create the video by repeatedly adding the image to the buffer (lines 1166 to 1208), including grabbing each image and appending it to the video until the maximum duration of the intermediate video is reached, and each image is displayed for a pre-selected duration, e.g., one second (code lines 1185 to 1186). The intermediate video creation process finishes by returning a location (e.g., a URL) of the created file which is stored in temporary memory of the user device 108.
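
By way of non-limiting illustration only, the conversion of a static image into an intermediate video may be sketched as follows using an asset writer and a pixel buffer adaptor; the 320 by 320 size follows the description above, while the pixel-buffer helper and other details are assumptions and the sketch is not the code of Appendix A.

#import <AVFoundation/AVFoundation.h>
#import <CoreVideo/CoreVideo.h>

// Assumed helper (not shown) that renders the image into a newly allocated pixel buffer.
static CVPixelBufferRef pixelBufferFromImage(CGImageRef image);

// Writes a short MP4 in which the given still image is shown for the requested number of seconds
// (readiness checks and error handling omitted for brevity).
static void writeImageAsIntermediateVideo(CGImageRef image, NSURL *outputURL, Float64 seconds) {
    NSError *error = nil;
    AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                     fileType:AVFileTypeMPEG4
                                                        error:&error];
    NSDictionary *settings = @{ AVVideoCodecKey  : AVVideoCodecH264,
                                AVVideoWidthKey  : @320,
                                AVVideoHeightKey : @320 };
    AVAssetWriterInput *input =
        [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:settings];
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
                                                          sourcePixelBufferAttributes:nil];
    [writer addInput:input];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    // Append the image at the start and again at the end of the clip, so every frame shows it.
    CVPixelBufferRef buffer = pixelBufferFromImage(image);
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMakeWithSeconds(seconds, 600)];
    CVPixelBufferRelease(buffer);

    [input markAsFinished];
    [writer finishWritingWithCompletionHandler:^{
        // The temporary file at outputURL is now ready for use as an intermediate video asset.
    }];
}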

[111] The dictionary, referred to as "NS dictionary" in the code, includes image data and metadata used by a video writer, e.g., from the AV Foundation framework. The video settings may be passed to video creation sub-routines using the dictionary.

[112] In some instances, instead of generating and appending the video assets (i.e., the first video asset, the second video asset, the transition asset, and audio track) in steps 536 to 558 of method 500, the generator module 206 assembles the combined video frame-by-frame. Each frame is selected from one of the data sources comprising the UGC data, the EGC data, or the transition data. The generator module 206 determines which data source to use for each frame based on a theme setting in preferences data. The theme setting includes data accessed by the generator module 206 for each frame as the combined video is assembled. Each frame can include a UGC frame from the UGC data, an EGC frame from the EGC data, a transition frame from the transition data, or a blend frame that includes a blend of the UGC, EGC and/or transition data. One of a plurality of blending methods, which is used to generate the blend frame, can be selected based on the theme setting. An example theme is a "cross-fade with mask" theme, in which an initial frame is purely from one UGC/EGC data source, a final frame is purely from the other UGC/EGC data source, and the intermediate frames incorporate increasing pixels from the other source in a cross-fade transition, and during the transition, a selected mask of pixels is applied to a series of the frames. Example computer code implementing the "cross-fade with mask" theme is included in Appendix B.

[113] The combined audio track is by default the EGC audio track. In embodiments, the UGC audio is mixed into the combined audio track. Adding the audio track is implemented separately from the frame-by-frame assembly process. Once the frame-by-frame assembly is completed, the generator module 206 adds the EGC audio track to the video, e.g., using processes defined in the AV Foundation framework.

[114] In some instances, the generated combined video can be generated in less than 2 seconds on older commercially available devices, and in even less time on newer devices. The user interface may include a screen transition during this generation process, and there may therefore be no substantial delay in the generation of the combined video noticeable by the user before it can be viewed using the device 108.

[115] In some instances, the combined video is transcoded from its raw combined format into a different sharing format for sharing to the devices 110 or the servers associated with social network systems. The transcoding process is an intensive task for the central processing unit (CPU) and input-output components of the device 108. For example, using the AV Foundation and Core Image processing modules, the transcoding may take 12 seconds on an Apple iPhone 4s, or 2.5 seconds on an iPhone 6. The transcoding process is initiated when viewing of the combined video is commenced; thus, for a typical combined video length of 14 seconds, the transcoded file or files are ready for sharing before viewing of the combined video is finished.

Alternatives

[116] In some instances, the system 100 can use locally generated EGC, i.e., "local" EGC generated on the client side 102, including local EGC captured (using the camera) and stored in the device 108. In these instances, the EGC is user-generated in the same way as the UGC, and thus the EGC is not "external" to the device 108, although the combined video generation process still uses the local EGC in the same way as it uses the external EGC. In these instances, the device 108 is configured to access the local EGC content (the photo or the video) on the portable electronic device itself (i.e., the EGC data is stored in the device 108), rather than accessing the EGC from the server side 104. This can be selected by the user using the user interface, e.g., using a menu system or folder system or thumbnail system to select EGC on the server or pre-generated videos / photos (also referred to as local "EGC" in this context) on the portable electronic device 108 itself. In these instances, the user device 108 can display available pre-recorded images and videos in the device 108 in step 402. Once selected in the processing method 400, the locally sourced EGC is subsequently treated in the same way as the externally sourced EGC.

[117] In some instances, an instance of the transition component 308 is selected by the user through the user interface after the EGC and the UGC have been selected. Thus the method 400 includes a step of the device 108 receiving user instructions, via the user interface, to select a style and a duration of the transition instance. Available pre-defined transition styles, and available transition durations, are made available through the user interface, and the user can select a style and a duration for the instance of the transition component 308 to be inserted in between the EGC and the UGC.

[118] In some instances, the duration for an instance of the combined video 300 can be determined from the pre-existing duration of the EGC video that is selected for that instance, rather than being pre-set for all instances. The combined-video duration can be equal to the EGC duration, or can be equal to the EGC duration plus a pre-selected or user-selected time for the other components, including the fade-in component 314 (can be pre-selected), the fade-out component 316 (can be pre-selected), the transition component 308 (can be user-selected), and/or the UGC component 318 (can be user-selected). The duration of the EGC can be determined from a duration value represented in metadata associated with the EGC file, or using a duration-identification step on the server side 104 (e.g., in the media content servers 114) or on the client side 102 (e.g., in the user device 108), e.g., using a duration-identification tool in the AVFoundation framework.
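
By way of non-limiting illustration only, the duration-identification step may read the EGC clip duration on the client side as follows; the asset location is an assumption.

#import <AVFoundation/AVFoundation.h>

// Returns the duration, in seconds, of the EGC clip at the given location.
static NSTimeInterval egcDurationInSeconds(NSURL *egcURL) {
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:egcURL options:nil];
    return CMTimeGetSeconds(asset.duration); // e.g. 14.0 for a typical EGC clip
}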

[119] In some instances, the combined video 300 can include a plurality of transitions, and a plurality of instances of UGC components and/or EGC components. For example, the selected EGC can define the duration of the combined video instance, the user can select a plurality of UGC components (e.g., by recording a plurality of selfie videos), the user can select a transition instance at the start and/or end of each UGC component, and the combined video can be generated from these components.

[120] In some instances, the audio component 306 of the combined video 300 is generated from the audio of the UGC data. The first component 302 is synchronized or overlayed with the UGC audio component, and the second component 304 is also synchronized or overlayed with the UGC audio component, so that the UGC audio plays while both the UGC video and the EGC video are shown. The pure-UGC component and the audio component 306 are synchronized as in the original UGC video.

INTERPRETATION

[121] The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.

[122] Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention as hereinbefore described with reference to the accompanying drawings.

APPENDIX A

This appendix includes details of a portion of an implementation using objective C.