Title:
SYSTEMS AND METHODS FOR PROCESSING IMAGE DATA TO COINCIDE IN A POINT OF TIME WITH AUDIO DATA
Document Type and Number:
WIPO Patent Application WO/2021/212207
Kind Code:
A1
Abstract:
Systems and methods for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image data display to generate a synchronized multimedia presentation using content creator and content player modules. The system comprises a network architecture, a synchronization system and a file system database. The method comprises: accessing a content creator module of a synchronization system; manipulating one or more audio data to assemble an audio timeline; manipulating one or more image data to display transformations made as a result of editing; creating one or more timestamps on the audio timeline; assigning each image data to a timestamp corresponding to a time value of a play duration of the audio timeline; generating a master timeline; generating a digital control file; and reproducing the synchronized multimedia presentation in a content player module.

Inventors:
MONTEIRO SIQUEIRA FRANCESCHI WILTER (CA)
DA SILVA FIGUEIREDO VANESSA (CA)
Application Number:
PCT/CA2021/050187
Publication Date:
October 28, 2021
Filing Date:
February 19, 2021
Assignee:
MONTEIRO SIQUEIRA FRANCESCHI WILTER (CA)
DA SILVA FIGUEIREDO VANESSA (CA)
International Classes:
H04N21/8547; G06F16/40; G11B27/10; H04L12/16; H04N21/4725; H04N21/81; H04N21/84; H04N21/8545
Foreign References:
US20100281381A12010-11-04
US10560502B22020-02-11
US20180270446A12018-09-20
US20160328105A12016-11-10
US20110123972A12011-05-26
US20170229152A12017-08-10
US10582277B22020-03-03
US7512886B12009-03-31
Claims:
CLAIMS

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:

1. A method for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image data display to generate a synchronized multimedia presentation using a content creator module, the method comprising:

manipulating audio data to assemble an audio timeline, wherein the audio timeline contains timestamps to indicate a position of time wherein each image data is displayed to coincide in time with audio playback recorded in the audio timeline;

manipulating image data to display transformations made as a result of editing and assigning each image data to a timestamp corresponding to a time value of a play duration of the audio timeline;

creating a master timeline comprising manipulated image data and the audio timeline and establishing a time relationship between image data and the audio timeline to generate a synchronized multimedia presentation;

generating a digital control file wherein records and information pertaining to manipulations of image data and the audio timeline in the master timeline are stored to maintain the synchronized time relationship between image data and the audio timeline;

storing the master timeline in a file system database; and

processing the digital control file to reproduce the synchronized multimedia presentation and maintaining the integrity of each image data and audio data in the master timeline.

2. The method of claim 1 further comprising a local server storing and live transmitting the synchronized multimedia presentation.

3. The method of claim 2 further comprising configuring the synchronized multimedia presentation and transmitting and receiving one or more synchronized multimedia presentations in a computing device over a computer network, allowing playback to start while the rest of the data is still being received while in a live transmission, wherein a transmission and reception delay of the one or more synchronized multimedia presentations are configured.

4. The method of claim 1 further comprising creating, listing and manipulating audio data to assemble the audio timeline.

5. The method of claim 4, wherein using an apparatus to capture audio and transferring to the content creator module to create audio data, transferring audio data from a computing device to the content creator module to list audio data, and arranging the one or more audio data into a sequential order, equalizing sounds of audio data, adding and/or removing parts of audio data, adding and/or removing sound effects, and reproducing comprises playing, forwarding, backwarding, pausing, stopping, adjusting sound volume, varying playback speed to transform audio data.

6. The method of claim 1, wherein arranging a sequential order of audio data to assemble an audio timeline and connecting the audio timeline to the master timeline containing timestamps assigned to each image data to coincide with time values of a play duration of audio data thus creating a synchronized multimedia presentation.

7. The method of claim 1, wherein marking a time value of a play duration of audio data to generate one or more timestamps wherein each image data is assigned to a corresponding time value of a play duration of audio data.

8. The method of claim 1 further comprising creating, listing and transforming image data to be assigned the one or more timestamps to coincide with a time value of a play duration of the audio timeline.

9. The method of claim 8, wherein using an apparatus to capture images and transfer to the content creator module to create image data, transferring image data from a computing device to the content creator module to list image data, and resizing, cropping, color manipulating, rotating, including layers with graphical elements, tridimensional manipulations, animations, zooming, sharpening, enhancing, removing blemish, adding tone effect, reversing image, reversing exposure, adding, deleting, and modifying one or more graphical image elements to manipulate image data.

10. The method of claim 1 further comprising connecting the image data to the audio timeline marked with timestamps corresponding to the point of time wherein each image data is displayed to coincide in a point of time with a time value of a play duration of the audio timeline, thus generating the master timeline.

11. The method of claim 1 further comprising using timestamps to establish a time relationship between the one or more image data and the audio timeline to generate the digital control file comprising a text file standard containing records and information of one or more manipulations processed in audio data and image data, maintaining the integrity of audio data and image data and storing in the file system database.

12. The method of claim 11, wherein processing the digital control file to execute the information and records to generate the synchronized multimedia presentation.

13. The method of claim 1 further comprising the file system database storing manipulations changed as a result of editing and including textual elements in the synchronized multimedia presentation.

14. The method of claim 1 further comprising including meta-elements describing characteristics of the synchronized multimedia presentation, wherein meta-elements are stored in the file system database.

15. The method of claim 14, wherein describing records and information stored in the digital control file and audio to text transcription text files, meta-tags, titles, author and date to generate meta-elements, wherein the synchronized multimedia presentation is discoverable by search engines.

16. The method of claim 1 further comprising enabling audio to text transcriptions of the audio timeline, wherein generating audio to text transcriptions of the audio timeline is manual or using machine learning techniques generated by an application programming interface and storing audio to text transcriptions in the file system database.

17. The method of claim 1, further comprising retrieving the digital control file from the file system database, thus enabling the master timeline to be loaded.

18. The method of claim 1, further comprising activating a master timeline playback, thus enabling the master timeline playback to be reproduced.

19. The method of claim 1, further comprising enabling the display of audio to text transcriptions during the time the master timeline playback is reproduced.

20. The method of claim 1, further comprising generating one or more digital records generated by content player users to mark the master timeline, distributing one or more synchronized multimedia presentations to one or more users over computer networks, enabling the display of audio to text transcriptions, generating user events for the synchronized multimedia presentation, storing one or more content player user events and enabling the exhibition of meta-elements.

21. The method of claim 20, wherein the one or more digital records may be bookmarks.

22. The method of claim 20, wherein generating user events may be including and recording annotations, links, meta-elements, image data and audio data.

23. The method of claim 20, wherein the one or more content player user events are created by generating user events and configured to store records and information generated by a content player user for a synchronized multimedia presentation.

24. A synchronization system for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image display to generate a synchronized multimedia presentation, the system comprising:

a content creator module to create a master timeline comprising image data and audio data wherein each image data is assigned a timestamp corresponding to a time value of a play duration of an audio timeline to generate a digital control file containing synchronizing information to execute the master timeline and reproduce a synchronized multimedia presentation;

a synchronization section to process and execute the digital control file containing information and records to execute an audio and image data time relationship using timestamps registered in the master timeline;

a content player module to process the digital control file and reproduce the synchronized multimedia presentation according to records and information pertaining to the master timeline and enable users to manipulate the synchronized multimedia presentation while viewing thereon; and

a file system database configured to store data pertaining to the digital control file and one or more content player user events.

25. The system of claim 24 further comprises an audio data section, an image data section, a master timeline section and a synchronization section.

26. The system of claim 25, wherein the audio data section comprises an audio data creation, an audio data list and an audio data manipulation and configured to provide an interface for creating, listing and manipulating one or more audio data.

27. The system of claim 25, wherein the image data section comprises an image data creation, an image information list and an image information manipulation and configured to provide an interface for creating, listing and manipulating one or more image information.

28. The system of claim 25, wherein the master timeline section comprises a timestamp creation and an audio timeline and an image data time relationship generator and configured to provide an interface for creating timestamps in the master timeline and permitting the synchronization of image data with the audio timeline.

29. The system of claim 25, wherein the synchronization section comprises a master timeline information compilation, an audio timeline and image data time relationship execution and a digital control file generator and configured to process and execute the digital control file containing commands to execute the synchronization of one or more image data with the audio timeline.

30. The system of claim 24 further comprises a synchronization processing section, a playback section and a user interface section.

31. The system of claim 30, wherein the synchronization processing section comprises a digital control file processor and a master timeline loader and configured to provide an interface for processing the digital control file and loading the master timeline.

32. The system of claim 30, wherein the playback section is configured for reproducing a synchronized multimedia presentation wherein the master timeline playback reproduces the audio timeline synchronized with one or more image data.

33. The system of claim 30, wherein the user interface section comprises a synchronized master timeline playback viewer, a content player user events and a log content player user events and configured to create, store and distribute one or more content player user events, wherein the content player user events are to store and distribute preferred parts of the master timeline to one or more users and enable audio to text transcriptions.

34. The system of claim 24 further comprises a transmission and reception of the synchronized multimedia presentation using a computing device to transmit and receive over a computer network, allowing playback to start while the rest of the data is still being received while in a live transmission.

35. The system of claim 24 further comprises a synchronized multimedia presentation viewer, a synchronized multimedia presentation sections area and a content player user events area.

36. The system of claim 35, wherein the synchronized multimedia presentation viewer comprises an image information viewer area and a master timeline playback area.

37. The system of claim 35, wherein the synchronized presentation sections area comprises one or more sections wherein a content player user selects a preferred point in the master timeline to view a synchronized multimedia presentation.

38. The system of claim 35, wherein the content player user events area comprises one or more digital records generated by content player users to mark the master timeline, web applications to distribute content over computer networks, an audio to text transcription module, an add user events section and a log user events section and configured to be stored in the file system database.

39. The system of claim 35, wherein the one or more digital records may be bookmarks on the master timeline.

40. The system of claim 35, wherein the web applications to distribute content over computer networks may be social networking web applications.

41. The system of claim 35, wherein the audio to text transcription module enables the audio to text transcript to be displayed in a master timeline.

42. The system of claim 35, wherein the add user events section is configured to include and record information created by a content player user for a synchronized multimedia presentation.

43. The system of claim 35, wherein the information may be annotations, links, meta-elements, image data and audio data in accordance with one embodiment of the invention.

44. The system of claim 35, wherein the log content player user events section is configured to store records and information pertaining to user events and generated by a content player user for a synchronized multimedia presentation.

Description:
SYSTEMS AND METHODS FOR PROCESSING IMAGE DATA TO COINCIDE IN A POINT OF TIME WITH AUDIO DATA

TECHNICAL FIELD

This application relates to the field of software engineering. More particularly, the present disclosure relates to implementing methods and systems to synchronize unconnected image data and audio data.

BACKGROUND

Since the advent of the Internet, a number of multimedia platforms and software have been developed to support the creation, consumption and distribution of digital content. Many individuals have created presentations, podcasts and digital tutorial sessions using audio and video formats. Nevertheless, individuals producing, transmitting and distributing digital audio content often encounter limitations when including multimedia elements (e.g., images, annotations and hyperlinks) to complement the content existing in digital audio data.

Existing technology supports the inclusion of presentation slides or other kinds of graphical data to coincide in a point of time with audio data and video data to generate synchronized presentations. Nevertheless, the methods and systems employed to process the inclusion of such data produce video data by merging audio and graphical data. As a result, the audio and graphical data become a single file in a video format (e.g., .mp4). If an individual needs to edit parts of that single file, the data that generated the file must be uploaded again. Furthermore, the existing technology does not support the synchronization of audio data and image data during live transmissions occurring on the Internet. For example, an individual hosting a podcast wants to include an image to illustrate what is being presented during the podcast. Such an image cannot be added to coincide in a point of time with the audio playback using the existing technology unless the individual uses a video camera to record a video.

The present disclosure concerns implementing systems and methods for processing image data to coincide in a point of time with audio data, maintaining the integrity of both image and audio data. For example, a tutor teaching online lessons wants to use an image to support the learning of their pupils while narrating the characteristics of such an image. The tutor uploads the supporting image and manipulates the image to be displayed during the online lessons. The tutor chooses to zoom in on the image and display the details, thus supporting the narration describing the characteristics of the image. The tutor generates a timestamp and assigns this timestamp to the image. Consequently, the image will be displayed in a point of time coinciding with the audio timeline playback in a synchronized format. Rather than merging the image with the audio, the present disclosure results in a synchronized format establishing a timed connection between the image and audio data.

In a different scenario, consider a social media influencer who specializes in makeup reviews and tutorials. The social media influencer takes pictures of her face wearing makeup and adds links and descriptions overlaying those pictures by using a number of manipulations existing in the present disclosure. The social media influencer’s audience will be able to save the content existing on the images in the form of bookmarks.

SUMMARY

The present disclosure concerns implementing systems and methods for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image data display to generate a synchronized multimedia presentation using content creator and content player modules. The system comprises a network architecture, a synchronization system and a file system database. The network architecture comprises one or more computing devices and computer networks connected to the synchronization system. The synchronization system comprises the content creator module, the content player module and the file system database. The file system database is configured to store data pertaining to a master timeline, a digital control file and one or more content player user events.

In accordance with an embodiment of the invention, the method comprises: accessing a content creator module of a synchronization system; manipulating one or more audio data to assemble an audio timeline containing a sequential order of one or more audio data; manipulating one or more image data to display transformations made as a result of editing; creating one or more timestamps on the audio timeline; assigning each image data to a timestamp corresponding to a time value of a play duration of the audio timeline; generating a master timeline containing manipulated image data and the audio timeline; generating a digital control file containing records and information pertaining to manipulations of image data and the audio timeline in the master timeline; including meta-elements describing characteristics of a synchronized multimedia presentation; storing the master timeline in the form of a digital control file in a file system database; making a request to the file system database; processing the digital control file containing records and information of a synchronization of image data with an audio timeline; loading a master timeline containing a synchronized multimedia presentation; activating the master timeline playback; and reproducing the synchronized multimedia presentation in the content player module.
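
As one illustrative, non-limiting sketch, the sequence of steps above can be expressed in software. The tuple-based clip list, the function names, and the JSON layout of the digital control file below are assumptions of this sketch, not the claimed format:

```python
import json

def assemble_audio_timeline(clips):
    """Arrange (name, duration) audio clips sequentially; return the
    timeline and the total play duration."""
    timeline, start = [], 0.0
    for name, duration in clips:
        timeline.append({"clip": name, "start": start, "duration": duration})
        start += duration
    return timeline, start

def assign_images(timestamps, images):
    """Assign each image data to a timestamp on the audio timeline."""
    return [{"time": t, "image": img} for t, img in zip(timestamps, images)]

def generate_control_file(audio_timeline, image_events):
    """Serialize the master timeline as a digital control file (JSON text),
    leaving the source audio and image files untouched."""
    master = {"audio_timeline": audio_timeline, "image_events": image_events}
    return json.dumps(master, indent=2)

timeline, total = assemble_audio_timeline([("intro.mp3", 30.0), ("lesson.mp3", 90.0)])
control = generate_control_file(timeline, assign_images([0.0, 45.5],
                                                        ["slide1.png", "slide2.png"]))
```

Because only references and timestamps are serialized, editing a presentation amounts to regenerating this control file rather than re-encoding a merged video, which is the distinction the disclosure draws against merged-file approaches.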

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is an illustration of a computer network, one or more computing devices and a synchronization system in accordance with an embodiment of the invention.

FIG. 2 is an illustration of a synchronization system in accordance with an embodiment of the invention.

FIG. 3 is an illustration of a content creator module system in accordance with an embodiment of the invention.

FIG. 4 is an illustration of a content player module system in accordance with an embodiment of the invention.

FIG. 5 is an illustration of a user interface for a content creator module system in accordance with an embodiment of the invention.

FIG. 6 is an illustration of a user interface for a content player module system in accordance with an embodiment of the invention.

FIGS. 7A-7B (collectively referred to as “FIG.7”) provide a flow diagram of an illustrative method for processing image data to coincide in a point of time with audio data in accordance with an embodiment of the invention.

FIG. 8 provides a flow diagram of an illustrative method for reproducing a master timeline synchronizing image data with audio data in accordance with an embodiment of the invention.

FIG. 9 is an illustration of a master timeline playback illustrating image data processed to coincide in a point of time with an audio timeline in accordance with an embodiment of the invention.

DETAILED DESCRIPTION

It will be readily understood that the components of the embodiments as generally described herein and illustrated in the appended figures could be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of various embodiments, as represented in the figures, is not intended to limit the scope of the present disclosure but is merely representative of various embodiments. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.

The described embodiments are to be considered in all respects only as illustrative and not restrictive. The scope of the invention is, therefore, indicated by the appended claims rather than by this detailed description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Reference throughout this specification to features, advantages, or similar language does not imply that all of the features and advantages that may be realized with the present invention should be or are in any single embodiment. Rather, language referring to the features and advantages is understood to mean that a specific feature, advantage, or characteristic described in connection with an embodiment is included in at least one embodiment. Thus, discussions of the features and advantages, and similar language, throughout this disclosure may, but do not necessarily, refer to the same embodiment.

Furthermore, the described features, advantages, and characteristics of the present disclosure may be combined in any suitable manner in one or more embodiments. One skilled in the relevant art will recognize, in light of the description herein, that the invention can be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the invention.

Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the indicated embodiment is included in at least one embodiment. Thus, the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.

Systems for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image data display to generate a synchronized multimedia presentation using content creator and content player modules comprise a network architecture, a synchronization system and a file system database.

A network architecture 100 for producing, transmitting, receiving and reproducing synchronized multimedia presentations is illustrated in FIG. 1. As used herein, synchronized multimedia presentations refer to one or more image data arranged to coincide in a point of time with parts of audio data containing play duration times. The one or more image data may be photographic image files and computer graphic images in accordance with an embodiment of the invention. The one or more audio data may be files reproducing recorded sounds in accordance with an embodiment of the invention. In an embodiment of the invention, the network architecture 100 comprises one or more computing devices 101 and computer networks 102 connected to a synchronization system 103 as illustrated in FIG. 1. The computing devices 101 may be any electronic device for storing and processing data comprising at least a screen and enabled to connect to computer networks 102. In an embodiment of the invention, the computer network 102 may be the Internet and other networks that connect to the Internet.

As illustrated in FIG. 2, the synchronization system 103 comprises a content creator module 201, a content player module 202 and a file system database 203 in accordance with an embodiment of the invention. The synchronization system 103 is configured to permit the creation, transmission, reception, storage, preview and playback of synchronized multimedia presentations. Each of the content modules 201-202 comprises a processor 204 to execute commands and operations generated in each module 201-202 in accordance with an embodiment of the invention.

Users accessing the synchronization system 103 may be content creator users and content player users in accordance with an embodiment of the invention.

The content creator module 201 operates to permit content creator users using a computing device 101 to create, transmit, receive, save, preview and reproduce synchronized multimedia presentations as illustrated in FIG. 3 in accordance with an embodiment of the invention. A content creator user may create, list and manipulate synchronized multimedia presentations using one or more image data and audio data in accordance with an embodiment of the invention.

As shown in FIG. 3, the content creator module 201 comprises an audio data section 301, an image data section 306, a master timeline section 310, a synchronization section 313 and a file system database 203 in accordance with an embodiment of the invention. In another embodiment of the invention, the content creator module may comprise a live transmission section 318 and an audio to text transcription section 323.

The audio data section 301 comprises an audio data creation 302, an audio data list 303, an audio data manipulation 304 and an audio data timeline creation 305. The audio data section 301 is configured to provide an interface to create, list and manipulate one or more audio data.

The image data section 306 comprises an image data creation 307, an image data list 308 and an image data manipulation 309. The image data section 306 is configured to provide an interface to create, list and manipulate one or more image data.

The master timeline section 310 comprises a timestamp creator 311 and an audio timeline and image data time relationship generator 312. The master timeline section 310 is configured to provide an interface to create timestamps in an audio timeline and process a synchronization of one or more image data with the audio timeline. As used herein, timestamps refer to digital records of time to indicate a play duration time of audio data wherein image data is displayed to coincide in time with a time assigned in an audio timeline.
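
As one non-limiting sketch of the timestamp lookup implied above, the image coinciding with a given playback position can be resolved by a binary search over the sorted timestamps. The (time, image) pair format is an assumption of this sketch:

```python
import bisect

def current_image(timestamped_images, playback_time):
    """timestamped_images: list of (time, image) pairs sorted by time.
    Return the image whose timestamp most recently passed, or None
    if playback has not yet reached the first timestamp."""
    times = [t for t, _ in timestamped_images]
    i = bisect.bisect_right(times, playback_time) - 1
    return timestamped_images[i][1] if i >= 0 else None
```

Using `bisect_right` means an image assigned to timestamp t begins displaying exactly at t, coinciding with that time value of the audio play duration.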

The synchronization section 313 comprises a master timeline information compilation 314, an audio timeline and image data time relationship execution 315, and a digital control file generator 316. The synchronization section 313 is configured to process and execute a digital control file containing commands to execute a synchronization of one or more image data with the audio timeline.
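
The records in the digital control file allow manipulations to be re-applied at playback time without altering the originals, which is how the integrity of each image data is maintained. A minimal sketch of such non-destructive execution, in which the operation names and record fields are assumptions of the sketch:

```python
def apply_manipulations(image, records):
    """Apply recorded manipulations to an in-memory view of the image;
    the source image description is copied, never modified."""
    view = dict(image)  # shallow copy keeps the original intact
    for op in records:
        if op["type"] == "crop":
            view["width"], view["height"] = op["width"], op["height"]
        elif op["type"] == "rotate":
            view["rotation"] = view.get("rotation", 0) + op["degrees"]
        elif op["type"] == "zoom":
            view["scale"] = view.get("scale", 1.0) * op["factor"]
    return view
```

Re-editing a presentation then only changes the records, never the stored media, in contrast to the merged-video approach criticized in the background section.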

In one embodiment of the invention, a content creator user may activate a live transmission 317 of one or more synchronized multimedia presentations as shown in FIG. 3. If the content creator user chooses to activate a live transmission 317, a live transmission section 318 starts to operate. The live transmission section 318 comprises a live transmission activation 319, a transmission delay time 320 and a live transmission operation 321. The live transmission activation 319 is configured to start live transmissions at the same time that the content creator user creates the one or more synchronized multimedia presentations. The transmission delay time 320 is configured to delay the time wherein one or more synchronized multimedia presentations are transmitted to one or more content player users. The live transmission operation 321 is configured to permit live transmissions and reception of one or more synchronized multimedia presentations using a computing device 101 over a computer network 102, allowing playback to start while the rest of the data is still being received.
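
The configurable transmission delay and progressive playback described above could be sketched as a buffered stream in which each chunk becomes playable only after the configured delay has elapsed. The chunk-and-queue model below is an assumption of this sketch, not the claimed transport:

```python
import collections

class DelayedLiveStream:
    """Buffer chunks of a live synchronized multimedia presentation and
    release them after a configured transmission delay, so playback can
    start while later chunks are still being received."""

    def __init__(self, delay_seconds):
        self.delay = delay_seconds
        self.buffer = collections.deque()  # holds (release_time, chunk)

    def transmit(self, chunk, sent_at):
        # Each chunk becomes playable only after the configured delay.
        self.buffer.append((sent_at + self.delay, chunk))

    def receive(self, now):
        """Return every chunk whose delayed release time has passed."""
        ready = []
        while self.buffer and self.buffer[0][0] <= now:
            ready.append(self.buffer.popleft()[1])
        return ready
```

A receiver polling `receive` can begin reproducing the first chunks while the content creator is still transmitting the remainder of the presentation.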

In one embodiment of the invention, a content creator user may activate an audio to text transcription 322 based on the content of an audio timeline as shown in FIG. 3. If the content creator user chooses to activate an audio to text transcription 322, an audio to text transcription section 323 starts to operate. The audio to text transcription section 323 comprises a manual audio to text transcription upload 324, an automated audio to text transcription activation 325, and an audio to text transcription display 326. The manual audio to text transcription upload 324 is configured to permit transferring audio to text transcription files from a computing device 101 to the content creator module 201. The automated audio to text transcription activation 325 is configured to generate automatic audio to text transcriptions of an audio timeline using machine learning techniques generated by an application programming interface. The audio to text transcription display 326 is configured to enable the display of audio to text transcriptions at the same time that a synchronized multimedia presentation is reproduced.
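
The two transcription paths above (manual upload versus automated generation) could be sketched as follows. Here `transcribe_api` is a hypothetical placeholder for whatever speech-to-text application programming interface an implementation would bind to, not a real service:

```python
def get_transcription(audio_timeline_id, manual_file=None, transcribe_api=None):
    """Prefer a manually uploaded transcript; otherwise fall back to an
    automated transcription callable (hypothetical placeholder)."""
    if manual_file is not None:
        return {"source": "manual", "text": manual_file}
    if transcribe_api is not None:
        return {"source": "automated", "text": transcribe_api(audio_timeline_id)}
    raise ValueError("no transcription source configured")
```

Either path yields a transcript that can be stored in the file system database and displayed alongside the master timeline playback.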

As shown in FIG. 4, the content player module 202 comprises a synchronization processing section 401, a playback section 404 and a user interface section 406 in accordance with one embodiment of the invention.

The synchronization processing section 401 comprises a digital control file processor 402 and a master timeline loader 403. The synchronization processing section 401 is configured to retrieve the digital control file from the synchronization section 313 and provide an interface to process the digital control file 402 and load the master timeline 403.

The playback section 404 is configured to reproduce a master timeline playback 405 guiding a synchronized multimedia presentation wherein one or more image data are configured to coincide in a point of time with parts of an audio timeline.

The user interface section 406 comprises a synchronized master timeline playback viewer 407, a content player user events 408 and a log content player user events 409. The synchronized master timeline playback viewer 407 is configured to play, forward, backward, pause, stop, adjust sound volume, and vary playback speed of the master timeline, and to enable content player user events in accordance with an embodiment of the invention.

The content player user events 408 is configured to create, store and distribute one or more content player user events. Content player user events may be to store and distribute preferred parts of a master timeline, create marks on the master timeline, add one or more image data to the master timeline and enable audio to text transcriptions in accordance with an embodiment of the invention. The log content player user events 409 is configured to load and store one or more content player user events in the file system database 203.

An exemplary content creator interface 501 for creating a synchronized multimedia presentation according to an embodiment of the invention is illustrated in FIG. 5. In one embodiment, the content creator interface 501 is provided as a computer program to computing devices. Nevertheless, in other embodiments, the content creator interface 501 may be provided as a webpage by a webpage provider to computing devices accessing a webpage processing the synchronization system 103.

The content creator interface 501 comprises a live transmission module 502, a list of images section 503, an add image data section 505, a manipulation tools section 506, a current image section 507, an add audio data section 508, an audio manipulation tools section 509, an audio timeline assembling section 510, a timestamp section 514, an image timestamp assignment section 517, a master timeline section 518, and an audio to text transcription module 521 in accordance with an embodiment of the invention.

The list of images section 503 displays one or more images 504 that are created, listed and manipulated once the content creator user captures, transfers from a computing device or manipulates the one or more image data using the add image data section 505. The current image section 507 displays an image data being manipulated by the content creator user. The manipulation tools section 506 comprises one or more manipulations a content creator user generates for an image data. The one or more manipulations may be resize, crop, color manipulation, rotate, include layers with graphical elements, tridimensional manipulations, animations, zoom, sharpen, enhance, remove blemish, add tone effect, reverse image, reverse exposure, add, delete, and modify one or more graphical image elements in accordance with an embodiment of the invention. The image timestamp assignment section 517 enables the assignment of timestamps containing the play duration time of audio data wherein image data is displayed to coincide in time with a time assigned in an audio timeline.

In the embodiment shown in FIG.5, the add audio data section 508 comprises creating and/or listing one or more audio data, wherein a content creator user captures, transfers from a computing device and/or manipulates the one or more audio data in accordance with an embodiment of the invention.

The audio manipulation tools section 509 comprises tools to equalize sounds of audio data, add and/or remove parts of audio data, and add and/or remove sound effects of audio data in accordance with an embodiment of the invention. The audio timeline assembling section 510 is configured to list one or more audio data 511 in a sequential order and assemble 512 the one or more audio data 511 into an audio timeline 513.

The timestamp section 514 is configured to add timestamps 515 corresponding to time durations in a playback of the audio timeline 513. The image timestamp assignment section 517 is configured to assign one or more timestamps 515 to each of the image data 507 corresponding to a time duration in a playback of the audio timeline 513. The master timeline section 518 is configured to generate 519 and reproduce a master timeline 520 comprising the time synchronization of image data with audio data in accordance with an embodiment of the invention.

As shown in FIG. 5, the live transmission module 502 enables transmission and reception of the synchronized multimedia presentation using a computing device over a computer network, allowing playback to start while the rest of the data is still being received during a live transmission in accordance with one embodiment of the invention. The audio to text transcription module 521 comprises an area to transfer an audio to text transcription generated manually and an area to enable an audio to text transcription generated automatically. The automatic audio to text transcription is generated by an application programming interface. The content creator interface 501 illustrated in FIG. 5 represents one of many possibilities to arrange the content creator interface. Nevertheless, in other embodiments, the sections represented in the content creator interface 501 may be arranged in different configurations.

In one embodiment of the invention, a synchronized multimedia presentation may be used by a content creator user generating audio data to be available over computer networks to one or more individuals (e.g., podcasts). In another embodiment of the invention, the synchronized multimedia presentations may be used by a content creator teaching electronic lessons to one or more individuals in person and/or over computer networks. In another embodiment of the invention, the synchronized multimedia presentation may be used by a content creator generating content to be distributed over one or more web applications.

A content player module 202 may operate to permit content player users operating a computing device 101 to search synchronized multimedia presentations, navigate synchronized multimedia presentations, add content player user events and reproduce synchronized multimedia presentations in accordance with an embodiment of the invention. In another embodiment of the invention, the content player module 202 is configured to permit content player users to receive live transmissions of synchronized multimedia presentations created by content creator users.

An exemplary content player interface 601 for reproducing a synchronized multimedia presentation according to one embodiment of the invention is illustrated in FIG. 6. In one embodiment, the content player interface 601 is provided as a computer program to computing devices. Nevertheless, in other embodiments, the content player interface may be provided as a webpage by a webpage provider to computing devices accessing the webpage.

As shown in FIG. 6, the content player interface comprises a content player user events area 602, a synchronized multimedia presentation viewer 608, and a synchronized presentation sections area 611.

The content player user events area 602 comprises one or more digital records 603 generated by content player users to mark a master timeline, web applications to distribute content over computer networks 604 to one or more content player users, an audio to text transcription module 605, an add content player user events section 606 and a log content player user events section 607 in accordance with one embodiment of the invention.

The one or more digital records 603 comprise sections in the master timeline wherein the content player user creates marks on one or more preferred sections of a master timeline (e.g., bookmarks). The web applications distributing content over computer networks 604 comprise one or more applications to distribute preferred master timelines and sections in master timelines to other users over a computer network. The audio to text transcription module 605 comprises a section to enable the audio to text transcript to be displayed in a master timeline. The add content player user events section 606 is configured to include and record information created by a content player user for a synchronized multimedia presentation. The information may be annotations, links, meta-elements, image data and audio data in accordance with one embodiment of the invention. The log content player user events section 607 is configured to store records and information pertaining to content player user events generated by a content player user for a synchronized multimedia presentation in accordance with one embodiment of the invention.

The synchronized presentation viewer 608 comprises an image data viewer area 609 and a master timeline playback area 610 in accordance with an embodiment of the invention. The image data viewer area 609 displays one or more synchronized multimedia presentations. In one embodiment of the invention, the one or more synchronized multimedia presentations are reproduced in a live transmission generated by a content creator user.

The master timeline playback area 610 reproduces the synchronized multimedia presentation containing image data and the audio timeline in accordance with an embodiment of the invention. The master timeline playback area 610 comprises play, forward, backward, pause, stop, adjust sound volume, and vary playback speed of the synchronized multimedia presentation.

The synchronized multimedia presentation sections area 611 comprises one or more sections 612 wherein a content player user selects a preferred point in the master timeline to view the synchronized multimedia presentation. The one or more sections 612 may be generated automatically or manually by a content creator user in accordance with one embodiment of the invention. The one or more sections 612 generated automatically are retrieved from the timestamps 515 generated by the content creator user.

The content player interface 601 illustrated in FIG. 6 represents one of many possibilities to arrange the content player interface. Nevertheless, in other embodiments, the sections represented in the content player interface 601 may be arranged in different configurations.

Methods for processing image data to coincide in a point of time with audio data to create a master timeline coordinating audio timeline playback and image data display to generate a synchronized multimedia presentation using content creator and content player modules are now described.

As illustrated in FIGS. 7A-7B, the method 700 begins with a content creator user accessing a synchronization system using a computing device 101 as described in the process 702. In one embodiment of the invention, accessing the synchronization system 103 may occur over a computer network 102.

As illustrated in FIG. 7A, if a content creator user prefers to transmit a synchronized multimedia presentation over a computer network 102 using a live transmission 703:YES, then the method 700 continues with enabling the live transmission 704 in accordance with an embodiment of the invention. The live transmission 703 is configured to transmit and receive one or more synchronized multimedia presentations in a computing device 101 over a computer network 102, allowing playback to start while the rest of the data is still being received during a live transmission. In the process 704, the live transmission may be configured to delay the transmission and reception of synchronized multimedia presentations to one or more content player users in accordance with an embodiment of the invention.

In the process 705, the one or more audio data are manipulated to assemble an audio timeline containing a sequential order of one or more audio data. The one or more audio data are created by using an apparatus to capture sounds and are transferred to the content creator module 201. In one embodiment of the invention, the one or more audio data are listed by transferring the one or more audio data from a computing device 101 to the content creator module 201. In another embodiment of the invention, the one or more audio data are manipulated by equalizing sounds of audio files, adding and/or removing parts of audio files, and adding and/or removing sound effects. In another embodiment of the invention, the one or more audio data are reproduced by playing, forwarding, backwarding, pausing, stopping, adjusting sound volume, and varying playback speed of audio data.

In the process 705, the one or more audio data are assembled to generate an audio timeline 513 comprising the sum of time durations of the one or more audio data in accordance with an embodiment of the invention. The audio timeline 513 comprises a sequential order of one or more audio data wherein the sequential order may be determined by a content creator user in accordance with an embodiment of the invention.
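
The assembly in the process 705 can be illustrated with a short sketch. The snippet below is a minimal, hypothetical illustration and not part of the disclosure (the `AudioClip` type and `assemble_audio_timeline` function are named for convenience): it lays out audio data in a sequential order and computes each clip's start offset, so that the audio timeline's total duration is the sum of the individual play durations.

```python
from dataclasses import dataclass

@dataclass
class AudioClip:
    name: str
    duration: float  # play duration in seconds

def assemble_audio_timeline(clips):
    """Lay out clips in sequential order; return (start_offset, clip_name)
    pairs and the total timeline duration (the sum of clip durations)."""
    timeline, offset = [], 0.0
    for clip in clips:
        timeline.append((offset, clip.name))
        offset += clip.duration
    return timeline, offset

clips = [AudioClip("intro", 12.5), AudioClip("lesson", 90.0), AudioClip("outro", 7.5)]
timeline, total = assemble_audio_timeline(clips)
# "lesson" starts at offset 12.5; the audio timeline lasts 110.0 seconds in total
```

The sequential order here is simply the order of the input list, matching the disclosure's statement that the sequential order may be determined by the content creator user.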

In the process 706, one or more image data are manipulated to display transformations made as a result of editing. The one or more image data are created using an apparatus to capture images and transfer to the content creator module 201 in accordance with one embodiment of the invention. The one or more image data are listed by transferring the one or more image data from a computing device 101 to the content creator module 201. The one or more image data are manipulated using one or more manipulation tools 506. The one or more manipulation tools comprise resizing, cropping, color manipulating, rotating, including layers with graphical elements, tridimensional manipulations, animations, zooming, sharpening, enhancing, removing blemish, adding tone effect, reversing image, reversing exposure, adding, deleting, and modifying one or more graphical image elements in accordance with one embodiment of the invention. In one embodiment of the invention, the one or more image data transferred to the content creator module 201 may be previewed by a content creator user.

In the process 707, one or more timestamps are generated for an audio timeline to be assigned to each of the image data listed in the content creator module. The audio timestamps 515 represent the current play time for each of the image data listed in a list of image data section 503. Following the process 708, each of the image data listed in the content creator module is assigned a timestamp 515 recording a play time wherein each image is displayed to coincide in time with the audio timeline 513. Then, in the process 709, each image timestamp is connected to the master timeline, thus generating an audio data and image data time relationship.
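
The processes 707 through 709 amount to pairing each listed image with a timestamp on the audio timeline. A minimal sketch, assuming each image data receives exactly one timestamp (the `build_master_timeline` name is illustrative, not from the disclosure):

```python
def build_master_timeline(image_ids, timestamps):
    """Pair each image with the timestamp (a play-time offset on the audio
    timeline) at which it is displayed, yielding the master timeline."""
    if len(image_ids) != len(timestamps):
        raise ValueError("each image data needs exactly one timestamp")
    # Ordering by timestamp gives the display sequence on the master timeline.
    return sorted(zip(timestamps, image_ids))

master = build_master_timeline(["img-1", "img-2", "img-3"], [0.0, 12.5, 102.5])
# master == [(0.0, "img-1"), (12.5, "img-2"), (102.5, "img-3")]
```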

In the process 710, a digital control file is generated from the master timeline containing the image data and the audio timeline. The digital control file contains records or information concerning one or more manipulations processed in audio data and image data, wherein the characteristic integrity of the audio data and image data is maintained. The digital control file may use any text file standard; some embodiments may use HTML, XML and JSON. The digital control file comprises a text file standard recording a time relationship between the audio timeline and image data.
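
As a concrete illustration of the process 710, the sketch below serializes such a time relationship as JSON, one of the text file standards the disclosure names. The field names (`audio_timeline`, `images`, `timestamp`) are assumptions for illustration only, not a format defined by the disclosure:

```python
import json

def generate_digital_control_file(audio_timeline, master_timeline):
    """Record the time relationship between the audio timeline and the
    image data as a JSON text file (field names are illustrative)."""
    control = {
        "audio_timeline": [
            {"clip": name, "start": start} for start, name in audio_timeline
        ],
        "images": [
            {"image": image_id, "timestamp": ts} for ts, image_id in master_timeline
        ],
    }
    return json.dumps(control, indent=2)

doc = generate_digital_control_file(
    [(0.0, "intro.mp3"), (12.5, "lesson.mp3")],
    [(0.0, "img-1"), (12.5, "img-2")],
)
```

Because the control file only records offsets and identifiers, the audio data and image data themselves are left untouched, consistent with maintaining their characteristic integrity.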

In the process 711, one or more meta-elements describing the characteristics of a synchronized multimedia presentation may be added, permitting synchronized multimedia presentations to be discoverable by search engines in accordance with one embodiment of the invention. The one or more meta-elements, comprising records and information stored in the digital control file and the audio to text transcription, may be text files, meta-tags, titles, author and date.

As illustrated in FIG. 7B, if a content creator user prefers to generate audio to text transcriptions 712:YES, then the method 700 continues with enabling the audio to text transcriptions 713 in accordance with an embodiment of the invention. Automatic and manual audio to text transcripts describing the content in an audio timeline may be enabled in accordance with one embodiment of the invention. The manual audio to text transcription comprises a transcription generated by a human operator (e.g., content creator user) and transferred from a computing device 101 to the content creator module 201. The automated audio to text transcription comprises a transcription generated by an application programming interface using machine learning techniques in accordance with an embodiment of the invention. The manual and automated audio to text transcriptions comprising a text-type file may be recorded in a different file than the digital control file storing the master timeline synchronizing one or more image data with an audio timeline 513. The text-type file generated for audio to text transcriptions is stored in the file system database 203.
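
The choice between manual and automated transcription in the process 713 can be sketched as a simple fallback. This is a hypothetical illustration; `transcribe_api` stands in for whatever application programming interface an embodiment uses:

```python
def get_transcription(manual_text, transcribe_api):
    """Prefer a manually uploaded transcription; otherwise fall back to an
    automated transcription produced by the given API callable."""
    if manual_text is not None:
        return {"source": "manual", "text": manual_text}
    return {"source": "automated", "text": transcribe_api()}

# A manual upload takes precedence over the automated path:
result = get_transcription("Hello, and welcome.", lambda: "machine transcript")
# result["source"] == "manual"
```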

In the process 714, the digital control file, which contains the commands to execute the time relationship between the image data and the audio timeline and thus comprises the master timeline, is stored in a file system database 203.

As shown in FIG. 8, a method 800 for reproducing a master timeline 520 to reproduce a synchronized multimedia presentation containing image data coinciding in time with an audio timeline is described in accordance with an embodiment of the invention. In the process 801, the content player module is accessed by a content player user. Then, a request to the file system database to retrieve the digital control file is generated in 802.

The content player module 202 processes the digital control file in 803 to execute the time relationship between image data and the audio timeline. Then, the master timeline is loaded to proceed with reproducing a synchronized multimedia presentation in 804. In the process 805, the master timeline playback is activated.
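
A sketch of the processing in 803 through 805 follows, assuming the digital control file is a JSON document listing image/timestamp pairs (an assumption; the disclosure permits any text file standard, such as HTML, XML or JSON). The lookup returns the image whose timestamp is the latest one not after the current playback time:

```python
import bisect
import json

def load_master_timeline(control_file_text):
    """Parse a digital control file (JSON assumed here) into a sorted list
    of (timestamp, image) pairs forming the master timeline."""
    control = json.loads(control_file_text)
    return sorted((e["timestamp"], e["image"]) for e in control["images"])

def image_at(master_timeline, playback_time):
    """Return the image to display at the given playback time: the entry
    with the latest timestamp that is not after playback_time."""
    times = [ts for ts, _ in master_timeline]
    i = bisect.bisect_right(times, playback_time) - 1
    return master_timeline[i][1] if i >= 0 else None

control_text = (
    '{"images": [{"image": "img-1", "timestamp": 0.0},'
    ' {"image": "img-2", "timestamp": 12.5}]}'
)
tl = load_master_timeline(control_text)
# image_at(tl, 5.0) -> "img-1"; image_at(tl, 15.0) -> "img-2"
```

A player loop driven by the audio clock can call `image_at` on each tick, so the displayed image always coincides in a point of time with the audio timeline.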

As illustrated in FIG. 8, if a content player user prefers audio to text transcriptions to be displayed during the time the master timeline playback is being reproduced 806:YES, then the method 800 continues with displaying the audio to text transcriptions.

The content player module 202 may enable the exhibition of meta-elements, record content player user events and distribute content player user events to one or more content player users in accordance with an embodiment of the invention.

In the process 808, the master timeline is reproduced, and the synchronized multimedia presentation is viewed by the content player user.

As illustrated in FIG. 9, an embodiment of a master timeline 901 with one or more image data processed to coincide in a point of time with the audio timeline is disclosed herein. The audio timeline 908 reproduces a sequential order of audio data during the time that listed, created and/or manipulated image data are displayed. In some embodiments, an image without manipulations 902 may be displayed as the first image in the point of time synchronized with the timestamps 515 generated for the audio timeline 513. In the time following, a manipulated version of the first image may be displayed in the point of time synchronized with the timestamps generated for the audio timeline, wherein a section of the first image displays a close-up 903 in which one or more characteristics of the first image are viewed in detail. A second image without manipulations 904 may be displayed in the point of time synchronized with the timestamps 515 generated for the audio timeline 513. One or more annotations 905 may overlay the second image, wherein some sections are made visually prominent using hand drawings. A third image without manipulations 906 may be displayed in the point of time synchronized with the timestamps generated for the audio timeline. In the time following, a manipulated version of the third image may be displayed in the point of time synchronized with the timestamps generated for the audio timeline, wherein one or more hyperlinks 907 overlay the third image, allowing a content player user to press a button or touch a computing device screen to access the content of the one or more hyperlinks 907.

Although the processes of the methods herein are shown and described in a particular order, the order of the processes of the methods may be altered so that certain processes may be performed in an inverse order or so that certain processes may be performed, at least in part, concurrently with other processes. In another embodiment, instructions or sub-processes of distinct processes may be implemented in an intermittent and/or alternating manner.

Although the present disclosure has been illustrated and described with respect to one or more implementations, equivalent alterations and modifications will occur to others skilled in the art upon the reading and understanding of the present disclosure and the annexed drawings. Furthermore, while a particular feature of the present disclosure may have been disclosed with respect to one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described embodiments. Rather, the scope of the present disclosure should be defined in accordance with the following claims and their equivalents.