


Title:
INTEGRATED RENDERING OF PARALLEL MEDIA
Document Type and Number:
WIPO Patent Application WO/2018/122786
Kind Code:
A1
Abstract:
A client-server system (1) for uploading contents (2) and generating an integrated view (3) of the contents (2), the client-server system (1) comprising a client device (4) and a server device (5), the client device (4) comprising a selection of input units (6) adapted to receive more than one piece of content (2), and a client processor (7), the server device (5) comprising a server processor (8), wherein the client processor (7) and the server processor (8) cooperate to receive the contents (2) from the input units (6), to process the contents (2) by integrating the contents (2), and to generate the integrated view (3) of the contents (2) in real-time, and to render the integrated view (3) of the contents (2) on a display unit (9) or an audio unit (10) of the client device (4), or combination thereof.

Inventors:
SHENOY RAJESH (IN)
Application Number:
PCT/IB2017/058504
Publication Date:
July 05, 2018
Filing Date:
December 29, 2017
Assignee:
SHENOY RAJESH (IN)
VARGHESE VINU (IN)
V C PRASANTH (IN)
International Classes:
G06F15/16; G06F9/44; G11B27/00; H04N21/00
Foreign References:
US8433993B22013-04-30
US20130167028A12013-06-27
Attorney, Agent or Firm:
SINGHAL, GAURAV (IN)
Claims:
We claim:

1. A client-server system (1) for uploading contents (2) and generating an integrated view (3) of the contents (2), the client-server system (1) comprising a client device (4) and a server device (5), the client device (4) comprising a selection of input units (6) adapted to receive more than one piece of content (2), and a client processor (7), the server device (5) comprising a server processor (8), wherein the client processor (7) and the server processor (8) cooperate:

- to receive the contents (2) from the input units (6);

- to process the contents (2) by integrating the contents (2), and to generate the integrated view (3) of the contents (2) in real-time; and

- to render the integrated view (3) of the contents (2) on a display unit (9) or an audio unit (10) of the client device (4), or combination thereof.

2. The client-server system (1) according to claim 1, wherein the processors (7, 8) cooperate to process the contents (2) to integrate the contents (2) either in parallel to each other, or in sequence one after another, or combination thereof.

3. The client-server system (1) according to any of the claims 1 or 2, wherein the contents (2) are either categorized as visual contents (11), or categorized as audio contents (12), or combination thereof, wherein the visual content (11) is defined as the content (2) which is adapted to be rendered using a visual display unit of the client device (4), and the audio content (12) is defined as the content (2) which is adapted to be rendered using a sound unit of the client device (4).

4. The client-server system (1) according to the claim 3, wherein if one of the contents (2) is categorized as both the visual content (11) and the audio content (12), the processors (7, 8) are adapted to disable integrating of another content (2) categorized as audio content (12) in parallel to the content (2) categorized as both the visual content (11) and the audio content (12).

5. The client-server system (1) according to any of the claims 3 or 4, wherein if two or more contents (2) categorized as visual contents (11) are to be integrated in parallel, then:

- the integrated view (3) is generated by dividing the view (3) into more than one logical section (13), and each of the at least two contents (2) categorized as visual contents (11) is displayed in a respective logical section (13), or

- the integrated view (3) is generated by generating at least two windows (14), wherein at least one of the windows (14) is placed overlapping onto another window (14), and the visual contents (11) are displayed in the windows (14), or

- the integrated view (3) is generated by embedding at least one of the visual contents (11) in a predefined location (15) within one or more display frames (16) of another visual content (11), or combination thereof.

6. The client-server system (1) according to any of the claims 3 to 5, wherein the visual content (11) is further categorized as a static content (17), a self-playing dynamic content (18), or a user-driven dynamic content (19), wherein the static content (17) is defined as the content which has a single frame, the self-playing dynamic content (18) is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention, and the user-driven dynamic content (19) is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input units (6).

7. The client-server system (1) according to the claim 6, wherein the processors (7, 8) are adapted to receive a duration input (20) related to duration of display of the content (2) categorized as static content (17) or user-driven dynamic content (19) from one of the input units (6), and adapted to process the content (17, 19) to keep the duration of the content (17, 19) categorized as static content (17) or user-driven dynamic content (19) in the integrated view based on the duration input (20).

8. The client-server system (1) according to any of the claims 3 to 7, wherein the client device (4) comprises a user control (21) to receive a control input (22), wherein on receiving the control input (22), the processors (7, 8) are adapted to:

- stop, start, pause, or resume rendering of the self-playing dynamic content (18), or

- change size of the logical section (13) of the integrated view (3), or

- change placement and/or size of the window (14) of the integrated view (3).

9. The client-server system (1) according to any of the claims 1 to 8 comprising a memory unit (23) adapted to receive the integrated view (3) of the contents (2).

10. The client-server system (1) according to any of the claims 1 to 9, wherein at least one of the input units (6) is an audio recorder (24), a video recorder (25), an image capturing device (26), or combination thereof, for recording one or more contents in real-time.

11. The client-server system (1) according to any of the claims 1 to 10 comprising a memory unit (23) adapted to receive the integrated view (3) of the contents (2).

12. The client-server system (1) according to any of the claims 1 to 11, wherein at least one of the input units (6) is an audio recorder (24), a video recorder (25), an image capturing device (26), or combination thereof, for recording one or more contents in real-time.

13. A client device (4) for facilitating generation of an integrated view (3) of more than one piece of content (2), the client device (4) comprising:

- an input unit (6) adapted to receive more than one piece of content (2) and a content alignment input (27) related to aligning the contents (2) to each other;

- a client processor (7) adapted to receive the contents (2) from the input unit (6) and the content alignment input (27), and to send the contents (2) and the content alignment input (27) to a server processor (8) of a server device (5), wherein the server processor (8) is adapted to process the contents (2) by integrating the contents (2) and to generate the integrated view (3) of the contents (2) in real-time.

14. The client device (4) according to the claim 13, wherein the content alignment input (27) relates to aligning the contents (2) to each other either in parallel to each other, or in sequence one after another, or combination thereof.

15. The client device (4) according to the claim 14, wherein if the content alignment input (27) is for aligning the contents (2) in parallel, and the contents (2) are visual, then the content alignment input (27) relates to:

- dividing the integrated view (3) into multiple logical sections (13), wherein each of the visual contents (11) is displayed in a respective logical section (13), or

- generating multiple windows (14), wherein at least one of the windows (14) is placed overlapping onto another window (14), and the visual contents (11) are displayed in the windows (14), or

- embedding at least one of the visual contents (11) in a predefined location (15) within one or more display frames (16) of another visual content (11), or combination thereof.

16. The client device (4) according to any of the claims 13 to 15, wherein the input unit (6) is adapted to receive a duration input (20) related to duration of at least one of the contents (2) in the integrated view (3), and the server processor (8) is adapted to receive the duration input (20) via the client processor (7), and to process the contents (2) to keep the duration of the one or more contents (2) based on the duration input (20).

17. The client device (4) according to any of the claims 13 to 16, wherein at least one of the input units (6) is an audio recorder (24), a video recorder (25), an image capturing device (26), or combination thereof, for recording one or more contents in real-time.

18. The client device (4) according to any of the claims 13 to 17 comprising:

- a display unit (9) adapted to receive and render a visual part (28) of the integrated view (3), or

- an audio unit (10) adapted to receive and render an audio part (29) of the integrated view (3), or combination thereof.

19. The client device (4) according to the claim 18, comprising a user control (21) to receive a control input (22), wherein on receiving the control input (22), the processors (7, 8) are adapted to:

- stop, start, pause, or resume rendering of the self-playing dynamic content (18), or

- change size of the logical section (13) of the integrated view (3), or

- change placement and/or size of the window (14) of the integrated view (3).

20. A computer program product stored on a non-transitory device, the computer program product adapted to be executed on one or more processors (7, 8) placed in a client-server environment and on execution adapted to enable the processors (7, 8):

- to receive the contents (2) from the input units (6);

- to process the contents (2) by integrating the contents (2), and to generate the integrated view (3) of the contents (2) in real-time; and

- to render the integrated view (3) of the contents (2) on a display unit (9) or an audio unit (10) of the client device (4), or combination thereof.

Description:
Title of Invention

Integrated rendering of parallel media

Field of Invention

The invention relates to rendering contents. More specifically, the invention relates to integrating contents and rendering the contents in an integrated fashion.

Background

As humans, we constantly create content while carrying out various activities in life, many times by capturing audio or video of our day-to-day activities, of the environment around us, of our special occasions, etc. However, the captured contents are often not up to the mark or up to our requirements. We need to edit them to match our requirements, and more often, we need to merge/integrate them into a format that properly represents our requirements and feelings. Editing and mixing contents are tedious tasks which require specific skills with particular tools or software. Another issue is that such software may be good at editing or mixing contents of a similar type, like just audio, just video, or just images; however, these tools may be incapable of handling different types of contents together.

In addition, in the current fast-moving world, created contents are shared across the world in real time, almost instantaneously; there is almost no time left to test the integration of the contents, and they are required to be integrated without delay, as they are being created, and in a qualitative fashion, so that they can be rendered almost in real time, as they are being created.

Object of the Invention

It is an object of the invention to provide real-time integration of various types of contents and rendering of the integrated contents.

Summary of the Invention

The object of the invention is achieved by a client-server system for uploading contents and generating an integrated view of the contents, the client-server system comprising a client device and a server device, the client device comprising a selection of input units adapted to receive more than one piece of content, and a client processor, the server device comprising a server processor. The client processor and the server processor cooperate to receive the contents from the input units, to process the contents by integrating the contents and to generate the integrated view of the contents in real-time, and to render the integrated view of the contents on a display unit or an audio unit of the client device, or combination thereof. This embodiment is helpful as it provides for real-time integration of the contents which are received from the user in real time, and makes the integrated view available for rendering either in real-time or for later retrieval.

According to one embodiment of the client-server system, the processors cooperate to process the contents to integrate the contents either in parallel to each other, or in sequence one after another, or combination thereof. This embodiment is helpful, as it enables integration of the contents as desired: either in parallel, or one after another, or some contents in parallel to each other and some contents in sequence one after another.

According to another embodiment of the client-server system, the contents are either categorized as visual contents, or categorized as audio contents, or combination thereof, wherein the visual content is defined as the content which is rendered using the display unit of the client device, and the audio content is defined as the content which is rendered using the audio unit of the client device. This embodiment is helpful, as it categorizes contents as audio and visual, so that the processors can process the contents based on the categorization. Also, such categorization is helpful to predefine the requirement of a visual display unit or an audio unit based on the categorization.

According to yet another embodiment of the client-server system, if one of the contents is categorized as both the visual content and the audio content, the processors are adapted to disable integrating of another content categorized as audio content in parallel to the content categorized as both the visual content and the audio content. This embodiment is helpful, as it helps in avoiding overlapping of audio. Overlapping of audio results in fusion, and the individuality of the audio contents is lost; hence, such overlapping is disabled by the processors in this embodiment.

According to a further embodiment of the client-server system, if two or more contents categorized as visual contents are to be integrated in parallel, then the integrated view is generated by dividing the view into multiple logical sections, and each of the contents categorized as visual contents is displayed in one of the logical sections, or the integrated view is generated by generating multiple windows, wherein at least one of the windows is placed overlapping onto another window, and the visual contents are displayed in the windows, or the integrated view is generated by embedding at least one of the visual contents in a predefined location within one or more display frames of another visual content, or combination thereof. This embodiment is helpful as it provides a mechanism to integrate the visual contents in parallel, so that the individuality of each of the visual contents is maintained, and they can be integrated in the most comprehensive fashion.

According to one embodiment of the client-server system, the visual content is further categorized as a static content, a self-playing dynamic content, or a user-driven dynamic content, wherein the static content is defined as the content which has a single frame, the self-playing dynamic content is defined as the content which has multiple frames and which is displayed frame by frame automatically, without human intervention, and the user-driven dynamic content is defined as the content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input units. This embodiment is helpful, as categorizing the visual contents in this fashion allows the processors to process the contents as per their types while generating the integrated view. Such categorization may even be required, if the processor has certain predefined rules for each of these content types. Also, based on such predefined categorization, the processor may request certain inputs relevant for a particular category of content, when the user is uploading that particular category of content.

According to another embodiment of the client-server system, the processor is adapted to receive a duration input related to the duration of display of the content categorized as static content or user-driven dynamic content from one of the input units, and adapted to process the content to keep the duration of the content categorized as static content or user-driven dynamic content in the integrated view based on the duration input. This embodiment allows the processors to assign a duration to contents which do not have an inherent duration, so that the integrated view can be duration bound.

According to a further embodiment of the client- server system, wherein the client device comprises a user control to receive a control input, wherein on receiving the control input, the processors are adapted to stop, start, pause, or resume rendering of the self-playing dynamic content, or change size of the logical sections of the integrated view, or change placement and/or size of the windows of the integrated view. This embodiment is helpful, as it provides certain control to a viewer of the integrated view to customize or control the viewing/hearing experience as per his requirements.

According to yet another embodiment of the client-server system, the client-server system comprising a memory unit adapted to receive the integrated view of the contents. This embodiment is helpful, as it provides an option for rendering the integrated view later on whenever it is required.

According to another further embodiment of the client-server system, at least one of the input units is an audio recorder, a video recorder, an image capturing device, or combination thereof, for recording one or more contents in real-time. This facilitates real-time recording of the contents, and uploading the same in real-time for generating the integrated view in real-time.

Brief Description of Drawings

Fig. 1 illustrates a schematic diagram of a client-server system, according to an embodiment of the invention.

Fig. 2 shows a client device with options for uploading of the contents for generating an integrated view of the contents.

Fig. 3 shows placement of the visual contents into different logical sections, according to an embodiment of the invention.

Fig. 4 shows placement of the visual contents in overlapping fashion, according to an embodiment of the invention.

Fig. 5 shows embedding of one of the visual contents at a predefined location in another visual content, according to an embodiment of the invention.

Detailed Description:

The best and other modes for carrying out the present invention are presented in terms of the embodiments, herein depicted in Drawings provided. The embodiments are described herein for illustrative purposes and are subject to many variations. It is understood that various omissions and substitutions of equivalents are contemplated as circumstances may suggest or render expedient, but are intended to cover the application or implementation without departing from the spirit or scope of the present invention. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.

The terms "a" and "an" herein do not denote a limitation of quantity, but rather denote the presence of at least one of the referenced item.

The invention focuses on providing a way of integrating different forms of media in real time, giving a user the opportunity to create a thoughtful integration of contents of his own using various possible formats in real time, so that the contents so integrated can be shared with anyone quickly and in real time. This is really helpful in a social media framework environment, where the user wants to share his feelings and contents quickly with other users on the social media. However, in the current scenario, either the user has to use various editing tools to do so, which means the content integration cannot be real-time, or the user has to use limited means which allow only one format of contents to be posted at once. Hence, the focus of the invention is to enable the user to integrate contents in real time to form an integrated view, so that he can share the integrated view with anyone almost in real-time.

Fig. 1 shows a schematic diagram of a client-server system 1 which is placed in a social media framework. The client-server system 1 includes a client device 4 and a server device 5 which are connected to each other over a data network allowing exchange of data. It is to be noted that more than one client device 4 can be connected to the server device 5 at one time. The client device 4 includes a client processor 7 and a selection of input units 6, which are either connected to the client device 4 through an external data connection, physical or wireless, or are part of the client device 4 itself. The input units 6 can be a video recorder 25, an audio recorder 24, an image capturing device 26, a keyboard, a mouse, a touch screen, a gesture input unit, etc. The input units 6 are adapted to receive various kinds of inputs related to the social media framework, like loading of contents 2, fetching of contents 2, a duration for which a content 2 should run after integration, alignment of the contents 2 with respect to each other after integration, controlling the integrated view 3 of the contents, etc.

The server device 5 includes a server processor 8, which receives from the client device 4 the contents 2 that were received from the input units 6 by the client processor 7. Further, the server processor 8 processes the contents 2 by integrating them and generates the integrated view 3 of the contents 2, where the contents 2 are aligned parallel to each other or in sequence to each other. It should be noted that all the contents need not be in parallel or in sequence to each other; rather, at a given point of time only some of the contents 2 from the received set may be in parallel. The alignment is also with respect to the time frame in which the contents are to be placed and for how much duration.
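The parallel/sequential alignment described above can be illustrated with a minimal sketch. The `Content` data class, its field names, and the `overlaps` helper are hypothetical, introduced here only for illustration and not taken from the application:

```python
from dataclasses import dataclass

@dataclass
class Content:
    """A piece of content with its placement on the integrated timeline (illustrative)."""
    name: str
    start: float      # seconds from the start of the integrated view
    duration: float   # how long the content stays in the view

def overlaps(a: Content, b: Content) -> bool:
    """Two contents are 'in parallel' when their time ranges intersect."""
    return a.start < b.start + b.duration and b.start < a.start + a.duration

# An image shown for 5 s in parallel with the first half of a 10 s video,
# followed by a second clip in sequence after the video ends.
video = Content("video", start=0, duration=10)
image = Content("image", start=0, duration=5)
clip2 = Content("clip2", start=10, duration=4)

parallel = overlaps(video, image)        # rendered side by side
sequential = not overlaps(video, clip2)  # rendered one after another
```

This mirrors the point made above: whether two contents are in parallel or in sequence is decided per time frame, not globally for the whole set.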

Once the integrated view 3 is generated by the server processor 8, the server processor 8 stores the integrated view 3 in a memory unit 23. The integrated view 3 is also available, at the same time it is generated, to be sent to the client device 4 which has uploaded the contents 2 for integration. The integrated view 3 can also be made available to any other client device 4 for viewing at a later point of time. In such a case, when any of the client devices 4 requests the integrated view 3 via their client processor 7, the server processor 8 accesses the memory unit 23, retrieves the integrated view 3, and sends the same to the client device 4, which further renders it onto a display unit 9 of the client device 4. In case the integrated view 3 also has an audio part 29, the same shall be rendered on an audio unit 10. There may be a possibility that the integrated view has only a visual part 28 or only an audio part 29, and accordingly only the display unit 9 or the audio unit 10 is used for rendering the integrated view 3.

The contents 2 may be categorized as visual contents 11 or audio contents 12. Some contents can be categorized as both visual content 11 and audio content 12, such as a video. This kind of categorization is helpful for the integration of the contents, as the server processor 8 may use it for optimal and comprehensive integration of the contents 2 for producing a comprehensive integrated view 3 of the contents 2.
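The store-and-retrieve behaviour of the memory unit 23 can be sketched roughly as follows. The class and method names (`MemoryUnit`, `publish`, `fetch`) are illustrative assumptions, not terms from the application:

```python
class MemoryUnit:
    """Minimal in-memory store for generated integrated views (illustrative)."""
    def __init__(self):
        self._views = {}

    def save(self, view_id: str, view: bytes) -> None:
        self._views[view_id] = view

    def load(self, view_id: str) -> bytes:
        return self._views[view_id]

class ServerProcessor:
    """Stores a freshly generated view and serves later retrieval requests."""
    def __init__(self, memory: MemoryUnit):
        self.memory = memory

    def publish(self, view_id: str, view: bytes) -> bytes:
        self.memory.save(view_id, view)   # persisted for later retrieval
        return view                       # also available immediately, in real time

    def fetch(self, view_id: str) -> bytes:
        return self.memory.load(view_id)  # any client device can request it later

memory = MemoryUnit()
server = ServerProcessor(memory)
live = server.publish("view-1", b"<integrated view>")
later = server.fetch("view-1")
```

The same generated view is thus returned both to the uploading client in real time and to any other client requesting it afterwards.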

In one embodiment, if one of the contents 2 is categorized as both the visual content 11 and the audio content 12, the server processor 8 disables integrating of another content 2 categorized as audio content 12 in parallel to the content 2 categorized as both the visual content 11 and the audio content 12. For audio, dividing the audio unit 10 to provide rendering of audio contents 12 in parallel is not possible; hence, to increase the comprehensiveness of the integrated view 3, such a restriction on the contents 2 is desired. In one embodiment, the contents 2 are not categorized as audio content 12 or visual content 11, and the server processor 8 just uses its prerogative to process the contents in the best possible way to provide a comprehensive integrated view 3.
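The restriction on parallel audio can be expressed as a small check. The category sets and the `can_add_in_parallel` helper are assumptions for illustration only:

```python
def can_add_in_parallel(existing: set[str], new: set[str]) -> bool:
    """Disallow adding an audio-bearing content in parallel with one that
    already carries audio (e.g. a video, which is both visual and audio)."""
    return not ("audio" in existing and "audio" in new)

video = {"visual", "audio"}   # categorized as both visual and audio content
song = {"audio"}
image = {"visual"}

assert not can_add_in_parallel(video, song)   # audio would overlap: disabled
assert can_add_in_parallel(video, image)      # purely visual: allowed
```

The check implements the rationale above: overlapping audio streams fuse and lose their individuality, so only purely visual contents may join an audio-bearing content in parallel.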

For integrating the visual contents 11 in parallel, the server processor 8 can use various mechanisms, like dividing the view 3 into logical sections 13 and rendering the visual contents 11 in each of these logical sections 13, or placing the visual contents 11 in overlapping windows 14 in the view 3, or embedding a visual content 11 at a predefined location 15 within one or more frames of another visual content 11. The integrated view 3 can have contents 11 represented through all three mechanisms, i.e., logical sections, windows and embedding, even at any one particular time frame of rendering of the integrated view 3. In one embodiment, the server processor 8 can use any other way for parallel rendering of the visual contents 11.
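Two of the parallel-layout mechanisms (logical sections and overlapping windows; embedding at a predefined location 15 works similarly to the inset window) might be sketched like this. The geometry helpers and the chosen sizes are illustrative assumptions, not something the application specifies:

```python
def layout_sections(width: int, height: int, n: int) -> list[tuple[int, int, int, int]]:
    """Divide the view into n equal vertical logical sections, as (x, y, w, h)."""
    w = width // n
    return [(i * w, 0, w, height) for i in range(n)]

def layout_overlay(width: int, height: int) -> tuple[tuple, tuple]:
    """A full-screen window with a smaller window overlapping one corner."""
    main = (0, 0, width, height)
    inset = (width * 3 // 4, height * 3 // 4, width // 4, height // 4)
    return main, inset

sections = layout_sections(1200, 800, 3)   # three equal side-by-side sections
main, inset = layout_overlay(1200, 800)    # inset sits in the bottom-right corner
```

Both layouts can coexist in a single time frame of the integrated view, as noted above.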

The visual contents 11 are further categorized as static content 17, self-playing dynamic content 18, or user-driven dynamic content 19. The static content 17 is defined as content which has a single frame while being presented in the integrated view 3; a good example of static content is an image. The self-playing dynamic content 18 is defined as content which has multiple frames and which is displayed frame by frame automatically, without human intervention; examples of self-playing dynamic contents 18 are video and the GIF file format. The user-driven dynamic content 19 is defined as content which changes a currently displayed part of the content completely or partially based on a user input received from one of the input units 6; some examples of user-driven dynamic contents are a web page, a Word document, a PDF, or any other scrollable or clickable file format. In one embodiment, such categorization of visual contents 11 is not provided, and the server processor 8 uses its own prerogative to process the visual contents 11 so as to best suit creating a comprehensive integrated view 3.
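The three visual categories could, for instance, be inferred from file extensions. The mapping below is a hypothetical heuristic following the examples given above, not something the application specifies:

```python
def categorize(filename: str) -> str:
    """Classify a visual content by its file extension (illustrative mapping only)."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext in {"jpg", "jpeg", "png", "bmp"}:
        return "static"        # single frame, e.g. an image
    if ext in {"mp4", "webm", "gif"}:
        return "self-playing"  # plays frame by frame without human intervention
    if ext in {"html", "pdf", "docx"}:
        return "user-driven"   # changes on scroll/click input from an input unit
    return "unknown"

assert categorize("photo.PNG") == "static"
assert categorize("clip.mp4") == "self-playing"
assert categorize("page.html") == "user-driven"
```

Such a pre-classification step would let the server processor apply category-specific rules, e.g. requesting a duration input only for static and user-driven contents.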

When a user is providing the contents 2 to be integrated, the user also provides a duration input 20 to the input unit 6, related to the duration of display of the contents 2 which are categorized as static content 17 or user-driven dynamic content 19. The server processor 8 receives the duration input 20 along with the contents 17, 19 and processes them so as to keep the duration of the contents 17, 19 in the integrated view as per the duration input 20. In one embodiment, the client processor 7 is not enabled to receive the duration input 20, and the server processor 8 uses certain rules, or any of its prerogatives, to determine the duration of each of the contents 2 within the integrated view.

The client device 4 also includes a user control 21, and the client processor 7 receives a control input 22 from the user control 21. The client processor 7 processes the control input 22 and performs control on the self-playing dynamic contents 18, on the logical sections 13 or the windows 14 of the integrated view 3, or all of them. Based on the control input 22, the client processor 7 can stop, start, pause, or resume rendering of the self-playing dynamic content 18. The client processor 7 can change the size of one or more logical sections 13 based on the control input 22. Also, the placement or size of the windows 14 can be changed via the control input 22. In one embodiment, the user control 21 is not provided, and the user just views or hears the integrated view 3 without any control.

The client processor 7 also receives an alignment input 27 from one of the input units 6, defining how the contents 2 are to be aligned to each other. Based on the alignment input 27, which is received by the server processor 8 from the client processor 7, the server processor 8 aligns the contents 2 in parallel or in sequence to each other, in a particular way: by embedding, or by using windows or logical sections for parallel alignment, etc. In one embodiment, the alignment input 27 is not provided, and the server processor 8 itself, based on certain rules or any other prerogatives, aligns the contents 2.
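The effect of the control input 22 on a self-playing dynamic content 18 and on window geometry can be sketched as follows. The `Player` class, its state names, and the window dictionary are illustrative assumptions:

```python
class Player:
    """Applies viewer control inputs to a self-playing dynamic content and
    to the geometry of one window of the integrated view (sketch only)."""
    def __init__(self):
        self.state = "stopped"
        self.window = {"x": 0, "y": 0, "w": 320, "h": 240}

    def control(self, command: str) -> None:
        # stop / start / pause / resume rendering of the self-playing content
        if command in ("start", "resume"):
            self.state = "playing"
        elif command == "pause":
            self.state = "paused"
        elif command == "stop":
            self.state = "stopped"

    def resize(self, w: int, h: int) -> None:
        # change the size of the window based on the control input
        self.window.update(w=w, h=h)

p = Player()
p.control("start")
p.control("pause")
p.resize(640, 480)
```

This reflects the controls listed above: playback state changes for self-playing content, and size or placement changes for windows and logical sections.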

Fig. 2 shows an exemplary embodiment of a client device 4 with options 109, 110, 111 for uploading of the contents for generating an integrated view of the contents. The client device 4 has a timeline 112 shown for uploading of the contents. The timeline has segments 101, 102, 103, 104 which represent various contents already loaded, while segment 105 refers to the content currently being loaded. A cursor 106 is also shown which represents the total duration of the contents already loaded on the timeline 112. By looking at the cursor 106, the remaining duration of the contents to be loaded can also be identified. It is to be noted that as more contents are loaded, the length of the timeline 112 keeps increasing, and if any of the contents are deleted, the timeline 112 shortens. For uploading the contents, the user of the client device 4 clicks on the "+" sign 107 on the screen, which is representative of the user's intention to load contents. Once the "+" sign 107 is clicked, an option box 108 is opened which has options for loading the contents. The options are for loading emojis 109, a media file 110 from the memory of the client device 4, or a weblink 111 from the World Wide Web. It is to be noted that live recording of the contents can also be enabled in the option box 108 for recording audio, video or image. This is one implementation of uploading the contents for integration; however, other mechanisms can also be used.
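The behaviour of the timeline 112 and cursor 106 described above (growing as contents load, shrinking on deletion) reduces to summing segment durations. The durations below are made-up example values:

```python
def timeline_length(segments: list[float]) -> float:
    """Total length of the upload timeline, in seconds: the sum of the
    durations of all loaded segments. It grows as contents are added
    and shrinks when a content is deleted."""
    return sum(segments)

segments = [4.0, 6.5, 3.0, 2.5]        # contents already uploaded
cursor = timeline_length(segments)      # cursor position: 16.0 s
segments.append(5.0)                    # a new content is being uploaded
assert timeline_length(segments) == 21.0
del segments[0]                         # deleting a content shortens the timeline
assert timeline_length(segments) == 17.0
```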

Fig. 3 shows placement of the visual contents 201, 202, 203 into different logical sections 204, 205, 206, according to an embodiment of the invention. For parallel rendering of the visual contents, the server processor divides the integrated view 3 into three logical sections 204, 205, 206, and each of the three visual contents 201, 202, 203 is rendered in parallel in these three logical sections 204, 205, 206.

Fig. 4 shows placement of the visual contents 301, 302, 303 in overlapping fashion, according to an embodiment of the invention. For parallel rendering of the contents, one of the contents 301 is rendered on the full screen of the integrated view, and for rendering the contents 302 and 303 in parallel to the content 301, the server processor provides two windows 304, 305, and the contents 302 and 303 are rendered in these two windows 304, 305 respectively. As per either the system's prerogative or the user's alignment input, the windows 304 and 305 overlap, such that window 305 overlaps onto one of the corners of window 304. The windows 304, 305 can be of different sizes.

Fig. 5 shows embedding of one of the visual contents 402 at a predefined location 15 in another visual content 401, according to an embodiment of the invention. This is another way of parallel rendering of the contents 401, 402. The server processor embeds one of the visual contents 402, to be rendered in parallel, onto another content 401, in a frame 16 of the content 401 at a predefined location 15.

While specific language has been used to describe the invention, any limitations arising on account of the same are not intended. As would be apparent to a person skilled in the art, various working modifications may be made to implement the inventive concept as taught herein. The figures and the foregoing description give examples of embodiments. Those skilled in the art will appreciate that one or more of the described elements may well be combined into a single functional element. Alternatively, certain elements may be split into multiple functional elements. Elements from one embodiment may be added to another embodiment. For example, the order of processes described herein may be changed and is not limited to the manner described herein. Moreover, the actions of any flow diagram need not be implemented in the order shown; nor do all of the acts necessarily need to be performed. Also, those acts that are not dependent on other acts may be performed in parallel with the other acts. The scope of embodiments is by no means limited by these specific examples.

List of reference numerals

1 client server system

2 contents

3 integrated view

4 client device

5 server device

6 input unit

7 client processor

8 server processor

9 display unit

10 audio unit

11 visual content

12 audio content

13 logical section

14 window

15 predefined location

16 frame

17 static content

18 self-playing dynamic content

19 user-driven dynamic content

20 duration input

21 user control

22 control input

23 memory unit

24 audio recorder

25 video recorder

26 image capturing device

27 content alignment input

28 visual part

29 audio part

101, 102, 103, 104 segments of contents already uploaded

105 segment of content being uploaded

106 cursor

107 input for showing option box to upload contents

108 option box

109 emojis

110 media files

111 weblinks

112 timeline

201, 202, 203 three contents to be rendered in parallel in logical sections

204, 205, 206 three logical sections for rendering the contents in parallel

301 one of the content on the full screen of the integrated view

302, 303 two contents to be rendered in parallel in windows

304, 305 two of the windows for parallel rendering of contents

401 content embedding another content

402 embedded content