

Title:
DEVICES, SYSTEMS AND METHODOLOGIES CONFIGURED TO ENABLE GENERATION, CAPTURE, PROCESSING, AND/OR MANAGEMENT OF DIGITAL MEDIA DATA
Document Type and Number:
WIPO Patent Application WO/2018/201195
Kind Code:
A1
Abstract:
The invention relates to devices, systems, and methods configured to enable generation, capture, processing, and/or management of digital media data. A device (100) according to one embodiment includes a media capture module (101) and a user interface control module (103) configured to cause display, on the device (100), of a user interface that displays video data provided by the media capture module (101). The displayed video data is selectively captured by operation of a capture control module (104), which causes storing of video data in a media storage module (105).

Inventors:
DE SALDANHA DEON CHRISTOPHER (AU)
LEONG SAO IAN (AU)
RISTOV LUPCO (AU)
DIMESKI JOHN (AU)
Application Number:
PCT/AU2018/050407
Publication Date:
November 08, 2018
Filing Date:
May 04, 2018
Assignee:
5I CORPORATION PTY LTD (AU)
International Classes:
G06F3/00; G06T7/30; G11B27/02; H04H60/33; H04N21/234; H04N21/8549
Foreign References:
US20120324491A12012-12-20
US20160189752A12016-06-30
US20140125835A12014-05-08
US20160191722A12016-06-30
Attorney, Agent or Firm:
SHELSTON IP PTY LTD (AU)
Claims:
CLAIMS

1. A device configured to perform automated media capture, the device including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes; and

a capture trigger module that is configured to identify a defined trigger condition being satisfied via the secondary measurement data and, in response, trigger storage in the storage module of media data generated by the capture module.

2. A device according to claim 1 wherein the physically observable attributes are representative of a subject's physical and/or psychological condition.

3. A device according to claim 1 or claim 2 including an output that provides a physical alert output in response to observation of the defined trigger condition.

4. A device according to any one of the preceding claims wherein there is a plurality of defined trigger conditions.

5. A device according to any one of the preceding claims including a buffer module that is configured to maintain a buffer of media data generated via the capture module, wherein triggering storage in response to the trigger condition includes causing storage of a portion of buffered media preceding the trigger condition and a portion of media following the trigger condition.

6. A system configured to perform automated media processing, the system including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes;

a secondary measurement management module that is configured to associate with stored media data from a time T0 to a time Tx a stream of secondary measurement derived data synchronously associated with the time T0 to the time Tx; and

a media processing module that is configured to perform operations on the media from within the time T0 to the time Tx based on the stream of secondary measurement data.

7. A system according to claim 6 wherein the operations include a limited playback operation whereby playback is limited substantially by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

8. A system according to claim 6 or claim 7 wherein the operations include a showreel generation operation whereby a showreel media data set is compiled by trimming and merging of a plurality of portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

9. A system according to any one of claims 6 to 8 wherein the operations include a trimming operation whereby one or more media files are generated by trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

10. A system according to any one of claims 6 to 9 wherein the operations include a content generation operation whereby a media file is generated by: trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics within a first range; trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics within a second range; and generating media including the trimmed portions in sequence, with the trimmed portions from secondary measurement data having defined characteristics within the first range at normal speed and trimmed portions from secondary measurement data having defined characteristics within the second range in slow motion.

11. A system according to any one of claims 6 to 10 wherein the secondary measurement data is derived from one or more sensors that measure physical attributes of an actor involved in a scene of which the media between T0 and Tx is representative.

12. A system according to any one of claims 6 to 11 wherein the secondary measurement data is derived from one or more sensors that measure physical attributes of a plurality of human subjects individually.

13. A system according to any one of claims 6 to 12 wherein the secondary measurement data is derived from one or more sensors that measure physical attributes of a plurality of human subjects collectively.

14. A system according to any one of claims 6 to 13 wherein the secondary measurement data is representative of physiological and/or psychological conditions of one or more subjects.

15. A system configured to perform automated media processing, the system including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes;

a secondary measurement management module that is configured to associate with stored media data from a time T0 to a time Tx a stream of secondary measurement derived data synchronously associated with the time T0 to the time Tx; and

a secondary measurement data embedding module that is configured to embed the stream of secondary measurement data into a media data file stored in the storage module representative of time T0 to Tx, wherein the embedding is persistent thereby to be unaffected by one or more post-production editing operations.

16. A system according to claim 15 wherein the post-production editing operations include trimming and merging of clip segments, and exporting of a new media file.

17. A system according to claim 15 or claim 16 wherein the embedding includes subliminal embedding.

18. A system according to any one of claims 15 to 17 including encoding the secondary measurement data as metadata in a multimedia container associated with the stored media data.

19. A system according to any one of claims 15 to 18 wherein the secondary measurement data is representative of physiological and/or psychological conditions of one or more subjects.

20. A method configured to enable streamlined editing of media data, the method including:

causing rendering, on a digital display, of a stream of media data;

providing a physical input configured to allow user generation of a trigger event during the rendering of the stream of media data;

in response to user generation of a trigger event at a time Tx, defining trigger event data, wherein the trigger event data is representative of a clip segment defined relative to a media timeline of the stream of media data from T(X-Y) to Tx.

21. A method according to claim 20 wherein Y represents a period of time.

22. A method according to claim 21 including providing physical input that enables user defining of Y.

23. A method according to claim 22 wherein a graphical user interface object that enables defining of Y is presented with user prompting for a predefined period following interaction with the physical input configured to allow user generation of a trigger event during the rendering of the stream of media data.

24. A method according to any one of claims 20 to 23 including causing automated generation of a showreel based on a plurality of trigger events defined for the stream of media data.

25. A method according to any one of claims 20 to 24 including causing filtered playback of the stream of media data, wherein the filtered playback shows substantially only clip segments associated with user defined trigger events.

26. A method according to claim 25 wherein, during the filtered playback, a start time object is provided thereby to enable real-time user defining of clip start times via a single click interaction, thereby to facilitate further trimming of a clip segment from T(X-Y) to Tx to T(X-Y+Z) to Tx, wherein Z represents elapsed time between T(X-Y) and a point in time identified by the single click interaction.

27. A method according to claim 26 wherein the point in time identified by the single click interaction precedes the point in time of the interaction by a defined period.

Description:
DEVICES, SYSTEMS AND METHODOLOGIES CONFIGURED TO ENABLE GENERATION, CAPTURE, PROCESSING, AND/OR MANAGEMENT OF DIGITAL MEDIA DATA

FIELD OF THE INVENTION

[0001] The present invention relates to digital media capture and processing, and for example is embodied in technology such as devices, systems and methodologies configured to generate, capture, process, and/or manage digital media data.

BACKGROUND

[0002] Any discussion of the background art throughout the specification should in no way be considered as an admission that such art is widely known or forms part of common general knowledge in the field.

[0003] Media content creation commences with a media capture phase, whereby media (for example video and/or audio) is captured using a physical capture device (for example a digital camera) and stored to a recording medium (for example computer memory). This stored media is referred to as "captured" media, and the process of operating hardware to cause the operation of the physical capture device and storage functions is referred to as "capturing".

[0004] Once media is captured, post-capture editing is ideally required to remove unengaging content and highlight the content that the creator wants to showcase. For example, this conventionally includes operating a computer program to access a discrete file ("clip") of captured media data, trim that clip (for example by removing unwanted portions of media from a media timeline) to define one or more trimmed clip portions, and merge clip portions in a desired order (including clip portions from that trimmed clip and/or from other trimmed clips defined from other discrete media files).

[0005] If captured media content is not edited before sharing, or even archiving for later editing and/or viewing, then a viewer (or other media consumer) is often left bored by many unengaging preludes leading up to the various segments of interest. The result is something far less impactful than the creator would like, and in the case of content that is socially or publicly shared, it may also result in lower ratings, fewer social media likes and shares, and, as applicable, less affiliate advertising, subscription fee, video rental or box office income.

[0006] Editing of raw content via known editing technologies predominately involves labour intensive operations, requiring a user to view or otherwise navigate through extended periods of captured footage when identifying precise portions that are of interest (and trim the media to define clip portions accordingly). An ongoing technical problem in the context of digital media processing is reducing the extent of manual interaction required, thereby to enable a computer to perform automated processes which increase efficiency in the media post-capture editing process.

SUMMARY OF THE INVENTION

[0007] According to a first aspect of the invention, there is provided a device configured to perform automated media capture, the device including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes; and

a capture trigger module that is configured to identify a defined trigger condition being satisfied via the secondary measurement data and, in response, trigger storage in the storage module of media data generated by the capture module.

[0008] Preferably, the physically observable attributes are representative of a subject's physical and/or psychological condition.

[0009] The device preferably includes an output that provides a physical alert output in response to observation of the defined trigger condition. Preferably, there is a plurality of defined trigger conditions.

[0010] Preferably, the device includes a buffer module that is configured to maintain a buffer of media data generated via the capture module, wherein triggering storage in response to the trigger condition includes causing storage of a portion of buffered media preceding the trigger condition and a portion of media following the trigger condition.

[0011] According to a second aspect of the invention, there is provided a system configured to perform automated media processing, the system including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes;

a secondary measurement management module that is configured to associate with stored media data from a time T0 to a time Tx a stream of secondary measurement derived data synchronously associated with the time T0 to the time Tx; and

a media processing module that is configured to perform operations on the media from within the time T0 to the time Tx based on the stream of secondary measurement data.

[0012] Preferably, the operations include a limited playback operation whereby playback is limited substantially by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

[0013] Preferably, the operations include a showreel generation operation whereby a showreel media data set is compiled by trimming and merging of a plurality of portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

[0014] Preferably, the operations include a trimming operation whereby one or more media files are generated by trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics.

[0015] Preferably, the operations include a content generation operation whereby a media file is generated by: trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics within a first range; trimming the media between T0 and Tx by reference to one or more portions of the media data between T0 and Tx associated with secondary measurement data having defined characteristics within a second range; and generating media including the trimmed portions in sequence, with the trimmed portions from secondary measurement data having defined characteristics within the first range at normal speed and trimmed portions from secondary measurement data having defined characteristics within the second range in slow motion.

[0016] Preferably, the secondary measurement data is derived from one or more sensors that measure physical attributes of an actor involved in a scene of which the media between T0 and Tx is representative.

[0017] Preferably, the secondary measurement data is derived from one or more sensors that measure physical attributes of a plurality of human subjects individually.

[0018] Preferably, the secondary measurement data is derived from one or more sensors that measure physical attributes of a plurality of human subjects collectively.

[0019] Preferably, the secondary measurement data is representative of physiological and/or psychological conditions of one or more subjects.

[0020] According to a third aspect of the invention, there is provided a system configured to perform automated media processing, the system including:

an input configured to receive media data from a capture module, thereby to enable capture of media data;

a storage module configured to store media data received via the input;

a secondary measurement data input module that is configured to receive, from one or more sensor devices, real time data representative of physically observable attributes;

a secondary measurement management module that is configured to associate with stored media data from a time T0 to a time Tx a stream of secondary measurement derived data synchronously associated with the time T0 to the time Tx; and

a secondary measurement data embedding module that is configured to embed the stream of secondary measurement data into a media data file stored in the storage module representative of time T0 to Tx, wherein the embedding is persistent thereby to be unaffected by one or more post-production editing operations.

[0021] Preferably, the post-production editing operations include trimming and merging of clip segments, and exporting of a new media file.

[0022] Preferably, the embedding includes subliminal embedding.

[0023] Preferably, the system includes encoding the secondary measurement data as metadata in a multimedia container associated with the stored media data.

[0024] Preferably, the secondary measurement data is representative of physiological and/or psychological conditions of one or more subjects.

[0025] According to another aspect of the invention, there is provided a method configured to enable streamlined editing of media data, the method including:

causing rendering, on a digital display, of a stream of media data;

providing a physical input configured to allow user generation of a trigger event during the rendering of the stream of media data;

in response to user generation of a trigger event at a time Tx, defining trigger event data, wherein the trigger event data is representative of a clip segment defined relative to a media timeline of the stream of media data from T(X-Y) to Tx.

[0026] Preferably, Y represents a period of time. Preferably, the method includes providing physical input that enables user defining of Y. Preferably, a graphical user interface object that enables defining of Y is presented with user prompting for a predefined period following interaction with the physical input configured to allow user generation of a trigger event during the rendering of the stream of media data.

[0027] Preferably, the method includes causing automated generation of a showreel based on a plurality of trigger events defined for the stream of media data.

[0028] Preferably, the method includes causing filtered playback of the stream of media data, wherein the filtered playback shows substantially only clip segments associated with user defined trigger events. Preferably, during the filtered playback, a start time object is provided thereby to enable real-time user defining of clip start times via a single click interaction, thereby to facilitate further trimming of a clip segment from T(X-Y) to Tx to T(X-Y+Z) to Tx, wherein Z represents elapsed time between T(X-Y) and a point in time identified by the single click interaction. Preferably, the point in time identified by the single click interaction precedes the point in time of the interaction by a defined period.

[0029] Embodiments of the invention include devices and frameworks described herein (and aspects/elements thereof), methods described herein (and aspects/elements thereof) and computer program products and/or non-transitory carrier medium for carrying computer executable code that, when executed on a processor, causes the processor to perform a method as described herein.

[0030] Reference throughout this specification to "one embodiment", "some embodiments" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases "in one embodiment", "in some embodiments" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment, but may. Furthermore, the particular features, structures or characteristics may be combined in any suitable manner, as would be apparent to one of ordinary skill in the art from this disclosure, in one or more embodiments.

[0031] As used herein, unless otherwise specified the use of the ordinal adjectives "first", "second", "third", etc., to describe a common object, merely indicate that different instances of like objects are being referred to, and are not intended to imply that the objects so described must be in a given sequence, either temporally, spatially, in ranking, or in any other manner.

[0032] In the claims below and the description herein, any one of the terms comprising, comprised of or which comprises is an open term that means including at least the elements/features that follow, but not excluding others. Thus, the term comprising, when used in the claims, should not be interpreted as being limitative to the means or elements or steps listed thereafter. For example, the scope of the expression a device comprising A and B should not be limited to devices consisting only of elements A and B. Any one of the terms including or which includes or that includes as used herein is also an open term that also means including at least the elements/features that follow the term, but not excluding others. Thus, including is synonymous with and means comprising.

[0033] As used herein, the term "exemplary" is used in the sense of providing examples, as opposed to indicating quality. That is, an "exemplary embodiment" is an embodiment provided as an example, as opposed to necessarily being an embodiment of exemplary quality.

BRIEF DESCRIPTION OF THE DRAWINGS

[0034] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings in which:

[0035] FIG. 1A and FIG. 1B illustrate devices according to embodiments.

[0036] FIG. 2A and FIG. 2B illustrate example screen displays.

[0037] FIG. 3 illustrates concepts underlying an example embodiment.

[0038] FIG. 4 illustrates concepts underlying an example embodiment.

DETAILED DESCRIPTION

[0039] The present invention relates to digital media capture and processing, and for example is embodied in technology such as devices, systems and methodologies configured to process digital media data. The term "media" as used herein should be afforded a broad definition, to include for example video data, audio data, and video data having an embedded audio data track. The video data may include conventional 2D video, 3D video, 360-degree video, and other forms of video.

Contextual Overview to Embodiments

[0040] Technology embodying various aspects of the invention is described below by reference to systems, devices, and modules.

[0041] The term "module" refers to a software component that is logically separable (a computer program), or a hardware component. The module of the embodiment refers to not only a module in the computer program but also a module in a hardware configuration. The discussion of the embodiment also serves as the discussion of computer programs for causing the modules to function (including a program that causes a computer to execute each step, a program that causes the computer to function as means, and a program that causes the computer to implement each function), and as the discussion of a system and a method. For convenience of explanation, the phrases "stores information," "causes information to be stored," and other phrases equivalent thereto are used. If the embodiment is a computer program, these phrases are intended to express "causes a memory device to store information" or "controls a memory device to cause the memory device to store information." The modules may correspond to the functions in a one-to-one correspondence. In a software implementation, one module may form one program or multiple modules may form one program. One module may form multiple programs. Multiple modules may be executed by a single computer. A single module may be executed by multiple computers in a distributed environment or a parallel environment. One module may include another module. In the discussion that follows, the term "connection" refers to not only a physical connection but also a logical connection (such as an exchange of data, instructions, and data reference relationship). The term "predetermined" means that something is decided in advance of a process of interest. The term "predetermined" is thus intended to refer to something that is decided in advance of a process of interest in the embodiment. Even after a process in the embodiment has started, the term "predetermined" refers to something that is decided in advance of a process of interest depending on a condition or a status of the embodiment at the present point of time or depending on a condition or status heretofore continuing down to the present point of time. If "predetermined values" are plural, the predetermined values may be different from each other, or two or more of the predetermined values (including all the values) may be equal to each other. A statement that "if A, B is to be performed" is intended to mean "that it is determined whether something is A, and that if something is determined as A, an action B is to be carried out". The statement becomes meaningless if the determination as to whether something is A is not performed.

[0042] The term "system" refers to an arrangement where multiple computers, hardware configurations, and devices are interconnected via a communication network (including a one-to-one communication connection). The term "system", and the term "device", also refer to an arrangement that includes a single computer, a hardware configuration, and a device. The system does not include a social system that is a social "arrangement" formulated by humans.

[0043] At each process performed by a module, or at one of the processes performed by a module, information as a process target is read from a memory device, the information is then processed, and the process results are written onto the memory device. A description related to the reading of the information from the memory device prior to the process and the writing of the processed information onto the memory device subsequent to the process may be omitted as appropriate. The memory devices may include a hard disk, a random-access memory (RAM), an external storage medium, a memory device connected via a communication network, and a ledger within a CPU (Central Processing Unit).

Automated Content Trimming based on Intuitive Bookmarking of Media Content via User Interface

[0044] FIG. 1A illustrates a device 100 according to one embodiment (which may, for example, be a smartphone or other device having a screen and onboard camera). Device 100 includes a media capture module 101 (in this case being defined by a digital camera module and microphone module). A user interface control module 103 is configured to cause display, on a digital display provided by device 100, of a user interface that displays video data provided by media capture module. The displayed video data is selectively captured by operation of a capture control module 104, which causes storing of video data in a media storage module 105. Media storage module 105 is configured to store media data in one or more media data files (which in this example include synchronous audio and video data tracks), the files having associated metadata (for example capture time, location, and so on).

[0045] An example screen display 200 caused to be displayed on the digital display of device 100 by module 103 is illustrated in FIG. 2A, which in this example is a touchscreen display. The example screen display includes: a video display object 201, a bookmark trigger object 202, and a bookmark parameter control object 203. In the example of FIG. 2A, there are a plurality of bookmark parameter control object icons 203a-203c. Each bookmark parameter control object icon 203a-203c is associated with a respective bookmark parameter, as discussed below. In an alternate embodiment bookmark parameter control object 203 is configured via alternate user interface means to enable user selection of a desired bookmark parameter (for example a scroll device, dropdown menu, and so on).

[0046] In the present embodiment, a user views live or stored video data (live video data is video data that is currently being provided via module 101, but which has not necessarily been stored to module 105; stored video data is video data read from module 105 or from another source of video data, which may be a local source or a remote network access source, including a streamed source). In this regard, control module 103 is configured to operate in a "live" mode or a "review" mode.

[0047] A user selectively interacts with bookmark trigger object 202 (for example via a touch command in the case of a touch screen device or via another form of input) thereby to define a bookmark event. That bookmark event is associated with a time reference defined relative to a media timeline for the video being viewed. The bookmark trigger object is additionally associated with a bookmark parameter, which is defined as a default value or based on user interaction with bookmark parameter control object 203 (in the example of FIG. 2A, this is achieved via a touch interaction with one of bookmark parameter control object icons 203a-203c).

[0048] In an example embodiment, control module 103 is responsive to the pressing of trigger object 202 to cause providing of a visual prompt to interact with bookmark parameter control object 203 (for example by causing colouration or flashing of icons 203a-203c), thereby to prompt a user to set a bookmark parameter for a bookmark event triggered by the most recent interaction with trigger object 202. In some embodiments, if no bookmark parameter is set via object 203 within a threshold time window, a default value is defined.

[0049] In the present embodiment, the bookmark parameter is a clip time defined in seconds. For example, icons 203a-203c are optionally representative of parameter values of 15 seconds, 30 seconds and 60 seconds respectively. These times define a clip length, which is defined from a period of time corresponding to the selected bookmark parameter, terminating at the bookmark event's associated time reference defined relative to the media timeline for the video being viewed. So, for example, if a user presses the bookmark trigger object at a time of 14:56 on the media timeline, and selects a parameter of 30 seconds, a clip start time of 14:26 is defined.
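By way of illustration only, the clip-bound arithmetic described above can be sketched as follows (a minimal sketch; the function and variable names are hypothetical and not part of the described device):

```python
# Hypothetical sketch of deriving clip bounds from a bookmark event and its
# bookmark parameter (clip length), matching the 14:56 / 30 seconds example.

def clip_bounds(bookmark_time_s: float, clip_length_s: float) -> tuple:
    """Return (start, end) of the clip on the media timeline, clamped at 0."""
    start = max(0.0, bookmark_time_s - clip_length_s)
    return start, bookmark_time_s

# 14:56 on the timeline is 896 seconds; a 30 second parameter gives a
# clip start of 866 seconds, i.e. 14:26.
assert clip_bounds(896.0, 30.0) == (866.0, 896.0)
```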

[0050] Module 104 is configured to record data representative of bookmark events via module 105. This may include metadata approaches, and/or media modification based approaches. In relation to the latter, one embodiment causes insertion of a blank media frame at the bookmark event time (and/or at a time preceding the bookmark event time by a period defined by the bookmark parameter). Other methods, including ultrasound or infrasound bookmarking of the audio stream, may also be employed.
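The metadata approach could, for instance, be sketched as a sidecar file written alongside the media file (an illustrative sketch only; the file layout and names are assumptions, not defined by this specification):

```python
# Hypothetical sidecar-metadata sketch for recording bookmark events.
import json
from dataclasses import dataclass, asdict

@dataclass
class BookmarkEvent:
    time_s: float         # time reference on the media timeline
    clip_length_s: float  # bookmark parameter (e.g. 15, 30 or 60 seconds)

def save_bookmarks(media_path: str, events: list) -> None:
    """Write bookmark events to a JSON sidecar next to the media file."""
    with open(media_path + ".bookmarks.json", "w") as f:
        json.dump([asdict(e) for e in events], f, indent=2)
```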

[0051] Bookmarks are used thereby to enable trimming of captured media. For example, by defining a bookmark event, device 100 optionally provides functionality that allows instant sharing of a clip defined by that bookmark event (for instance via a social media platform).

[0052] Bookmarks are also optionally used as a means to streamline finer media trimming in post editing. In one embodiment, control module 103 is configured to cause limited playback of media data having predefined bookmarks such that only content preceding a bookmark event by a time period defined by the bookmark parameter is shown. For example, assume there are five bookmark events having bookmark parameters of 60 seconds defined in captured video spanning 3 hours. Control module 103 is configured to cause playback of that captured video defined by a linked progression of five 60-second clips as defined by the bookmark events. Control module 103 also causes on-screen rendering of a start selection object (not shown), which allows a user to define an exact start time for a clip (and hence override the parameter set via object 203). In some cases control module 103 sets the start time a predefined period preceding an observed user touch interaction with the start selection object, thereby to allow for delay in user response time (to prevent having to rewind again). This predefined period is in some embodiments user customisable. Module 104 is operable to record the refined start time associated with a given bookmark event via module 105. An export module 106 is configured to enable exporting of media data based on the bookmark events, for example by:

• Enabling playback and/or searching of bookmark events separated from the remainder of media data;

• Exporting of individual video clips defined by respective bookmark events; and

• Exporting a showreel defined by a sequential merging of the plurality of bookmark events for a given captured stream of media data.
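A showreel export of the kind listed above might, purely as a sketch, derive clip segments from the bookmark events and merge any overlapping segments before sequential playback or export (the names and data shapes here are illustrative assumptions):

```python
def showreel_segments(bookmarks):
    """bookmarks: list of (bookmark_time_s, clip_length_s) pairs.
    Returns merged (start, end) segments in timeline order."""
    segments = sorted((max(0.0, t - length), t) for t, length in bookmarks)
    merged = []
    for start, end in segments:
        if merged and start <= merged[-1][1]:   # overlaps the previous clip
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged

# Two overlapping 60 s bookmarks collapse into one clip; a distant third
# bookmark becomes its own clip: [(240, 340), (7140, 7200)]
print(showreel_segments([(300, 60), (340, 60), (7200, 60)]))
```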

[0053] In an alternate embodiment, when the bookmark trigger object is touched, a section of the video is saved according to the default settings. An animation is played to illustrate that a clip is saved and this animation is minimised into on-screen showreel-clip icons. Each time the bookmark button is touched, a clip is saved, and a showreel-clip icon is added to the screen, showing a showreel queue. The actual bookmark point is also saved as a metadata pointer instead of real trimming of the video. This is configured thereby to allow fast tracing during editing. To refine trimming after all the capture is done, when an object is touched on the reel, the square magnifies to preview the clip. At the same time, the whole preview area becomes an invisible scroll bar for shortening and lengthening the trimming selection. To shorten, scroll across the screen and release the finger. To lengthen, drag the finger near to the left edge of the screen and the video will be extended accordingly. After the desired moment is included in the selection, release the finger to confirm the selection. Generally, only an adjustment of the starting point of the video clip is needed because the key moment endpoint has already been defined by the bookmark. This method allows a user to conduct live editing decisions immediately post-event, rather than having to remember later, and to output the video with clips fully trimmed and auto stitched together. All the metadata pointer bookmarks are passed into a processing unit along with the video resource identifiers. The unit takes the start and stop pointers and edits the video accordingly, preferably through multi-threading background process scheduling. The user is notified to play back upon completion of the video output.

[0054] FIG. 2B illustrates additional example screenshots for further embodiments, including textual context.

[0055] FIG. 3 illustrates example media timelines, showing how use of bookmarks and bookmark parameters enable automated generation of a showreel and automated shortening of viewing times (by skipping between highlights in a media stream using the bookmarks).

Automated Content Processing and Analysis based on Secondary Measurement Stream(s)

[0056] Some embodiments relate to technology whereby secondary measurement data is used to facilitate management of media data. This optionally includes either or both of:

(i) Capture control. A device is configured to capture (i.e. persistently store on a memory module) media data in response to a trigger caused by analysis of the secondary measurement data. For example, a media capture module is operated in an active state whereby it is ready to commence capture (or performs ongoing capture in a temporary buffer, such that in response to a capture trigger a historical portion of video data is able to be stored from the buffer).

(ii) Stored data analysis. A device is configured to access stored captured media data from a memory module, and perform automated operations (for example trimming, extraction, and/or clip merging operations) in respect of the accessed captured media data based on analysis of the secondary measurement data.

[0057] In embodiments described below, the secondary measurement data is representative of physiological and/or psychological measurements, derived from one or more physical sensor units. The sensor units optionally include one or more non-intrusive sensor units configured for measuring physiological / psychological response levels, and/or rates of change of those levels, such as, for example:

• Heart rate

• Breathing rates

• Blood pressure

• Pupil Dilation

• Sweat level

• Skin Temperature

• Skin colour

• Hair follicle tightness (goosebumps)

• Blood Oxygen level / blood colour

• Skin conductivity

• Skin capacitance

• Iridological measurements

• Chemical measurements

• Ultrasound measurements

• Electromagnetic spectrum based measurements (X-rays, infrared)

• Radiation from breaking down of isotopes (detection of iodine isotopes decay)

• Facial Expressions (see https://www.virool.com/eiq)

• Pheromone excretion levels

• Hormone levels

• Other blood chemical and blood compound levels

• Verbal response of viewer (tone, loudness, sound frequency etc.), for example as discussed at https://www.informatik.uni-augsburg.de/lehrstuehle/hcm/projects/tools/emovoice/

• Other indicators of arousal, etc.

[0058] Intrusive sensors are optionally used, for example to measure (with relatively fast response times) attributes such as:

• Blood glucose levels

• Hormone levels (Adrenaline, Dopamine, Endorphins)

• Blood CO2 levels

• Blood oxygen levels

• Blood pressure

• Blood flow measured in volume

• Other blood chemical and blood compound levels

[0059] This allows for the automated capture and/or highlighting of audio and/or video content based on what the user sees and/or hears, and the user's dynamic interest level, and/or dynamic emotional state and/or physiological state and/or psychological state at the time.

[0060] FIG. 1B illustrates a device 110 according to one embodiment configured for capture control. Device 110 includes a media capture module 111 (in this case being defined by a digital camera module and microphone module). A user interface control module 113 is configured to cause display, on a digital display provided by device 110, of a user interface that displays video data provided by the media capture module. The displayed video data is selectively captured by operation of a capture control module 114, which causes storing of video data in a media storage module 115. Media storage module 115 is configured to store media data in one or more media data files (which in this example include synchronous audio and video data tracks), the files having associated metadata (for example capture time, location, and so on).

[0061] Secondary capture input modules 1 16 are configured to receive input data from one or more sensors that provide data such as that described above, and/or from a feed of pre-processed data derived from one or more sensors that provide data such as that described above. This data is optionally received indirectly via one or more intermediary computing systems over a network, or directly. The former is relevant where multiple subjects are monitored, for example actors (or other participants) in a live scene, or live- viewing audience members. In such cases, secondary measurement data is optionally recorded by a plurality of user devices, timestamped at those devices, and communicated to a server device for aggregation and processing thereby to provide aggregated secondary measurement data used for the purpose of analysis.
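Server-side aggregation of this kind could be sketched as follows (an illustrative sketch; the sample format, bucket size and function names are assumptions rather than part of the specification):

```python
# Hypothetical aggregation of timestamped secondary measurement samples
# received from multiple subject devices, bucketed to a common clock.
from collections import defaultdict

def aggregate_samples(samples, bucket_s: float = 1.0):
    """samples: iterable of (timestamp_s, device_id, value) tuples, already
    timestamped at the capturing devices. Returns {bucket_start: mean value}."""
    buckets = defaultdict(list)
    for ts, _device_id, value in samples:
        buckets[ts - (ts % bucket_s)].append(value)
    return {t: sum(vs) / len(vs) for t, vs in sorted(buckets.items())}
```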

[0062] Secondary measurement processing modules 117 are configured to process the data received via modules 116, thereby to facilitate identification of data attributes. For example, the data attributes may include threshold values and/or rate of change values for one or more measured parameters. A capture trigger module 118 is configured to trigger capturing of media data in response to a set of rules that are tied to the secondary measurements. For example, media capture may be triggered in response to identification that one or a set of measured values exceed a predetermined threshold and/or rate of change. Likewise, media capture may be ended in response to identification that one or a set of measured values return below the predetermined threshold and/or rate of change (or another predetermined threshold and/or rate of change).
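A trigger rule of the kind handled by module 118 might look like the following sketch (the thresholds and the choice of heart rate are illustrative assumptions; the specification does not fix particular values or sensors):

```python
# Hypothetical capture trigger: active while a measured value exceeds a level
# threshold, or while its rate of change exceeds a rate threshold.

def capture_active(prev_bpm: float, curr_bpm: float, dt_s: float,
                   level_threshold: float = 120.0,        # bpm, assumed value
                   rate_threshold: float = 15.0) -> bool:  # bpm/s, assumed
    rate = abs(curr_bpm - prev_bpm) / dt_s
    return curr_bpm > level_threshold or rate > rate_threshold
```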

[0063] In this regard, the device of FIG. 1B is configured to link physiological and/or psychological measurements of an observed subject (or in some cases multiple observed subjects) so that the capture module is set to automatically record only when key measurements are within pre-specified ranges and/or above and/or below key specified ranges. As noted, this optionally includes capturing the moments before measurements are within range (leadup moments) using for example loop recording features. An export module 119 exports media data captured and stored based on the above automated secondary measurement controlled triggering.
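The loop recording feature referred to above can be sketched as a fixed-length ring buffer of frames, so that the lead-up moments already buffered can be persisted when a trigger fires (a sketch under assumed frame rates; the class and method names are hypothetical):

```python
from collections import deque

class LoopRecorder:
    """Keeps roughly the last `leadup_s` seconds of frames in memory."""

    def __init__(self, fps: int = 30, leadup_s: int = 10):
        self._frames = deque(maxlen=fps * leadup_s)

    def on_frame(self, frame: bytes) -> None:
        self._frames.append(frame)   # oldest frames fall off automatically

    def flush_leadup(self) -> list:
        leadup = list(self._frames)  # lead-up portion to store with the clip
        self._frames.clear()
        return leadup
```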

[0064] In the context of stored data analysis, embodiments provide technology that enables linking of physiological measurements of a content creator and/or content viewer (and/or one or more other subjects to which sensor units are configured to monitor) in real time to the footage as it is being recorded and/or replayed. This linking is performed, for example, by one or more of the following:

• Setting up one or more file systems (in a media capturing device such as device 110, and/or in one or more networked locations that receive secondary measurement data) that provide continually time stamped data matched (or able to be synchronously matched) to the timeline of captured media.

• Encoding and embedding the secondary measurement data as metadata in a multimedia container (which can be, but is not limited to mkv, mp4, m4a, m4v, mov). This data is processed in the case that a media player module is configured to recognise and support the secondary measurement data.

• Subliminal embedding of emotional data within existing audio / visual content. For example:

o Ultrasound: Information can be embedded into an existing soundtrack using ultrasound, with different ultrasound frequencies representing different physiological / emotion or response related measurements. AM (amplitude modulation) or FM (frequency modulation) can be applied to each specified frequency to represent the magnitude of the response data in either linear, logarithmic, power law or other scale.

o Colour pixels: Information could be embedded in the video stream frames as "data pixels" at any specified location - e.g. on edges, corners, inside a watermark logo, etc. Each colour pixel typically has 8-bit control per channel (currently; likely more in future), allowing for up to 256 pixel "states" that can be used to represent measurement variable type and/or magnitude. Utilising multi pixel combinations (e.g. just 3 pixels normally used for the different colours) allows for up to 16.7 million discrete values (2^24), and this is more than enough to sufficiently convey many measurement types and their magnitude values. These 3 pixels could be at any specified corner (e.g. bottom row, bottom right) catering for virtually infinite variations in video frame height and width (see the sketch after this list).

o Colour space overlay: this requires a template comparison, or a key, to extract the information. For example, the human eye is generally unable to detect the difference between RGB (255,255,254) and RGB (255,255,255). The difference in value can be used as coding to embed metadata.

o Frame skipping method: in some media encoding methods, different types of frames are used for different purposes. A frame of emotion data can be embedded as part of the sequence. Playback and editor software will simply ignore an unknown type of frame and only decode and display the rest; the emotion data frame encoding can be created to behave this way to support retrospective playback and editing.
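The colour pixel variant above might be sketched as follows (a hypothetical minimal form only: one byte of a 24-bit measurement per colour channel across three bottom-right "data pixels"; the real protocol, pixel locations and value packing are design choices this specification leaves open):

```python
import numpy as np

def embed_measurement(frame: np.ndarray, value: int) -> np.ndarray:
    """frame: H x W x 3 uint8 RGB image; value: 0 .. 2**24 - 1."""
    assert 0 <= value < 2**24
    out = frame.copy()
    for i in range(3):                    # three data pixels on the bottom row
        byte = (value >> (8 * i)) & 0xFF
        out[-1, -(i + 1), i] = byte       # one byte per pixel/channel
    return out

def read_measurement(frame: np.ndarray) -> int:
    return sum(int(frame[-1, -(i + 1), i]) << (8 * i) for i in range(3))
```

Note that a raw-pixel scheme as simple as this sketch would not by itself survive lossy re-encoding; a production protocol would need the compression resistance discussed below.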

[0065] Should there be a desire to hide this data, then alternatives such as LSB substitution may be employed (for example as described at http://www.sersc.org/journals/IJDTA/vol2_no2/2.pdf).

[0066] Alternatively, standard encryption and decryption techniques can be applied to both discrete digital values as well as more analogue types (e.g. ultrasound, although in digital format this is also digital).

[0067] An advantage of such subliminal embedding is that it survives the audio-video file being cut and/or trimmed and/or stitched, using a conventional existing multimedia editing tool. The subliminal embedding protocol should, in that regard, ideally be designed in a way that the data is resistant to lossy compression. This allows for media content generated via a device/system that supports embedding of secondary measurement data to be processed via conventional editor software, without losing the secondary measurement data. Hence, functionalities which rely on that secondary measurement data are able to be performed on resultant edited content.

[0068] A combination of both a separate matched data file as well as subliminal encoding is used in some embodiments. This creates redundancy that allows the relation of emotional and other secondary measurement data matched to the audio-video file to survive not only being cut and/or trimmed and/or stitched but also frames being cropped, and audio being overlaid/altered.

[0069] Secondary measurement data is used to establish a foundation for the identification and playback of the most impactful content, for example in the context of automated trimming, showreel creation and/or highlight-only playback.

[0070] Those skilled in the art will appreciate that there already exist a range of devices well adapted to provide secondary measurement data as considered herein. For example, the Apple Watch is able to measure some of these. There is further the opportunity to create wearables that deliver specific useful measurements where none currently exist. In addition to wearables, other person-response data may be used, for example audio level data such as individual and crowd cheer and applause levels.

[0071] Secondary measurements relating to a plurality of observable physical attributes are recorded and synchronously matched to the video footage in real time, thereby to form a time linked additional metadata set, which may be intelligently, heuristically or otherwise analysed to indicate levels of user interest, which may be further categorised as emotional states such as, for example:

• Elation / Happiness / Delight

• Anticipation

• Astonishment

• Uncertainty

• Surprise

• Sadness

• Excitement

• Fear

• Anger

• Disgust

• Shame

• Nausea

• Arousal

• Desire

• Lust, etc.

[0072] By this, a content creator and/or viewer is enabled (via a user interface, for example a user interface provided via module 103 or module 113) to selectively play and/or trim and/or censor content depending on any of the following filters:

• Interest level/s (Measurements are non-binary)

• Interest type/s

• Available view time

• Footage dates

• Footage times

• Footage locations

• Other data available as standard footage metadata

[0073] Once filters are defined, showreels may be automatically created by stitching together the filtered video segments of interest. Alternatively, specific play protocols may be developed. One example would be to set play speed as inversely proportional to the level of any filter criterion, specifying minimum levels for standard play speed.

[0074] Other minimum filter criteria levels could potentially and automatically invoke other user definable actions such as, for example, action replays and slow-motion replays. For example, a parent recording their child's team playing a soccer match might have their interest raised when the team is on track to potentially score a goal, and interest might peak when goals are actually scored. In an example case, data from wearables is matched to the recorded video file and, on replay, all the segments potentially leading up to goals play at normal speed, while the goals themselves would automatically replay and then play in slow motion. More generally, secondary measurement values in a first range are optionally configured to trigger identification of engaging footage that is designated for trimming/playback, and secondary measurement values in a second range are optionally configured to trigger identification of highly engaging footage that is designated for slow motion playback (optionally achieved by time re-mapping of a media timeline portion prior to clip export).

[0075] FIG. 4 provides a simplified illustration of how secondary measurement data is used to create various high-impact, shortened duration show reels. The interest level per the Y axis could be the overall aggregate interest level, or any interest level tied to any specified emotional state, psychological, physiological state, or any combination thereof. These emotional, psychological, physiological states are typically calculated or estimated from measurement data or even observed data.
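A play protocol combining the two-range behaviour above with the minimum-level idea of paragraph [0073] could be sketched as follows (the ranges and the normalised 0..1 interest scale are assumptions for illustration):

```python
def playback_speed(interest: float) -> float:
    """Map a normalised interest level (0..1) to a play-speed multiplier."""
    if interest >= 0.9:    # second range, e.g. a goal being scored
        return 0.5         # slow-motion replay
    if interest >= 0.6:    # first range, e.g. a promising attack
        return 1.0         # normal speed
    return 0.0             # below the minimum level: skip this segment
```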

[0076] Example applications include (but are not limited to):

• Single user / viewer applications

• Sports

• Extreme sports

• Lifestreaming

• Lifeblogging

• Wedding photography

• Photographic surveys

• Travel footage editing

• GoPro footage editing

• Aggregated viewer / user applications

• Live shows / concert recording

• Test audiences (previews)

• Pre-editing assessment

• Entertainment website optimisation

• Political broadcasting (e.g. Presidential debates)

• Youtube, Vimeo, Vevo, dailymotion, zipcast, and similar

• Adult video entertainment websites

• Telecommunication with emotion

• Adult entertainment:

• Robotic-enabled tele sexual intercourse (https://www.kiiroo.com/teledildonics/, http://www.rollingstone.com/culture/features/how-teledildonics-is-revolutionizing-sex-work-w449039)

• Emotion experiencing through neuro-linkage (e.g. A video may show an exciting scene but sends a contradicting emotion through the nerve to create the complex emotion as an art form)

• Artificial intelligent human user interaction training

• Casting / reality TV

• Politics

• Interviews

• Witness statement recording

• Student emotion analysis in study (stress, boredom, enlightenment, etc.)

• Patient healthcare

• Self monitoring of wellbeing purposes

• Pre-deployment screening for tasks that require short term focus (pilots, soldiers, surgeons, etc.)

Example Use Case: Capture Control via Secondary Measurement Data

[0077] The following is a hypothetical use case intended to provide context in relation to application of technology described herein.

[0078] John wears recording glasses on the streets of Los Angeles, and a smartwatch device, which includes heart rate, breathing rate, blood colour and skin temperature sensors, to mention but a few. The smartwatch data is processed by software executing on a paired smartphone by an algorithm that determines emotional state and levels based on the smart watch's data stream. Whenever John sees something that moves him, the smart phone causes recording by the glasses, thereby to automatically record the scene, and John has also custom set it to record the few seconds leading up to that (using a looping record feature).

[0079] At the end of the day, or week, John is easily able to recap on those things that interested him the most. It includes, for example:

• The wonderful new architecture in a nearly completed building, leaving him inspired.

• His partner, wearing a new outfit for the first time.

• A near miss when he almost stepped in front of a bus, suddenly increasing both his heart and breathing rate...

• The homeless beggar, who left him feeling sad.

[0080] With smart replay settings, John can also choose not to watch certain event types (e.g. he may choose not to watch footage that caused negative emotional responses).

[0081] The above case could similarly apply in the activation of other devices, such as for example a wearable camera (GoPro, etc.) or a mounted camera (dashcam, etc.).

[0082] If the only recording device John has is his mobile device, typically kept in a pocket, then that device is optionally configured to provide a reminder alert (sound, vibration, flashing light or any combination thereof) at the time, to prompt recording of the scene of interest.

[0083] Also, alternatively to a watch as the measurement device, the recording glasses could for example contain pupil dilation monitors to gauge interest response levels.

Example Use Case: Lifestreaming

[0084] The following is a hypothetical use case intended to provide context in relation to application of technology described herein.

[0085] Lifestreaming is the act of sharing a user's viewed content (i.e. the user's AV life experience) with others. Currently, the lifestreamer either:

A. Records everything, or

B. Has to consciously remember to selectively record scenes of interest.

[0086] Lifestreamers often use devices such as video glasses, GoPros, hemispherical and 360-degree cameras. However, smart mobile devices, including for example mobile phones and tablet computers, could also be used.

[0087] In case A, excessive content is recorded, including massive amounts of uninteresting content. This presents significant issues with finding time to review the content. Automatic showreel creation as described herein is extremely useful to trim content before sharing, although the streaming would then be non-live.

[0088] In case B, the user must actively remember to record scenes of interest, ideally including the lead up to those scenes. Technology described herein overcomes that need.

[0089] Research evidence suggests that the best content (content with the potential to go viral if shared) invokes a strong emotive reaction in the viewer - and emotional / psychological states can potentially be measured.

• https://hbr.org/video/4698519638001/why-certain-things-go-viral (see effect of psychological response)

• http://www.convinceandconvert.com/content-marketing/4-rules-for-a-video-to-go-viral/

• https://hbr.org/2013/10/research-the-emotions-that-make-marketing-campaigns-go-viral

Other Simplified Use Case Examples

[0090] Case 1 - The wedding.

[0091] The wedding videographer typically takes hours of footage and must reduce it to the meaningful content. With technology described above, the bride and groom are asked to wear wearable devices which record emotional states in real time. Data is then linked to the video data on a time-matching basis.

[0092] The videographer is now potentially able to determine the emotional and psychological response states of the bride and groom, and is better able to create the final cut of the wedding video which will be most meaningful to the couple. The videographer can also create multi-length showreels, each guaranteeing the best emotive content for any given showreel length. No other known application or product provides for this.

[0093] The above could of course also apply to live studio recording, event recording, episode footage editing, movie footage editing, and documentary footage editing. Emotional / physiological input data could be from live audiences and/or test audiences and/or actors and/or directors and/or producers and/or film crew, etc.

[0094] Case 2 - Jake, a GoPro (or similar) user

[0095] Sometime in the future, 2019 for example, Jake wears his GoPro diligently on his adventure tour through the Amazon jungle. (A GoPro is a brand of mountable action camera that can capture high-definition video through a wide-angle lens while being controlled manually or remotely, or configured to work automatically.) At the end of his two-week escapade, Jake has over 70 hours of footage; he initially planned to edit every night during the trip, but never quite found the time to do so.

[0096] Will Jake ever watch all 70 hours? Possibly, but in reality this rarely happens. Jake stores all of the footage.

[0097] Jake's friends come over two weeks later, and they chat about his trip. Jane wants to know:

• What excited Jake the most?

• What scared Jake the most?

• Did any experience make Jake feel nauseous?

• What was the most amazing thing Jake saw on the trip?

• When was Jake happiest? Etc.

[0098] Normally Jake would talk about it while bumbling through the many hours of footage trying to find scenes of relevance, but two weeks later, he can hardly remember in which order he did what.

[0099] Thankfully, before the trip Jake had bought a software product that provides automated media analysis as described above. This links to his 2019-model Apple Watch (or similar), which in 2019 (hypothetically) includes heart rate, breathing rate, blood colour and skin temperature sensors, to mention but a few. Research outcomes from various university studies have (hypothetically) been used to develop an algorithm that determines emotional states and levels based on a smart watch's data stream, and this has been time-linked to Jake's GoPro footage.

[0100] To answer the question of what excited him the most, the software processes the data to estimate and/or determine the most exciting minutes from all of the selected footage. As an example, Jake chooses a 10-minute limit for the excitement content (from over 4,200 minutes of footage), and is now able to play back only those highest-impact minutes (with some lead-up and follow-on footage in each case). Jake's friends are wowed, and Jake is happy to be reliving those excitement highlights.
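One possible way to select those minutes is a greedy pick of the highest-scoring minutes with lead-up and follow-on padding, under the chosen duration budget. The sketch below (Python) assumes a hypothetical per-minute excitement score; how that score is derived from the sensor stream is outside this sketch:

    # Minimal sketch: choose the highest-impact minutes, padded either
    # side, without exceeding a total budget. scores[m] is an assumed
    # excitement score for minute m of the footage.
    def best_minutes(scores, budget_minutes, pad=1):
        ranked = sorted(range(len(scores)),
                        key=lambda m: scores[m], reverse=True)
        chosen = set()
        for m in ranked:
            clip = set(range(max(0, m - pad),
                             min(len(scores), m + pad + 1)))
            if len(chosen | clip) > budget_minutes:
                continue  # this highlight would bust the budget; skip it
            chosen |= clip
        return sorted(chosen)

    # e.g. a 10-minute excitement reel from ~4,200 scored minutes:
    # highlight = best_minutes(excitement_scores, budget_minutes=10)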

[0101] Similarly, to answer the question of what scared him the most, or what led to any other emotional state, the technology is used to estimate and/or determine the highlights relevant to that emotional state. It is also important to note that once data is matched to the footage, future methods of analysing and interpreting that data can be used retrospectively to bring new meaning and insights to older footage.

[0102] The innovation could also foreseeably allow for Boolean-type playback / editing / selection. For example:

• Play / edit the best of what made you happy AND sad.

• Play the best of what made you happy OR sad.

• Play the best of what made you happy, BUT NOT excited.
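Such Boolean-type selection can be sketched as set algebra over emotion-tagged minute indices (a Python illustration; the tag map and labels are hypothetical):

    # Minimal sketch: hypothetical map of minute index -> emotion labels.
    minute_tags = {
        12: {"happy"},
        47: {"happy", "sad"},
        63: {"happy", "excited"},
        88: {"sad"},
    }

    def minutes_with(label):
        return {m for m, tags in minute_tags.items() if label in tags}

    happy, sad, excited = (minutes_with(l)
                           for l in ("happy", "sad", "excited"))

    happy_and_sad = happy & sad          # happy AND sad         -> {47}
    happy_or_sad = happy | sad           # happy OR sad          -> {12, 47, 63, 88}
    happy_not_excited = happy - excited  # happy BUT NOT excited -> {12, 47}

Further embodiment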

[0103] In a further embodiment, there is more than one multimedia streaming source and there are multiple receivers, for parallel editing purposes.

[0104] One example application is that a device (or devices) transmits a multimedia data stream, including metadata, from connected and/or device-based imaging and/or sound and/or other sensors.

[0105] Such multimedia and related data can be streamed to receivers (or receiving devices), and these receivers or receiving devices are able to independently, cooperatively or collaboratively operate on the media source (edit, trim, merge, bookmark, auto-trim with sensor input, etc.).

[0106] The resulting media data and the metadata can be stored or embedded independently and/or cooperatively and/or collaboratively.
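As a hedged illustration of such independent, cooperative or collaborative operation, the receivers' edits might be modelled as an append-only operation log against the shared media source (Python; the operation record and method names are assumptions only, not a prescribed data format):

    # Minimal sketch: receivers record edit operations independently;
    # the shared log can later be replayed or filtered per editor.
    from dataclasses import dataclass

    @dataclass
    class EditOp:
        editor_id: str   # which receiver issued the operation
        op: str          # e.g. "trim", "merge", "bookmark"
        start_s: float   # operation start, seconds into the stream
        end_s: float     # operation end, seconds into the stream

    class SharedEditLog:
        def __init__(self):
            self.ops = []

        def apply(self, op):
            self.ops.append(op)   # independent contributions accumulate

        def by_editor(self, editor_id):
            return [o for o in self.ops if o.editor_id == editor_id]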

[0107] An example of this in practice could be during live streaming of a sports event:

• Multiple passive editors (pure observers of the video stream) use a heart-rate tracker to automatically insert bookmarks into the video stream (see the sketch after this list).

• At the same time, multiple active editors (editors who observe and actively apply operations onto the video stream) use electronic controls (touch control, mouse, keyboard, etc.) to edit the video actively, or to edit based on the bookmarks created automatically.

• This example allows one or more people to perform live video editing passively or actively, individually, cooperatively or collaboratively.
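A minimal sketch of the passive-editor case, assuming a simple spike-over-rolling-baseline rule (the window size and threshold ratio are illustrative assumptions, not a prescribed detection method):

    # Minimal sketch: auto-insert a bookmark when heart rate spikes
    # above a rolling baseline during the live stream.
    from collections import deque

    class HeartRateBookmarker:
        def __init__(self, window=30, spike_ratio=1.25):
            self.recent = deque(maxlen=window)  # rolling baseline window
            self.spike_ratio = spike_ratio
            self.bookmarks = []                 # stream timestamps, seconds

        def on_sample(self, t, bpm):
            """Feed one (timestamp, heart-rate) sample from the tracker."""
            if self.recent:
                baseline = sum(self.recent) / len(self.recent)
                if bpm > baseline * self.spike_ratio:
                    self.bookmarks.append(t)    # auto-inserted bookmark
            self.recent.append(bpm)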

[0108] The resulting media file or files can be shared, also with different levels of access permission for the streamer, the editor and any other designated party.

Conclusions and Interpretation

[0109] It will be appreciated that the technology described above provides significant technical advancements in the context of media management, for example in the context of operating hardware devices to capture, process and/or display media content.

[0110] Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as "processing," "computing," "calculating," "determining," "analyzing" or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities into other data similarly represented as physical quantities.

[0111] In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data, e.g., from registers and/or memory to transform that electronic data into other electronic data that, e.g., may be stored in registers and/or memory. A "computer" or a "computing machine" or a "computing platform" may include one or more processors.

[0112] The methodologies described herein are, in one embodiment, performable by one or more processors that accept computer-readable (also called machine-readable) code containing a set of instructions that when executed by one or more of the processors carry out at least one of the methods described herein. Any processor capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken is included. Thus, one example is a typical processing system that includes one or more processors. Each processor may include one or more of a CPU, a graphics processing unit, and a programmable DSP unit. The processing system further may include a memory subsystem including main RAM and/or a static RAM, and/or ROM. A bus subsystem may be included for communicating between the components. The processing system further may be a distributed processing system with processors coupled by a network. If the processing system requires a display, such a display may be included, e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT) display. If manual data entry is required, the processing system also includes an input device such as one or more of an alphanumeric input unit such as a keyboard, a pointing control device such as a mouse, and so forth. The term memory unit as used herein, if clear from the context and unless explicitly stated otherwise, also encompasses a storage system such as a disk drive unit. The processing system in some configurations may include a sound output device, and a network interface device. The memory subsystem thus includes a computer-readable carrier medium that carries computer-readable code (e.g., software) including a set of instructions to cause performing, when executed by one or more processors, one or more of the methods described herein. Note that when the method includes several elements, e.g., several steps, no ordering of such elements is implied, unless specifically stated. The software may reside in the hard disk, or may also reside, completely or at least partially, within the RAM and/or within the processor during execution thereof by the computer system. Thus, the memory and the processor also constitute a computer-readable carrier medium carrying computer-readable code.

[0113] Furthermore, a computer-readable carrier medium may form, or be included in, a computer program product.

[0114] In alternative embodiments, the one or more processors operate as a standalone device or may be connected, e.g., networked, to other processor(s). In a networked deployment, the one or more processors may operate in the capacity of a server or a user machine in a server-user network environment, or as a peer machine in a peer-to-peer or distributed network environment. The one or more processors may form a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine.

[0115] Note that while diagrams only show a single processor and a single memory that carries the computer-readable code, those in the art will understand that many of the components described above are included, but not explicitly shown or described in order not to obscure the inventive aspect. For example, while only a single machine is illustrated, the term "machine" shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.

[0116] Thus, one embodiment of each of the methods described herein is in the form of a computer-readable carrier medium carrying a set of instructions, e.g., a computer program that is for execution on one or more processors, e.g., one or more processors that are part of a web server arrangement. Thus, as will be appreciated by those skilled in the art, embodiments of the present invention may be embodied as a method, an apparatus such as a special purpose apparatus, an apparatus such as a data processing system, or a computer-readable carrier medium, e.g., a computer program product. The computer-readable carrier medium carries computer-readable code including a set of instructions that when executed on one or more processors cause the processor or processors to implement a method. Accordingly, aspects of the present invention may take the form of a method, an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a carrier medium (e.g., a computer program product on a computer-readable storage medium) carrying computer-readable program code embodied in the medium.

[0117] The software may further be transmitted or received over a network via a network interface device. While the carrier medium is shown in an exemplary embodiment to be a single medium, the term "carrier medium" should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term "carrier medium" shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by one or more of the processors and that cause the one or more processors to perform any one or more of the methodologies of the present invention. A carrier medium may take many forms, including but not limited to, non-volatile media, volatile media, and transmission media. Non-volatile media includes, for example, optical disks, magnetic disks, and magneto-optical disks. Volatile media includes dynamic memory, such as main memory. Transmission media includes coaxial cables, copper wire and fiber optics, including the wires that comprise a bus subsystem. Transmission media may also take the form of acoustic or light waves, such as those generated during radio wave and infrared data communications. For example, the term "carrier medium" shall accordingly be taken to include, but not be limited to, solid-state memories, a computer product embodied in optical and magnetic media; a medium bearing a propagated signal detectable by at least one processor of one or more processors and representing a set of instructions that, when executed, implement a method; and a transmission medium in a network bearing a propagated signal detectable by at least one processor of the one or more processors and representing the set of instructions.

[0118] It will be understood that the steps of methods discussed are performed in one embodiment by an appropriate processor (or processors) of a processing (i.e., computer) system executing instructions (computer-readable code) stored in storage. It will also be understood that the invention is not limited to any particular implementation or programming technique and that the invention may be implemented using any appropriate techniques for implementing the functionality described herein. The invention is not limited to any particular programming language or operating system.

[0119] It should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, FIG., or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the Detailed Description are hereby expressly incorporated into this Detailed Description, with each claim standing on its own as a separate embodiment of this invention.

[0120] Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention, and form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments can be used in any combination.

[0121] Furthermore, some of the embodiments are described herein as a method or combination of elements of a method that can be implemented by a processor of a computer system or by other means of carrying out the function. Thus, a processor with the necessary instructions for carrying out such a method or element of a method forms a means for carrying out the method or element of a method. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.

[0122] In the description provided herein, numerous specific details are set forth. However, it is understood that embodiments of the invention may be practiced without these specific details. In other instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.

[0123] Similarly, it is to be noticed that the term coupled, when used in the claims, should not be interpreted as being limited to direct connections only. The terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. Thus, the scope of the expression a device A coupled to a device B should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means. "Coupled" may mean that two or more elements are either in direct physical or electrical contact, or that two or more elements are not in direct contact with each other but yet still co-operate or interact with each other.

[0124] Thus, while there has been described what are believed to be the preferred embodiments of the invention, those skilled in the art will recognize that other and further modifications may be made thereto without departing from the spirit of the invention, and it is intended to claim all such changes and modifications as falling within the scope of the invention. For example, any formulas given above are merely representative of procedures that may be used. Functionality may be added or deleted from the block diagrams and operations may be interchanged among functional blocks. Steps may be added or deleted to methods described within the scope of the present invention.