

Title:
METHOD AND APPARATUS FOR UPDATING STREAMED CONTENT
Document Type and Number:
WIPO Patent Application WO/2018/224726
Kind Code:
A1
Abstract:
A method including capturing, by a low latency monitoring device (432), content visualized in video rendering mode, capturing at least one parameter modified (474) in the video rendering mode, determining at least one correction update message (474) for modifying the captured content based on the at least one captured parameter modified in the video rendering mode, determining a content production stream based on the captured content, sending the content production stream to a receiver device (440), and sending the at least one correction update message to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering (474) of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

Inventors:
MATE SUJEET SHYAMSUNDAR (FI)
LEHTINIEMI ARTO (FI)
ERONEN ANTTI (FI)
LEPPÄNEN JUSSI (FI)
Application Number:
PCT/FI2018/050391
Publication Date:
December 13, 2018
Filing Date:
May 24, 2018
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
H04N21/2368; G02B27/01; G06F3/01; G06F3/16; G06K9/00; G06T19/00; G11B27/031; H04L29/06; H04N5/247; H04N13/366; H04N21/439; H04S7/00
Domestic Patent References:
WO 2016/024892 A1, 2016-02-18
WO 2016/164178 A1, 2016-10-13
Foreign References:
US 9473758 B1, 2016-10-18
US 9363569 B1, 2016-06-07
US 2012/0206452 A1, 2012-08-16
US 2016/0088280 A1, 2016-03-24
US 2015/0301592 A1, 2015-10-22
US 2013/0236040 A1, 2013-09-12
US 2015/0208156 A1, 2015-07-23
Other References:
See also references of EP 3635961A4
Attorney, Agent or Firm:
NOKIA TECHNOLOGIES OY et al. (FI)
Claims:
CLAIMS

1. A method comprising:

capturing, by a low latency monitoring device, content visualized in a video rendering mode;

capturing at least one parameter modified in the video rendering mode;

determining at least one correction update message for modifying the captured content based on the at least one captured parameter modified in the video rendering mode;

determining a content production stream based on the captured content;

sending the content production stream to a receiver device; and

sending the at least one correction update message to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

2. The method as in claim 1, wherein the video rendering mode comprises an augmented reality (AR) mode and capturing content visualized in the video rendering mode further comprises:

capturing the content visualized in the AR mode at a shorter delay than content in the content production stream, wherein the content production stream comprises a virtual reality (VR) production stream.

3. The method as in any of claim 1 or 2, wherein a monitoring function induced delay associated with the low latency monitoring device is masked from the receiver device.

4. The method as in any of claims 1 to 3, wherein a virtual reality (VR) capture workflow associated with the content production stream is parallelized with a content capture curation workflow associated with the at least one correction update message.

5. The method as in claim 4, wherein the VR capture workflow includes at least one of stitching, encoding, and packetization.

6. The method as in any of claims 1 to 5, wherein capturing the at least one parameter modified in the video rendering mode further comprises:

capturing at least one of a parameter associated with an automatic position tracking with manual changes, a parameter associated with a volume of a sound source, and a parameter associated with lighting in a region.

7. The method as in any of claims 1 to 6, wherein the at least one correction update message is included in at least one of a Session Initiation Protocol (SIP) INFO message, real time control transport protocol (RTCP) APP packet, and a separate render update metadata stream.

8. The method as in any of claims 1 to 7, wherein the at least one correction update message includes at least one of a modification parameter, a render-time effect, and a number of units.

9. The method as in any of claims 1 to 8, wherein the receiver device is further configured to continue a modification associated with a previous at least one correction update message in response to a determination that the at least one correction update message has not arrived.

10. The method as in any of claims 1 to 9, wherein sending the at least one correction update message further comprises:

sending the at least one correction update message in response to a request associated with the receiver device.

11. An apparatus comprising: at least one processor; and

at least one non-transitory memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to:

capture content visualized in a video rendering mode;

capture at least one parameter modified in the video rendering mode;

determine at least one correction update message for modifying the captured content based on the at least one captured parameter modified in the video rendering mode;

determine a content production stream based on the captured content;

send the at least one correction update message to a receiver device; and

send the content production stream to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

12. The apparatus as in claim 11, wherein the video rendering mode comprises an augmented reality (AR) mode and, when capturing content visualized in the video rendering mode, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:

capture the content visualized in the AR mode at a shorter delay than content in the content production stream, wherein the content production stream comprises a VR production stream.

13. The apparatus as in any of claim 11 or 12, wherein a monitoring function induced delay associated with the apparatus is masked from the receiver device.

14. The apparatus as in any of claims 11 to 13, wherein a virtual reality (VR) capture workflow associated with the content production stream is parallelized with a content capture curation workflow associated with the at least one correction update message.

15. The apparatus as in claim 14, wherein the VR capture workflow includes at least one of stitching, encoding, and packetization.

16. The apparatus as in any of claims 11 to 15, wherein, when capturing the at least one parameter modified in the video rendering mode, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to:

capture at least one of a parameter associated with an automatic position tracking with manual changes, a parameter associated with a volume of a sound source, and a parameter associated with lighting in a region.

17. The apparatus as in any of claims 11 to 16, wherein the at least one correction update message is included in at least one of a Session Initiation Protocol (SIP) INFO message, real time control transport protocol (RTCP) APP packet, and a separate render update metadata stream.

18. The apparatus as in any of claims 11 to 17, wherein the at least one correction update message includes at least one of a modification parameter, a render-time effect, and a number of units.

19. The apparatus as in any of claims 11 to 18, wherein, when sending the at least one correction update message, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to: send the at least one correction update message in response to a request associated with the receiver device.

20. A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising:

capturing, by a low latency monitoring device, content visualized in video rendering mode;

capturing at least one parameter modified in the video rendering mode;

determining at least one correction update message for modifying the captured content based on the at least one captured parameter modified in the video rendering mode;

determining a content production stream based on the captured content;

sending the content production stream to a receiver device; and

sending the at least one correction update message to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

Description:
Method and Apparatus for Updating Streamed Content

BACKGROUND

Technical Field

The exemplary and non-limiting embodiments relate generally to content capture, monitoring and delivery.

Brief Description of Prior Developments

VR capture systems are known to require low latency monitoring to ensure an acceptable quality of the output production stream. Augmented reality (AR) monitoring of a VR capture operation is a process which allows immediate feedback about the effects of audio mixing or camera parameter changes. Through an AR device, the audio or video engineer may monitor the capture scene (consisting of sound source information with high-accuracy-indoor-positioning (HAIP) tags, in the case of tracked sources) and adjust their properties. The VR production stream may have larger latency than AR monitoring (reaching into multiple seconds, for example). The VR production stream may continuously deliver content even as the content is being monitored and modified.

SUMMARY

The following summary is merely intended to be exemplary. The summary is not intended to limit the scope of the claims.

In accordance with one aspect, an example method comprises capturing, by an augmented reality (AR) device, content visualized in AR mode, capturing parameters modified in the AR mode, determining at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, determining a VR content production stream based on the captured content, and sending the at least one correction update message and the content stream to a receiver device, wherein the receiver device is configured to retroactively fix an audio rendering of the content based on aligning the VR content production stream and the captured parameters modified in the AR mode.

In accordance with another aspect, an example apparatus comprises at least one processor; and at least one non-transitory memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: capture, by an augmented reality (AR) device, content visualized in AR mode, capture parameters modified in the AR mode, determine at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, determine a VR content production stream based on the captured content, and send the at least one correction update message and the content stream to a receiver device, wherein the receiver device is configured to retroactively fix an audio rendering of the content based on aligning the VR content production stream and the captured parameters modified in the AR mode.

In accordance with another aspect, an example apparatus comprises a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: capturing, by an augmented reality (AR) device, content visualized in AR mode, capturing parameters modified in the AR mode, determining at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, determining a VR content production stream based on the captured content, and sending the at least one correction update message and the content stream to a receiver device, wherein the receiver device is configured to retroactively fix an audio rendering of the content based on aligning the VR content production stream and the captured parameters modified in the AR mode.

BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:

Fig. 1 is a diagram illustrating an event and recording situation;

Fig. 2 is a diagram illustrating an example of an omnidirectional content capture (OCC) device shown in Fig. 1;

Fig. 3 is a diagram illustrating directions of arrivals (DOA) for the situation shown in Fig. 1;

Fig. 4 is a diagram illustrating parallelized workflow;

Fig. 5 is a diagram illustrating some components of a streamed content updating device shown in Fig. 1 and an apparatus for mixing audio signals;

Fig. 6 is a diagram illustrating AR mode mixing and VR stream update workflow;

Fig. 7 is a diagram illustrating delay intervals between content capture and content playback;

Fig. 8 is a diagram illustrating a view of the capture scene in AR mode and the VR stream;

Fig. 9 is a diagram illustrating a metadata update implementation embodiment taking into account different playback settings and delivery options;

Fig. 10 is a diagram illustrating a system for updating streamed content; and

Fig. 11 is a diagram illustrating an example method.

DETAILED DESCRIPTION OF EMBODIMENTS

Referring to Fig. 1, there is shown a diagram illustrating an event and recording situation incorporating features of an example embodiment. Although the features will be described with reference to the example embodiments shown in the drawings, it should be understood that features can be embodied in many alternate forms of embodiments. Fig. 1 presents an event which is captured with a distributed audio capture setup. The capture setup consists of one or more close-up microphones for sound sources of interest and microphone array(s) to capture the event space.

In contrast with legacy production techniques using limited field of view (FOV) cameras, omnidirectional cameras capture the whole scene. The systems and methods described herein provide flexibility to improve a particular camera view before any changes in the selection of a current camera for transmission are applied.

The example situation of Fig. 1 comprises musical performers 10, 12, 14 located in an event space 16. Some of the performers 10, 12 comprise musical instruments 18, 20. The situation in the event space 16 is configured to be recorded by a device 22. The device 22 preferably comprises at least one microphone array and perhaps also at least one camera for video recording.

As seen in Fig. 1, the user 40 has two headsets; headphones 42 and a virtual reality (VR) or augmented reality (AR) headset 44. In alternate example embodiments, any suitable audio and video devices for the user 40 to hear audio sound and view a virtual reality (VR) or augmented reality (AR) rendering could be provided. The headphones 42 and reality headset 44 could be a single device or multiple devices for example.

The event situation in the event space 16, such as a musical concert for example, may be visually viewed by the user 40 through the visor of the AR monitor 43. Alternatively, if the user 40 is using a VR monitor, the musical concert may be visually viewed by the user 40 on the visor of the VR monitor where the displayed images are captured, such as via the cameras of the device 22 for example. The user 40 may move his/her head to change the field of view he/she is viewing, such as from the performer 10 to the performer 14 for example. Similarly, when the user moves his/her head, the audio signals played on the headphones 42 or speakers 56 may also change to adjust for a more realistic experience.

Fig. 2 shows one example of the device 22 wherein the device has multiple cameras 24 and multiple microphones 26 as a unit attached to a tripod 28. One such device is the NOKIA OZO for example. However, in alternate embodiments any suitable recording device or plural devices may be provided for device 22, and device 22 might not comprise a camera.

Fig. 3 is a diagram illustrating directions of arrivals (DOA) for the situation shown in Fig. 1.

The scene may be captured by an omnidirectional camera as omnidirectional video (for example, with a 360 video camera, such as the NOKIA OZO) and may be either monoscopic or stereoscopic. The positions of sound sources and objects of interest may be captured using a positioning technique, such as high-accuracy-indoor-positioning (HAIP). The audio signals of the sound sources may be captured with microphones external to the omnidirectional camera. The VR capture may be monitored and controlled by devices (and/or systems) associated with a production crew. Monitoring of the VR capture may be performed with an AR device to visualize the capture scene and modify the capture parameters in real-time based on, for example, instructions provided by the production crew via the AR device. Examples of capture parameters include volume, position, equalizer (EQ), etc.

Referring to Fig. 3, audio objects (sound source A, sound source B, sound source C) 62, 64, 66 are shown corresponding to the sound from the performers and instruments 10, 12 and 18, and 14 and 20, respectively, as recorded by the microphone array 68 of the device 22 at a first observation point 70. Ambient audio is also shown as D. Lines 72, 74, 76 illustrate a direction of arrival (DOA) of the sound from the audio objects (A, B, C) 62, 64, 66 to the microphone array 68 at the first observation point 70 at the event space 16. The user 40 is illustrated at location 78 corresponding to a second observation point. Thus, the direction of arrival (DOA) of the sound from the audio objects (A, B, C) 62, 64, 66 is perceived as illustrated by lines 80, 82, 84. In Fig. 5, wherein the location of the user 40 has changed to a new location 78', the direction of arrival (DOA) of the sound from the audio objects (A, B, C) 62, 64, 66 is perceived as illustrated by lines 80', 82', 84'.
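
As an illustration of how the perceived direction of arrival changes with the observation point, the following sketch computes a 2-D azimuth from source and listener coordinates. The coordinates and the simple planar geometry are assumptions for illustration only, not the positioning method (such as HAIP) used by the described system.

```python
import math

def direction_of_arrival_deg(source_xy, listener_xy):
    """Azimuth, in degrees, from a listener position to a sound source (illustrative 2-D case)."""
    dx = source_xy[0] - listener_xy[0]
    dy = source_xy[1] - listener_xy[1]
    return math.degrees(math.atan2(dy, dx))

# The same source is perceived from different directions at the two observation points.
source_a = (2.0, 5.0)                                    # hypothetical position of sound source A
print(direction_of_arrival_deg(source_a, (0.0, 0.0)))    # first observation point 70
print(direction_of_arrival_deg(source_a, (3.0, 1.0)))    # user's observation point 78
```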

Fig. 4 is a diagram illustrating a parallelized workflow 400 corresponding to VR capture workflow and a parallelized content capture curation workflow.

As shown in Fig. 4, the parallelized workflow 400 may include a field of view camera workflow 410, a VR capture workflow 450 and a parallelized content capture curation workflow 470. Parallelized workflow 400 may correspond to (or be implemented for) a live VR production scenario monitored with low-latency AR/VR monitoring/mixing and VR streaming, such as shown above in Fig. 1. The content viewed by the audio or video mixing engineer in the AR or localized VR mode may be less delayed than the content in the VR production stream. The VR production stream with the changes (enacted in the AR mode) may be audible/visible with a delay of multiple seconds.

View camera workflow 410 may include limited field of view multi-camera production techniques. At block 412, a director may receive all (or a plurality of) feeds beforehand (for example, on a monitor screen, not shown in Fig. 4). The director may preview the scene audio at block 414. Subsequently, at block 416, the director may alert each camera-operator to be ready (for example, camera 1) a predetermined time (for example, a few seconds) before switching to that particular viewpoint. The director may then switch the production view to that particular view (at block 418). The selected camera and audio feeds may be sent to a production stream for encoding and packetization at block 420. The production stream may be delivered at block 422 and end user playback may occur at block 424. The audio may be static, for example, for a single viewpoint. Consequently, once the audio has been adjusted and tested, the sound scene may be expected to "stay ok" for a subsequent interval (unless there is a significant scene change). This may be followed by periodic checks to ensure a proper audio-visual quality experience.

VR capture workflow 450 may implement VR content production. At block 452, omnidirectional cameras may record video and audio. Audio for a selection (for example, a few) of listening positions may be previewed at block 454. At block 456, a six-degrees-of-freedom (6DoF) volume may be recorded. A 6DoF audio visual scene may be stitched, encoded and packetized at block 458. The production stream may be delivered at block 460. User playback may end (for example, based on user input) at block 462. Due to the complexity of a VR capture scene with multiple dynamic sound sources, unplanned changes may occur in the capturing environment. These changes may be due to the omnidirectional nature of the content. In some instances, content producers may realize that the planned parameters do not suit a particular situation well. To ensure that the content consumed by the end user is as desired by the content creator, a monitoring function may be inserted in the VR capture workflow 450.

Parallelized content capture curation workflow 470 may include content monitoring at block 472 based on the 6DoF volume. At block 474, correction or curation may be implemented. At block 476, retroactive metadata update packets may be generated. A rendering may be modified for optimal experience at block 478 (without causing additional delays) with user playback ending at block 462.

However, the view camera workflow 410, VR capture workflow 450 and parallelized content capture curation workflow 470 may include characteristics that may result in problems and/or inefficiencies. For example, the large amount of delay in production of content may be unutilized for review. Streaming content update system 30, as shown in Fig. 5, may parallelize the content capture and curation phase in such a way that the monitoring function induced delay is masked from the end user 40.

Referring also to Fig. 5, signals from the device 22 may be transmitted to an apparatus (associated with a streaming content capture and update system) 30, such as to be recorded and updated. The signals may be audio signals or audio/visual signals. The apparatus 30 comprises a controller 32 which includes at least one processor 34 and at least one memory 36 including software or program code 38. The apparatus 30 is configured to be able to parallelize the content capture and curation phase in such a way that the monitoring function induced delay is masked from the end user 40. In this example the user 40 is shown outside the event space 16. However, the user may be located at the event space 16 and may move around the event space (and, in some embodiments, into the event space).

Apparatus 30 (or associated devices) may implement a process to incorporate low latency monitoring or capture phase changes to the content which has already been transmitted/delivered to the rendering device, but before the content is consumed/viewed/heard by the user 40. A capture phase change refers to modifications in the capture parameters while the capture is in progress. Examples of phase changes may include changes in the position of an audio source from the position recorded using a manual or automated tracking method, modifying the gain level of an audio source, etc. Low latency refers to substantially real-time viewing/hearing of VR content. This is relative to the production time for VR content. Apparatus 30 may implement a process to incorporate changes or fixes to the content in the post-production or curation phase before the final delivery or transmission of content. In other words, apparatus 30 may utilize low latency monitoring to effect a change in the rendering before consumption but after encoding. Minimal additional latency may be required to enable the monitoring function. Additional latency may have a deleterious impact on the user experience, when viewed in combination with broadcast content.

As shown in Fig. 5, headphones, such as the headphones 42 and reality headset 44 shown in Fig. 1 (not shown in Fig. 5), may be schematically illustrated as being at least part of the augmented reality (AR) monitor 43, which includes a display 46 and a controller 48. The controller 48 comprises at least one processor 50 and at least one memory 52 comprising software or program code 54. The augmented reality (AR) monitor 43 may comprise one or more speakers 56, a head tracker 58 and a user input 60. However, one or more of these could be provided separately.

Apparatus 30 may process capture parameter changes to determine the impacted audio objects or sources in the VR capture scene. Apparatus 30 may store the capture parameter change information and the corresponding timestamp(s) or temporal intervals. Apparatus 30 may encode this information as actionable metadata and packetize the information with the corresponding VR stream content index to generate a retroactive correction packet (RCP). The VR stream may be delayed compared to the capture time, due to the various delays before the content is delivered to the end user 40. The VR stream creation delay may include computational delay, buffering delay in stitching the video captured from individual camera modules in OZO-like devices, and delay in encoding the visual content for compression (for example, with a codec such as H.264 or H.265, defined by the International Telecommunications Union (ITU), etc.). Apparatus 30 may generate a delivery format (for example, a streaming format compatible with MPEG-DASH (Dynamic Adaptive Streaming over HTTP), Hypertext Transfer Protocol (HTTP) Live Streaming, Real-time Transport Protocol (RTP)/User Datagram Protocol (UDP) based streaming, or digital video broadcasting second generation terrestrial (DVB-T2)/DVB-H (Handheld)/Advanced Television Systems Committee (ATSC) broadcast). The encapsulated RCP may contain the delivery stream compliant timestamp or index information.

Apparatus 30 may derive correction update messages for modifying the capture parameters to more closely align or correspond with the content creator's requirements in order to provide low latency messages to the content consumers' playback device, which retroactively fix the audio rendering. Apparatus 30 may implement this approach, which enables leveraging the content capture, stitching and packetization latency in VR production for monitoring as well as curating the content. Apparatus 30 may thereby provide an apparatus and implement a method for masking the content correction time from the end user 40, while ensuring an optimal (from the perspective of the content creator) content consumption experience.

In live streaming, content capture operators (such as an audio engineer, lights engineer, video engineer, director, etc.) may perform corrections to the various aspects of recording. For example, the content capture operators may change the automatic position tracking with manual changes, make a certain sound source louder/quieter, increase lighting in some regions, etc. Corresponding parameters, such as a parameter associated with an automatic position tracking with manual changes, a parameter associated with a volume of a sound source, and a parameter associated with lighting in a region, etc., may be determined. In pursuance of these functions, the content capture operator may rewind back slightly, fix an error in positioning, and then communicate the fix to the error forward. The signaling may be done by delivering a correction packet at low latency to the player over a suitable transport (a Session Initiation Protocol (SIP) INFO message or a real time control transport protocol (RTCP) APP packet), or the correction may be contained in a separate render update metadata stream.

In addition to the media segment index or the rendering time of the impacted multimedia content, the signaling message may contain at least one of the following: at least one modification parameter (for example, a change in position of an audio object, relative gain, etc.) and a render-time effect (for example, spatial extent, Room Impulse Response). The signaling message may also contain a number of units (in terms of time or number of protocol data units (PDUs), depending on the delivery format). For example, in the case of RTP streaming, the signaling message may contain the RTP sequence number, or render timestamps in the case of MPEG-DASH segments.
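
A minimal sketch of how such a signaling message could be modelled follows. The class and field names are hypothetical, and the layout is not the packet format defined here; it only illustrates the information listed above (impacted segment index or render time, modification parameters, an optional render-time effect, and the number of units the update covers).

```python
from dataclasses import dataclass, field
from typing import Optional, List, Tuple

@dataclass
class ParameterModification:
    """One capture-parameter change made in the monitoring view (illustrative)."""
    object_id: str              # e.g. an audio object or HAIP tag identifier
    parameter: str              # "position", "gain", "eq", "lighting", ...
    value: Tuple[float, ...]    # new value, e.g. (x, y, z) or a relative gain in dB

@dataclass
class CorrectionUpdateMessage:
    """Sketch of a retroactive correction packet as described above."""
    media_segment_index: int                    # RTP sequence number or MPEG-DASH segment index
    render_timestamp_ms: Optional[int] = None   # rendering time of the impacted content
    modifications: List[ParameterModification] = field(default_factory=list)
    render_time_effect: Optional[str] = None    # e.g. "spatial_extent" or "room_impulse_response"
    effective_units: int = 1                    # number of units (time or PDUs) the update covers
```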

A receiver associated with the end user may align the update packet with the corresponding media rendering segment for processing. Subsequently, a player associated with the end user may interpret and effect the changes specified in the metadata correction packet. In instances in which the correction metadata does not arrive before the corresponding segment, the effect may be continued for the duration of the residual effective period. In other words, the receiver device may be configured to continue a modification associated with a previous correction update message in response to a determination that a current correction update message has not arrived.
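
The receiver-side behaviour described above can be sketched as follows: align each correction packet with the media segment it addresses and, if no new packet has arrived in time, carry the previous modification forward for its residual effective period. The class and method names, and the `renderer.apply(...)` call, are assumptions for illustration, not the player's actual interface.

```python
class RenderUpdateApplier:
    """Aligns correction update messages with media segments at the receiver (illustrative)."""

    def __init__(self):
        self.pending = {}      # media segment index -> CorrectionUpdateMessage
        self.active = None     # last applied correction, carried forward if the next one is late
        self.remaining = 0     # residual effective period, in segments

    def on_correction_packet(self, packet):
        # Store the update keyed by the media segment it modifies.
        self.pending[packet.media_segment_index] = packet

    def render_segment(self, segment_index, renderer):
        packet = self.pending.pop(segment_index, None)
        if packet is not None:
            # The matching update arrived before its segment: apply it from now on.
            self.active = packet
            self.remaining = packet.effective_units
        if self.active is None or self.remaining <= 0:
            return
        # Apply the active modification; if no new packet arrived, this continues
        # the previous one for the duration of its residual effective period.
        self.remaining -= 1
        for mod in self.active.modifications:
            renderer.apply(mod.object_id, mod.parameter, mod.value)
```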

In some example embodiments, the player may request a metadata update track over the network while consuming locally stored content.

In yet another example embodiment, the metadata update process may be applied to the broadcast content with a TV or any suitable receiver device performing the render-time correction. Thus, when a correction corresponding to the consumed content is made (for example, by an automated process or a content capture operator), the correction may be stored as special correction metadata with a timecode instead of re-encoding the stream.

Referring also to Fig. 6, there is shown a diagram illustrating an AR mode mixing and VR stream update workflow 600. As shown in Fig. 6, AR mode mixing and VR stream update workflow 600 may include a sound engineer component 604 or other control, a Modified AR capture component (MARC) 610, a VR stream modifier (VRSM) 620, a final production stream delivery component (FPSD) 630 and a receiver 640, which may correspond to user 40.

The AR mode mixing and VR stream update workflow includes a sound engineer component 604 (or other content capture operator) that may begin monitoring the content at 604. As shown in Fig. 6, workflow 600 may be implemented for performing parameter changes and subsequently delivering the reviewed changes as metadata update packets for the delivered VR stream. The content may include sound sources (by way of example, as shown in Fig. 6, a sound source A 62, such as a vocalist 10, and a sound source B 64, such as a guitar 18, selected from among sound sources including sound source D 66).

The changes performed by the sound engineer component 602 (for example, input by a device associated with a sound engineer) in AR mode may be applied retroactively using the processes for updating the stream described herein above. A temporal window of retroactive changes may depend at least on a stream creation time, transmission and playback delay. In some embodiments, the temporal window of retroactive changes may optionally also depend on the waiting period before delivery.

Modified AR capture 610 (at time T2 (615)) may implement AR mode modifications at time T2 corresponding to the interval [T2 − ΔT, T2]. AR mode changes performed by a crew member (at time T2) (for example, a sound engineer) may be sent to the VR stream modifier (or generator) 620. The delay intervals may be leveraged for performing the corrections before the stream is consumed by the end user 40.

VR stream modifier 620, at time TM 625, may perform stream modifications after time TM = T2 + σ. Final production stream delivery component 630 may perform review (implementing processes determined by apparatus 30) depending on acceptable deferred live delay at time TD 635. The receiver 640 may receive metadata 632 from final production stream delivery component 630, at time TD + RTT/2 + stream buffer time 645, that may correspond to a playback parameter update (PPU) 642, where RTT is the round trip time. Receiver 640 may perform playback (via a playback device (PD) 644) of content 634 received from final production stream delivery component 630.

Referring now to Fig. 7, a diagram 700 illustrating delay intervals between content capture and content playback is shown.

As shown in Fig. 7, different delay intervals may provide upper bounds for the updates to be created and delivered. The entry point may be Tcapture (710), a time when the content is available or seen by the monitoring user. A retroactive update for addressable content may be sent from time Tcapture until time TD.

In instances of AR monitoring, the expected delay from time Tcapture to T2 615 may be in the range of a maximum of a few hundred milliseconds. This may be followed by Tcreation 720, which may correspond to a time when a delivery-ready VR stream has been generated after performing (in some instances, all) preceding required steps (for example, stitching, encoding, packetization, etc.). The time TD 635 is the delivery timestamp when the content delivery starts. The gap between Tcreation 720 and TD 635 may be governed by application specific constraints or regulatory requirements. For example, in the United States, the federal communications commission (FCC) has a regulation of a few seconds of deferred live transmission of events. The Tplayback 730 may correspond to the playback or rendering time. The window for "rewinding" the content may be defined as the sum of the delays (ΔTC 740, ΔTW 750, ΔTT 760) from the capture time to the playback time:

ΔT = ΔTC + ΔTW + ΔTT, where

ΔTC = Tcreation − Tcapture,

ΔTW = Tdelivery − Tcreation, and

ΔTT = Tplayback − Tdelivery.

The determined parameter ΔT may be signaled to the monitoring engineer/director for updating the stream in a manner which provides a smooth, glitch-free production and delivery workflow.
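
The window computation above can be illustrated with a short worked sketch; the timestamps are arbitrary example values and the function name is hypothetical.

```python
def rewind_window_ms(t_capture, t_creation, t_delivery, t_playback):
    """Sum of the capture-to-playback delays that bounds retroactive updates (illustrative)."""
    delta_c = t_creation - t_capture     # stream creation: stitching, encoding, packetization
    delta_w = t_delivery - t_creation    # waiting period before delivery (e.g. deferred live)
    delta_t = t_playback - t_delivery    # transmission and buffering until rendering
    return delta_c + delta_w + delta_t

# Example: 4 s creation delay, 5 s deferred-live wait, 1.5 s transport and buffering.
print(rewind_window_ms(0, 4_000, 9_000, 10_500))   # -> 10500 ms available for corrections
```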

Fig. 8 is a diagram 800 illustrating a view of the capture scene in AR mode and the VR stream.

Fig. 8 illustrates AR modifier user views and VR stream impact. The changes performed in AR mode may be signaled to the playback device before the rendering time (leveraging the latency between the capture and consumption time). Render time options may include a sound source position, volume, reverb, etc. At step 1, AR mode view 810 may present a visualization of a scene including tracked objects 10, 12, 18 with tracker-ID 812, 814 or labels. At time T2, as shown in AR Mode view 820, a sound engineer may change capture parameters such as sound level (shown, by way of example, as 822 in AR Mode view 820), position, sound spatial extent, EQ, etc. The performed changes may be indicated without noticeable delay in the AR view.

At step 3, shown in AR mode view 830, the production stream may include updates (if the VR user FOV covers the content update) at time TD + TT. The changes may be delivered to a metadata update packet generator, for example associated with apparatus 30. If the VR stream watching user is facing towards the change (for example, the user's field of view in the HMD is overlapping with the change affected area 650 at time T2 + Δt), the user may experience the updated rendering information instead of the sub-optimal rendering that the user may have experienced if the metadata update packet had not arrived beforehand.

Fig. 9 illustrates a situation 900 in which the user specifies their preference to obtain an additional metadata update stream.

As shown in Fig. 9, the metadata update implementation may incorporate (for example, take into account) different playback settings and delivery options based on input to player settings, applied via a graphical user interface (GUI) such as GUI 952 of a user device (such as device 950, shown in the inset of Fig. 9), that instructs the system to use rendering updates if available. The original capture 910 of content may be processed with production stream creation (PSC) 920, which may include Modified AR capture (time T2), such as described hereinabove. Broadcast/production stream (BC/PS) delivery 930 may transmit metadata 632 and/or content 634 via a unicast or multicast transmission and broadcast 950 (for example, IP unicast + Broadcast (DVB, DAB), or IP unicast + IP Broadcast, or IP multicast + any broadcast). A receiver 640 (for example, a device such as device 950, shown in the inset of Fig. 9) may play the content with corresponding playback parameter updates.

Fig. 10 illustrates a simplified system overview 1000 for implementing updating of the stream.

The VR capture (VRC) module may receive the content from microphones and omnidirectional video cameras (1002). At step 1, AR Mode changes may be implemented by a sound engineer (1004) via an SE AR Mode Monitor (SEARMM) (1006). The AR mode monitor may perform (execute the process) with low latency to provide a real-time overview of the capture scene to the user (for example, a sound engineer). The monitoring user may perform (or input) determined (or required/necessary) changes which may be sent, for example, via a production stream modifiers signal at step 2, to the VR production/broadcast stream delivery module (VRP/BC) 1020 (which may be responsible for generating the production VR stream, for example, in conjunction with VR stream generator (VRSG) 1008). VR production/broadcast stream delivery module 1020 may also receive (for example, via a sound engineer) workflow messaging (SE WFM) 1010.

VR production/broadcast stream delivery module 1020 (for example, with VR stream generator 1008) may be correction-metadata-capable. VR stream generator 1008 may take the capture parameter change information, which has been transformed to obtain the correct adaptation corresponding to the desired change (for example, the position axis of sound objects may be aligned if it is different from the AR mode coordinate axis), and encapsulate the data 1024 into the delivery compliant format at step 3 (for example, in the case of RTP use sequence numbers, and for MPEG-DASH utilize the segment presentation timestamps). VR production/broadcast stream delivery module 1020 may output the metadata 632 and, in some instances, content 634 to receivers associated with end users 40.
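
The encapsulation step can be pictured as mapping the time of a capture-parameter change onto the delivery-compliant index it impacts, an RTP sequence number or an MPEG-DASH segment index. The clock rate, packet duration and segment duration below are assumptions chosen only to make the sketch concrete.

```python
def rtp_sequence_for_change(change_time_s, stream_start_s, first_seq,
                            packet_duration_s=0.020):
    """Map a change time to the RTP sequence number of the impacted packet (illustrative)."""
    packets = int((change_time_s - stream_start_s) / packet_duration_s)
    return (first_seq + packets) % 65_536        # RTP sequence numbers wrap at 16 bits

def dash_segment_for_change(change_time_s, stream_start_s, segment_duration_s=2.0):
    """Map the same change time to an MPEG-DASH segment index (illustrative)."""
    return int((change_time_s - stream_start_s) // segment_duration_s)

# A gain change made 12.34 s into the stream lands in these delivery indices.
print(rtp_sequence_for_change(12.34, 0.0, first_seq=1000))   # -> 1617
print(dash_segment_for_change(12.34, 0.0))                   # -> segment 6
```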

As shown with regard to VR production/broadcast stream delivery module 1020-1, VR production/broadcast stream delivery component 1020 may include a retroactive playback update component 1032. VR production/broadcast stream delivery module 1020-1 may receive the production stream modifiers signal 1012, as shown at step 2, from a sound engineer (1006) and perform modifier signals adaptation (for example, via modifier signals adaptation module (MSA) 1034), modifier signals encapsulation (for example, via modifier signals encapsulation module (MSE) 1036), and modifier signal delivery scheduling (for example, via modifier signal delivery scheduling module (MSDS) 1038). Delivery and consumption format adapted update metadata (such as metadata 632) may be output at step 3. The captured content may be processed and output (1044) via conventional broadcast streaming delivery (1040). According to an example embodiment, at step 4, VR production/broadcast stream delivery module 1020 may output the formatted metadata and content to a receiver 640, such as described hereinabove with respect to Fig. 6. Finally, the receiver 640 may schedule the metadata update after taking into account when the stream delivery will start and also an expected RTT.
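
The final scheduling consideration, sending the metadata update early enough that it reaches the receiver before the impacted content is rendered given the delivery start time, the stream buffer and an expected RTT, can be sketched as below. The one-way delay estimate of RTT/2 follows the description above, while the function and parameter names are assumptions.

```python
def latest_update_send_time(delivery_start_s, segment_offset_s, stream_buffer_s, rtt_s):
    """Latest time at which a correction update can be sent and still arrive
    before the impacted segment is rendered at the receiver (illustrative)."""
    render_time = delivery_start_s + stream_buffer_s + segment_offset_s
    return render_time - rtt_s / 2.0   # allow one way of the round-trip time for transport

# Delivery starts at t=30 s, 2 s receiver buffer, segment rendered 10 s into the stream,
# 200 ms RTT: the update must leave no later than t=41.9 s.
print(latest_update_send_time(30.0, 10.0, 2.0, 0.2))   # -> 41.9
```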

Fig. 11 is an example flow diagram illustrating a process 1100 of updating streaming content. Process 1100 may be performed by a device (or devices) associated with a sound engineer, including, for example, streaming content update system 30 and/or AR monitor 43. At block 1110, streaming content update system 30 may capture content visualized in augmented reality (AR) mode.

At block 1120, streaming content update system 30 may capture parameters modified in the AR mode. For example, the sound engineer may modify parameters in the AR mode. Streaming content update system 30 may capture the resulting modifications.

At block 1130, streaming content update system 30 may determine at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode.

At block 1140, streaming content update system 30 or, for example, an associated rendering system, may determine a content stream based on the captured content. The content stream or production stream may be a final stream ready for delivery to a device associated with the end user.

At block 1150, streaming content update system 30 may send the at least one correction update message and the content stream to a receiver device. Streaming content update system 30 may store and/or share the update to make the retroactive fix available to the end user consumption device. The receiver device may be configured to align the content stream and the captured parameters modified in the AR mode. The system may thereby modify a stream which is produced with relatively high latency compared to the monitoring latency. This modified stream may be the final stream delivered to the end user.

Features as described herein allow for utilizing a delay in VR content production, compared to limited FOV content delivery, for updating metadata for a VR rendering. The delay in VR content production may be required for live events (for example, contemporaneousness or currency of the VR content may make the live broadcast stream suitable for consumption by the end users). The currency of the VR stream with other forms of content that may be available may be a (in some instances, crucial) business driver (for example, a VR stream provided as second screen content for conventional primary broadcast content). The streaming content update system 30 may provide technical advantages and/or enhance the end-user experience. For example, the streaming content update system 30 may mask a content capture phase monitoring and fixing process by parallelizing the workflow such that VR stream delivery is not unnecessarily delayed.

Another benefit of streaming content update system 30 is to allow for modifying and fixing capture phase issues, which may be important for complex distributed audio capture scene with multiple dynamic sound sources of interest.

Features as described herein may plug in with the existing VR content delivery systems (for example, the streaming content update system 30 may be an add-on to a VR streaming content delivery system) .

Another benefit of streaming content update system 30 and the methods of updating VR streaming content described hereinabove is to provide a work-around from re-encoding by leveraging object based distributed audio capture delivery. This refers to utilizing object based capture of audio and delivering individual audio objects as different audio streams. This may make it easier to modify the individual audio source rendering properties at playback time.
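
Because each audio object is delivered as its own stream, a rendering property such as gain can be changed at playback time without re-encoding the mix. A minimal sketch under that assumption follows (per-object gains applied to PCM blocks; numpy and the block layout are illustrative choices, not part of the described system).

```python
import numpy as np

def render_object_mix(object_blocks, gains_db):
    """Mix object-based audio with per-object gains applied at playback time (illustrative).

    object_blocks: dict mapping object_id -> mono PCM block (numpy array of samples)
    gains_db:      dict mapping object_id -> gain in dB from the latest correction update
    """
    mix = None
    for object_id, block in object_blocks.items():
        gain = 10.0 ** (gains_db.get(object_id, 0.0) / 20.0)   # dB to linear amplitude
        contribution = gain * block
        mix = contribution if mix is None else mix + contribution
    return mix

# Example: the correction update lowers sound source B by 6 dB at render time.
blocks = {"source_a": np.ones(4), "source_b": np.ones(4)}
print(render_object_mix(blocks, {"source_b": -6.0}))
```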

An example method may comprise capturing, by a low latency monitoring device, content visualized in a video rendering mode, capturing at least one parameter modified in the video rendering mode, determining at least one correction update message for modifying the captured content based on the at least one captured parameter modified in the video rendering mode, determining a content production stream based on the captured content, sending the content production stream to a receiver device, and sending the at least one correction update message to the receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

The method, wherein the video rendering mode comprises an augmented reality (AR) mode and capturing content visualized in the video rendering mode further comprises capturing the content visualized in the AR mode at a shorter delay than content in the content production stream, wherein the content production stream comprises a virtual reality (VR) production stream.

The method may include wherein a monitoring function induced delay associated with the low latency monitoring device is masked from the receiver device. The method may include a VR capture workflow associated with the VR content production stream that is parallelized with a content capture curation workflow associated with the at least one correction update message. The method wherein the VR capture workflow includes at least one of stitching, encoding, and packetization.

The method may include wherein capturing the at least one parameter modified in the video rendering mode further comprises capturing at least one of a parameter associated with an automatic position tracking with manual changes, a parameter associated with a volume of a sound source, and a parameter associated with lighting in a region.

The method wherein the at least one correction update message is included in at least one of a Session Initiation Protocol (SIP) INFO message, real time control transport protocol (RTCP) APP packet, and a separate render update metadata stream.

The method wherein the at least one correction update message includes at least one of a modification parameter, a render-time effect, and a number of units.

The method wherein the receiver device is further configured to continue a modification associated with a previous at least one correction update message in response to a determination that the at least one correction update message has not arrived.

The method wherein sending the at least one correction update message further comprises sending the at least one correction update message in response to a request associated with the receiver device.

An example apparatus may comprise at least one processor; and at least one non-transitory memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to: capture, by an augmented reality (AR) device, content visualized in AR mode, capture parameters modified in the AR mode, determine at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, determine a VR content production stream based on the captured content, and send the at least one correction update message and the content stream to a receiver device, wherein the receiver device is configured to retroactively fix an audio rendering of the content based on aligning the VR content production stream and the captured parameters modified in the AR mode.

The apparatus wherein, when capturing content visualized in the AR mode, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to capture the content visualized in the AR mode at a shorter delay than content in a virtual reality (VR) production stream.

The apparatus wherein a monitoring function induced delay associated with the apparatus is masked from the receiver device.

The apparatus wherein a VR capture workflow associated with the VR content production stream is parallelized with a content capture curation workflow associated with the at least one correction update message.

The apparatus wherein the VR capture workflow includes at least one of stitching, encoding, and packetization.

The apparatus wherein, when capturing parameters modified in the AR mode, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to capture at least one of a parameter associated with an automatic position tracking with manual changes, a parameter associated with a volume of a sound source, and a parameter associated with lighting in a region.

The apparatus wherein the at least one correction update message is included in at least one of a Session Initiation Protocol (SIP) INFO message, real time control transport protocol (RTCP) APP packet, and a separate render update metadata stream.

The apparatus wherein the at least one correction update message includes at least one of a modification parameter, a render-time effect, and a number of units.

The apparatus wherein, when sending the at least one correction update message, the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus to send the at least one correction update message in response to a request associated with the receiver device.

An example apparatus may be provided in a non-transitory program storage device, such as memory 52 shown in Fig. 5 for example, readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: capturing, by an augmented reality (AR) device, content visualized in AR mode, capturing parameters modified in the AR mode, determining at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, determining a VR content production stream based on the captured content, and sending the at least one correction update message and the content stream to a receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

In accordance with another example, an example apparatus comprises: means for capturing, by an augmented reality (AR) device, content visualized in AR mode, means for capturing parameters modified in the AR mode, means for determining at least one correction update message for modifying the capture parameters based on the captured parameters modified in the AR mode, means for determining a VR content production stream based on the captured content, and means for sending the at least one correction update message and the content stream to a receiver device, wherein the at least one correction update message is configured to be used by the receiver device to retroactively fix an audio rendering of the captured content based on aligning the content production stream and the at least one captured parameter modified in the video rendering mode.

Any combination of one or more computer readable medium(s) may be utilized as the memory. The computer readable medium may be a computer readable signal medium or a non-transitory computer readable storage medium. A non-transitory computer readable storage medium does not include propagating signals and may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.

It should be understood that the foregoing description is only illustrative. Various alternatives and modifications can be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.