
Title:
OBJECT CLUSTERING FOR RENDERING OBJECT-BASED AUDIO CONTENT BASED ON PERCEPTUAL CRITERIA
Document Type and Number:
WIPO Patent Application WO/2014/099285
Kind Code:
A1
Abstract:
Embodiments are directed to a method of rendering object-based audio comprising determining an initial spatial position of objects having object audio data and associated metadata, determining a perceptual importance of the objects, and grouping the audio objects into a number of clusters based on the determined perceptual importance of the objects, such that a spatial error caused by moving an object from an initial spatial position to a second spatial position in a cluster is minimized for objects with a relatively high perceptual importance. The perceptual importance is based at least in part on a partial loudness of an object and content semantics of the object.

Inventors:
CROCKETT BRETT G (US)
SEEFELDT ALAN J (US)
TSINGOS NICOLAS R (US)
WILSON RHONDA (US)
BREEBAART DIRK JEROEN (AU)
LU LIE (CN)
CHEN LIANWU (CN)
Application Number:
PCT/US2013/071679
Publication Date:
June 26, 2014
Filing Date:
November 25, 2013
Assignee:
DOLBY LAB LICENSING CORP (US)
International Classes:
G10L19/008; G10L19/20; H04S7/00
Foreign References:
US20050114121A1, 2005-05-26
Other References:
TSINGOS N ET AL: "Perceptual audio rendering of complex virtual environments", ACM TRANSACTIONS ON GRAPHICS (TOG), ACM, US, vol. 23, no. 3, 1 August 2004 (2004-08-01), pages 249 - 258, XP002453152, ISSN: 0730-0301, DOI: 10.1145/1015706.1015710
"Dolby Atmos Next-Generation Audio for Cinema", 1 April 2012 (2012-04-01), XP055067682, Retrieved from the Internet [retrieved on 20130621]
Attorney, Agent or Firm:
DOLBY LABORATORIES, INC. et al. (100 Potrero Avenue, San Francisco, California, US)
Claims:
CLAIMS:

What is claimed is:

1. A method of compressing object-based audio data comprising:

identifying a first plurality of fixed channels and audio objects to be rendered,

each audio object comprising audio data and associated metadata;

defining an error threshold for one or more parameters encoded within the associated metadata for each audio object; and

grouping audio objects of the first plurality of audio objects into a reduced plurality of fixed channels or audio objects based on the error threshold.

2. The method of claim 1 wherein the one or more parameters comprise at least one or more of position, width, loudness, content type, and rendering mode.

3. The method of claim 2 wherein grouping audio objects comprises combining audio objects into a plurality of clusters comprising the reduced plurality of audio objects, each cluster comprising constituent audio objects that are associated with other audio objects within each cluster such that the error threshold for each parameter of the one or more parameters for each error threshold is not exceeded due to a change in the one or more parameters caused by the combining of each respective audio object into a respective cluster.

4. The method of claim 2 wherein the reduced plurality of audio objects comprises one or more clusters of constituent objects, and wherein grouping constituent objects into respective clusters comprises assigning audio objects that are spatially proximate to other objects within an error threshold associated with a position parameter of each respective audio object to be part of a same cluster.

5. The method of claim 4 wherein the grouping audio objects comprises:

combining waveforms embodying the audio data for the constituent objects within the same cluster together to form a replacement object having a combined waveform of the constituent objects; and combining the metadata for the constituent objects within the same cluster together to form a replacement set of metadata for the constituent objects.

6. The method of claim 3 wherein the act of grouping audio objects further comprises distributing at least one object into two or more clusters of the plurality of clusters.

7. The method of claim 6 wherein the metadata for the constituent objects are combined using one of a combinatorial or selection operation that depends on the type of metadata being combined.

8. The method of claim 7 wherein the replacement set of metadata for position of the replacement object is derived by taking an average of each of the positions of the constituent objects, the replacement set of metadata for width of the replacement object is derived by taking a combination of the width of each of the constituent objects, the replacement set of metadata for loudness of the replacement object is derived by taking a combination of the loudness of the constituent objects; the replacement set of metadata for content type of the replacement object is derived by selecting a content type of a dominant constituent object; and the replacement set of metadata for rendering mode of the replacement object is derived by selecting a rendering mode of the dominant constituent object.

9. The method of claim 8 wherein the dominant constituent object comprises a constituent object that has a greater specific loudness relative to other constituent objects within the same cluster, and wherein the specific loudness comprises a perceptual measure of loudness based on psychoacoustic theory.

10. The method of claim 3 wherein the error threshold comprises a maximum amount of deviation of a respective metadata value due to grouping of the audio object with one or more other objects that causes excessive distortion of the corresponding audio object when it is rendered in the playback system.

11. The method of claim 10 wherein the error threshold is represented by an error metric that is expressed as a function of a distance of at least one constituent object within a cluster to a center point of the cluster, and an importance of the constituent object relative to other constituent objects or beds in the cluster.

12. The method of claim 11 wherein the importance of the constituent object is a function of one of a relative energy or loudness of the constituent object within the cluster.

13. The method of claim 3 wherein at least one of the clusters of the plurality of clusters is a reduced bit-depth cluster that is encoded using a smaller number of bits than the lowest bit depth of the audio objects.

14. The method of claim 3 wherein the plurality of clusters is time-variant based on one of a periodic time interval and an event-driven schedule, and wherein the event-driven schedule comprises a start point and stop point of each constituent object of the cluster with respect to defined object boundaries for each constituent object.

15. The method of claim 14 wherein the start point and stop point of each constituent object is determined using one of a loudness level or an energy level of the constituent object.

16. A method of processing object-based audio comprising:

identifying a spatial location of each object of a plurality of objects at defined time intervals; and

grouping objects of the plurality of objects into one or more time-varying clusters based on a maximum distance between pairs of objects.

17. The method of claim 16 wherein the defined time intervals comprise one of: regular time periods and an event-driven schedule defined by object boundaries for each object of the plurality of objects.

18. The method of claim 17 wherein the object boundaries for each object comprise start point and stop points as determined using one of a loudness level or an energy level of the respective object.

19. The method of claim 18 wherein each object is an adaptive audio object comprising a waveform element representing audio data and a metadata element encoding one or more characteristics of the object.

20. The method of claim 19 wherein the one or more characteristics comprise at least one or more of position, width, loudness, content type, and rendering mode of the object as it is rendered in the playback system.

21. The method of claim 19 wherein grouping objects of the plurality of objects comprises:

combining the waveforms for each of the objects to form a combined waveform for a clustered object; and

combining the metadata elements for each of the objects to form a combined metadata element for the clustered object.

22. The method of claim 21 further comprising:

defining a cluster distortion error associated with each characteristic;

determining whether grouping an object with one or more other objects in a cluster causes a change in a characteristic of the object that exceeds the respective cluster distortion error; and

de-clustering the object from the one or more other objects in the cluster at a next time interval.

23. The method of claim 21 wherein the combined metadata are formed using one of a combinatorial or selection operation that depends on the characteristic of the clustered object.

24. The method of claim 23 wherein the combined metadata for position of the clustered object is derived by taking an average of each of the positions of the objects, the combined metadata for width of the clustered object is derived by taking an average of the width of each of the objects, the combined metadata for loudness of the clustered object is derived by taking an average of the loudness of the objects; the combined metadata for content type of the clustered object is derived by selecting a content type of a dominant object in the cluster; and the combined metadata for rendering mode of the clustered object is derived by selecting a rendering mode of the dominant object.

25. The method of claim 24 wherein the dominant object comprises an object that has a greater specific loudness relative to other objects within the cluster, and wherein the specific loudness comprises a perceptual measure of loudness based on psychoacoustic theory.

26. The method of claim 16 further comprising transmitting the one or more object clusters from an encoder stage to a decoder stage in a codec circuit for rendering through a playback system.

27. A system for rendering object-based audio comprising:

a first rendering component identifying a spatial location of each object of a plurality of objects at defined time intervals; and

a clustering component grouping objects of the plurality of objects into one or more time-varying clusters based on a maximum distance between pairs of objects.

28. The system of claim 27 wherein the defined time intervals comprise one of: regular time periods and an event-driven schedule defined by object boundaries for each object of the plurality of objects, wherein the object boundaries for each object comprise start point and stop points as determined using one of a loudness level or an energy level of the respective object.

29. The system of claim 28 wherein each object is an adaptive audio object comprising a waveform element representing audio data and a metadata element encoding one or more characteristics of the object, and wherein the one or more characteristics comprise at least one or more of position, width, loudness, content type, and rendering mode of the object as it is rendered in the playback system.

30. The system of claim 29 wherein grouping objects of the plurality of objects comprises: summing the waveforms for each of the objects to form a summed waveform for a clustered object; and

combining the metadata elements for each of the objects to form a combined metadata element for the clustered object.

31. The system of claim 30 wherein the combined metadata are formed using one of a combinatorial or selection operation that depends on the characteristic of the clustered object, and wherein the combined metadata for position of the clustered object is derived by taking an average of each of the positions of the objects, the combined metadata for width of the clustered object is derived by taking an average of the width of each of the objects, the combined metadata for loudness of the clustered object is derived by taking an average of the loudness of the objects; the combined metadata for content type of the clustered object is derived by selecting a content type of a dominant object in the cluster; and the combined metadata for rendering mode of the clustered object is derived by selecting a rendering mode of the dominant object.

32. The system of claim 30 wherein at least some of the objects comprise channel beds for transmitting audio content over speaker feeds to specific drivers arranged in a surround-sound configuration, and at least some other of the objects comprise object-based static or dynamic audio objects that have associated parametric source description including apparent three-dimensional source position and apparent source width.

33. The system of claim 32 further comprising a second rendering component detecting one or more playback characteristics of an end-point device of the playback system that outputs the audio content to a user.

34. The system of claim 33 wherein the playback characteristics of the end-point device are used by the clustering component to modify the grouping of objects.

35. The system of claim 34 wherein the end-point device is selected from a group consisting of: digital media disc player, home theater system, soundbar, personal music device, and cinema sound system.

36. A method of compressing object-based audio data comprising: determining a perceptual importance of objects in an audio scene, wherein the objects comprise object audio data and associated metadata; and

combining certain audio objects into clusters of audio objects based on the determined perceptual importance of the objects, wherein a number of clusters is less than an original number of objects in the audio scene.

37. The method of claim 36 wherein the perceptual importance is derived from the object audio data of the objects.

38. The method of claim 36 wherein the perceptual importance is a value derived from at least one of a loudness value and a content type of a respective object, and wherein the content type is selected from the group consisting of: dialog, music, sound effects, ambiance, and noise.

39. The method of claim 36 wherein the determined perceptual importance of the objects depends on a relative spatial location of the objects in the audio scene, and wherein the step of combining comprises:

determining a number of centroids, each centroid comprising a center of a cluster for grouping a plurality of audio objects, the centroid positions being dependent on the perceptual importance of one or more audio objects relative to other audio objects; and

grouping the objects into one or more clusters by distributing object signals across the clusters.

40. The method of claim 38 wherein cluster metadata is determined by one or more objects of a high perceptual importance.

41. The method of claim 40 wherein the content type is determined by an audio classification process, and wherein the loudness value is obtained by a perceptual model.

42. The method of claim 41 wherein the perceptual model is based on a calculation of excitation levels in critical frequency bands of the input audio signal, and wherein the method further comprises:

defining a centroid for a cluster around a first object of the audio objects; and aggregating all excitations of the audio objects.

43. The method of claim 38 wherein the loudness value is dependent at least in part on spatial proximity of a respective object to the other objects.

44. The method of claim 43 wherein the spatial proximity is defined at least in part by a position metadata value of the associated metadata for the respective object.

45. The method of claim 36 wherein the combining causes certain spatial errors associated with each clustered object, and further wherein the method further comprises clustering the objects such that a spatial error is minimized for objects of relatively high perceptual importance.

46. The method of claim 45 further comprising one of: selecting the object having the highest perceptual importance as a cluster centroid for a cluster containing the object having the highest perceptual importance, or selecting an object that has a maximum loudness as a cluster centroid for a cluster containing the object that has the maximum loudness.

47. The method of claim 46 wherein clustering further comprises one of: grouping an object with a nearest neighbor, or distributing an object over one or more clusters using a panning method.

48. The method of claim 42 further comprising smoothing the excitation levels, the loudness or properties derived thereof based on a time constant derived by a relative perceptual importance of a grouped object.

49. A method of processing object-based audio comprising:

determining a first spatial location of each object relative to the other objects of the plurality of audio objects; determining a relative importance of each audio object of the plurality of audio objects, said relative importance depending on the relative spatial locations of objects;

determining a number of centroids, each centroid comprising a center of a cluster for grouping a plurality of audio objects, the centroid positions being dependent on the relative importance of one or more audio objects; and

grouping the objects into one or more clusters by distributing object signals across the clusters.

50. The method of claim 49 further comprising determining a partial loudness of each audio object of the plurality of audio objects, wherein the partial loudness of an object is based at least in part on a masking effect of one or more other objects.

51. The method of claim 49 further comprising determining a content type and associated content type importance of each audio object of the plurality of audio objects.

52. The method of claim 49 further comprising combining the partial loudness and the content type of each audio object to determine the relative importance of a respective audio object.

53. The method of claim 52 wherein the content type is selected from the group consisting of: dialog, music, sound effects, ambiance, and noise.

54. The method of claim 50 wherein the partial loudness is obtained by a perceptual model that is based on a calculation of excitation levels in critical frequency bands of the input audio signal, and wherein the method further comprises:

defining a centroid for a cluster around a first object of the audio objects; and aggregating all excitations of the audio objects.

55. The method of claim 49 wherein grouping the objects causes certain spatial errors associated with each clustered object, and further wherein the method further comprises grouping the objects such that a spatial error is minimized for objects of relatively high perceptual importance.

56. The method of claim 55 further comprising one of: selecting the object having the highest perceptual importance as a cluster centroid for a cluster containing the object having the highest perceptual importance, or selecting an object that has a maximum loudness as a cluster centroid for a cluster containing the object that has the maximum loudness.

57. The method of claim 49 wherein grouping the audio objects comprises:

combining waveforms embodying the audio data for the constituent objects within the same cluster together to form a replacement object having a combined waveform of the constituent objects; and

combining the metadata for the constituent objects within the same cluster together to form a replacement set of metadata for the constituent objects.

58. A system for rendering object-based audio comprising:

a first rendering component determining a perceptual importance of objects in an audio scene, wherein the objects comprise object audio data and associated metadata; and

a clustering component combining certain audio objects into clusters of audio objects based on the determined perceptual importance of the objects, wherein a number of clusters is less than an original number of objects in the audio scene.

59. The system of claim 58 wherein the perceptual importance is derived from the object audio data of the objects.

60. The system of claim 58 wherein the perceptual importance is a value derived from at least one of a loudness value and a content type of the respective object, and wherein the content type is selected from the group consisting of: dialog, music, sound effects, ambiance, and noise.

61. The system of claim 60 further comprising an audio classification component determining the content type, the audio classification component comprising: an interface receiving an input audio signal for the audio objects;

a feature extraction module coupled to the interface and extracting features representing at least one of the temporal, spectral, and spatial properties of the input audio signal;

a data store coupled to the feature extraction module and storing a set of pre-trained models representing a statistical property of each target audio type corresponding to the content type group; and

a comparator coupled to the feature extraction module and comparing the extracted features of the input audio signal with the model of each target audio type to compute a confidence score for each target audio type, and estimating a best match audio type based on the confidence score for each target audio type.

62. The system of claim 60 wherein the loudness value is obtained by a perceptual model based on a calculation of excitation levels in critical frequency bands of the input audio signal, and wherein the system is configured to define a centroid for a cluster around a first object of the audio objects and aggregate all excitation of the objects.

63. The system of claim 58 further comprising a second rendering component configured to combine waveforms embodying the audio data for the constituent objects within the same cluster together to form a replacement object having a combined waveform of the constituent objects, and combine the metadata for the constituent objects within the same cluster together to form a replacement set of metadata for the constituent objects.

Description:
OBJECT CLUSTERING FOR RENDERING OBJECT-BASED AUDIO CONTENT BASED ON PERCEPTUAL CRITERIA

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the benefit of priority to United States Provisional Patent Application No. 61/745,401, filed 21 December 2012, and United States Provisional Application No. 61/865,072, filed 12 August 2013, both hereby incorporated by reference in their entirety.

TECHNICAL FIELD OF THE INVENTION

[0002] One or more embodiments relate generally to audio signal processing, and more specifically to clustering audio objects based on perceptual criteria to compress object-based audio data for efficient coding and/or rendering through various playback systems.

BACKGROUND OF THE INVENTION

[0003] The advent of object-based audio has significantly increased the amount of audio data and the complexity of rendering this data within high-end playback systems. For example, cinema sound tracks may comprise many different sound elements corresponding to images on the screen, dialog, noises, and sound effects that emanate from different places on the screen and combine with background music and ambient effects to create the overall auditory experience. Accurate playback requires that sounds be reproduced in a way that corresponds as closely as possible to what is shown on screen with respect to sound source position, intensity, movement, and depth. Object-based audio represents a significant improvement over traditional channel-based audio systems that send audio content in the form of speaker feeds to individual speakers in a listening environment, and are thus relatively limited with respect to spatial playback of specific audio objects.

[0004] The introduction of digital cinema and the development of three-dimensional ("3D") content have created new standards for sound, such as the incorporation of multiple channels of audio to allow for greater creativity for content creators, and a more enveloping and realistic auditory experience for audiences. Expanding beyond traditional speaker feeds and channel-based audio as a means for distributing spatial audio is critical, and there has been considerable interest in a model-based audio description that allows the listener to select a desired playback configuration with the audio rendered specifically for their chosen configuration. The spatial presentation of sound utilizes audio objects, which are audio signals with associated parametric source descriptions of apparent source position (e.g., 3D coordinates), apparent source width, and other parameters. As a further advancement, a next-generation spatial audio format (also referred to as "adaptive audio") has been developed that comprises a mix of audio objects and traditional channel-based speaker feeds (beds) along with positional metadata for the audio objects.

[0005] In some soundtracks, there may be several (e.g., 7, 9, or 11) bed channels containing audio. Additionally, based on the capabilities of an authoring system there may be tens or even hundreds of individual audio objects that are combined during rendering to create a spatially diverse and immersive audio experience. In some distribution and transmission systems, there may be large enough available bandwidth to transmit all audio beds and objects with little or no audio compression. In some cases, however, such as Blu-ray disc, broadcast (cable, satellite and terrestrial), mobile (3G and 4G) and over-the-top (OTT, or Internet) distribution, there may be significant limitations on the available bandwidth to digitally transmit all of the bed and object information created at the time of authoring. While audio coding methods (lossy or lossless) may be applied to the audio to reduce the required bandwidth, audio coding may not be sufficient to reduce the bandwidth required to transmit the audio, particularly over very limited networks such as mobile 3G and 4G networks.

[0006] Some prior methods have been developed to reduce the number of input objects and beds into a smaller set of output objects by means of clustering. Essentially, objects with similar spatial or rendering attributes are combined into single or fewer new, merged objects. The merging process encompasses combining the audio signals (for example by summation) and the parametric source descriptions (for example by averaging). The allocation of objects to clusters in these previous methods is based on spatial proximity. That is, objects that have similar parametric position data are combined into one cluster while ensuring a small spatial error for each object individually. This process is generally effective as long as the spatial positions of all perceptually relevant objects in the content allow for such clustering with reasonably small error. In very complex content, however, with many objects active simultaneously having a sparse spatial distribution, the number of required output clusters to accurately model such content can become significant when only moderate spatial errors are tolerated. Alternatively, if the number of output clusters is restricted, such as due to bandwidth or complexity constraints, complex content may be reproduced with a degraded spatial quality due to the constrained clustering process and the significant spatial errors. Hence in that case, the use of proximity only to define the clusters often returns suboptimal results. In this case, the importance of objects themselves, as opposed to just their spatial position, should be taken into account to optimize the perceived quality of the clustering process.

[0007] Other solutions have also been developed to improve the clustering process. One such solution is a culling process that removes objects that are perceptually irrelevant, such as due to masking or due to an object being silent. Although this process helps to improve the clustering process, it does not provide an improved clustering result if the number of perceptually relevant objects is larger than the number of available output clusters.

[0008] The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.

BRIEF SUMMARY OF EMBODIMENTS

[0009] Some embodiments are directed to compressing object-based audio data for rendering in a playback system by identifying a first number of audio objects to be rendered in a playback system, where each audio object comprises audio data and associated metadata; defining an error threshold for certain parameters encoded within the associated metadata for each audio object; and grouping audio objects of the first number of audio objects into a reduced number of audio objects based on the error threshold so that the amount of data for the audio objects transmitted through the playback system is reduced.

[0010] Some embodiments are further directed to rendering object-based audio by identifying a spatial location of each object of a number of objects at defined time intervals, and grouping at least some of the objects into one or more time-varying clusters based on a maximum distance between pairs of objects and/or distortion errors caused by the grouping on certain other characteristics associated with the objects.
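For illustration, the following sketch shows one way such distance-constrained, time-varying grouping could be realized; the helper name cluster_by_max_distance, the use of Euclidean distance on positional metadata, the max_dist value, and the example positions are assumptions rather than part of the described embodiments, and the routine would simply be re-run at each defined time interval.

```python
# Illustrative sketch: grouping objects into clusters so that no pair of
# objects within a cluster is farther apart than a maximum distance.
# Object positions, the distance metric, and max_dist are assumptions.
import math

def cluster_by_max_distance(positions, max_dist):
    """Greedy complete-linkage grouping of 3D object positions."""
    clusters = []  # each cluster is a list of object indices
    for idx, pos in enumerate(positions):
        placed = False
        for cluster in clusters:
            # An object may join a cluster only if it stays within max_dist
            # of every object already in that cluster.
            if all(math.dist(pos, positions[m]) <= max_dist for m in cluster):
                cluster.append(idx)
                placed = True
                break
        if not placed:
            clusters.append([idx])
    return clusters

# Example: re-run at each defined time interval with updated positions.
positions_t0 = [(0.1, 0.2, 0.0), (0.15, 0.25, 0.0), (0.9, 0.8, 0.5)]
print(cluster_by_max_distance(positions_t0, max_dist=0.2))  # [[0, 1], [2]]
```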

[0011] Some embodiments are directed to a method of compressing object-based audio data for rendering in a playback system by determining a perceptual importance of objects in an audio scene, wherein the objects comprise object audio data and associated metadata, and combining certain audio objects into clusters of audio objects based on the determined perceptual importance of the objects, wherein a number of clusters is less than an original number of objects in the audio scene. In this method, the perceptual importance may be a value derived from at least one of a loudness value and a content type of the respective object, and the content type is at least one of dialog, music, sound effects, ambiance, and noise.

[0012] In an embodiment of the method, the content type is determined by an audio classification process that receives an input audio signal for the audio objects and the loudness is obtained by a perceptual model based on a calculation of excitation levels in critical frequency bands of the input audio signal, with the method further comprising defining a centroid for a cluster around a first object of the audio objects and aggregating all excitation of the audio objects. The loudness value may be dependent at least in part on spatial proximity of a respective object to the other objects, and the spatial proximity may be defined at least in part by a position metadata value of the associated metadata for the respective object. The act of combining may cause certain spatial errors associated with each clustered object. In an embodiment, the method further comprises clustering the objects such that a spatial error is minimized for objects of relatively high perceptual importance. In an embodiment, the determined perceptual importance of the objects depends on a relative spatial location of the objects in the audio scene, and the step of combining further comprises determining a number of centroids, with each centroid comprising a center of a cluster for grouping a plurality of audio objects, the centroid positions being dependent on the perceptual importance of one or more audio objects relative to other audio objects, and grouping the objects into one or more clusters by distributing object signals across the clusters. The clustering may further comprise grouping an object with a nearest neighbor, or distributing an object over one or more clusters using a panning method.
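As a rough illustration of a loudness estimate driven by excitation levels in critical frequency bands, the following sketch computes per-band excitation from an FFT of one audio frame and sums a compressed version of it; the band edges, the 0.3 compressive exponent, and the omission of inter-object masking (which a true partial-loudness model would include) are simplifying assumptions.

```python
# Illustrative sketch of a simplified perceptual loudness estimate based on
# excitation levels in critical frequency bands. The band layout and the 0.3
# exponent are assumptions; excitation from other objects in the scene is
# ignored here, so this is not a full partial-loudness calculation.
import numpy as np

BAND_EDGES_HZ = [0, 100, 200, 400, 800, 1600, 3200, 6400, 12800, 24000]

def band_excitation(signal, sample_rate):
    """Return per-band excitation (mean power) for one audio frame."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    excitation = []
    for lo, hi in zip(BAND_EDGES_HZ[:-1], BAND_EDGES_HZ[1:]):
        band = spectrum[(freqs >= lo) & (freqs < hi)]
        excitation.append(band.mean() if band.size else 0.0)
    return np.asarray(excitation)

def loudness(signal, sample_rate):
    """Compressive sum over band excitations approximating specific loudness."""
    return float(np.sum(band_excitation(signal, sample_rate) ** 0.3))

# Example: compare loudness estimates for two synthetic one-second objects.
fs = 48000
t = np.arange(fs) / fs
dialog_like = 0.5 * np.sin(2 * np.pi * 1000 * t)
ambience_like = 0.05 * np.random.randn(fs)
print(loudness(dialog_like, fs), loudness(ambience_like, fs))
```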

[0013] The act of combining audio objects may involve combining waveforms embodying the audio data for the constituent objects within the same cluster together to form a replacement object having a combined waveform of the constituent objects, and combining the metadata for the constituent objects within the same cluster together to form a replacement set of metadata for the constituent objects.

[0014] Some embodiments are further directed to a method of rendering object-based audio by determining a first spatial location of each object relative to the other objects of the plurality of audio objects, determining a relative importance of each audio object of the plurality of audio objects, said relative importance depending on the relative spatial locations of objects, determining a number of centroids, with each centroid comprising a center of a cluster for grouping a plurality of audio objects and the centroid positions being dependent on the relative importance of one or more audio objects, and grouping the objects into one or more clusters by distributing object signals across the clusters. This method may further comprise determining a partial loudness of each audio object of the plurality of audio objects and a content type and associated content type importance of each audio object of the plurality of audio objects. In an embodiment, the partial loudness and the content type of each audio object are combined to determine the relative importance of a respective audio object. Objects are clustered such that a spatial error is minimized for objects of relatively high perceptual importance, where the spatial error may be caused by moving an object from a first perceived source location to a second perceived source location when clustered with other objects.

[0015] Some further embodiments are described for systems or devices and computer-readable media that implement the embodiments for the method of compressing or the method of rendering described above.

[0016] The methods and systems described herein may be implemented in an audio format and system that includes updated content creation tools, distribution methods and an enhanced user experience based on an adaptive audio system that includes new speaker and channel configurations, as well as a new spatial description format made possible by a suite of advanced content creation tools. In such a system, audio streams (generally including channels and objects) are transmitted along with metadata that describes the content creator's or sound mixer's intent, including desired position of the audio stream. The position can be expressed as a named channel (from within the predefined channel configuration) or as three-dimensional (3D) spatial position information.

INCORPORATION BY REFERENCE

[0017] Each publication, patent, and/or patent application mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual publication and/or patent application was specifically and individually indicated to be incorporated by reference.

BRIEF DESCRIPTION OF THE DRAWINGS

[0018] In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples, the one or more implementations are not limited to the examples depicted in the figures.

[0019] FIG. 1 illustrates the combination of channel and object-based data to produce an adaptive audio mix, under an embodiment.

[0020] FIG. 2A is a block diagram of a clustering process in conjunction with a codec circuit for rendering of adaptive audio content, under an embodiment.

[0021] FIG. 2B illustrates clustering objects and beds in an adaptive audio processing system, under an embodiment.

[0022] FIG. 2C illustrates clustering adaptive audio data in an overall adaptive audio rendering system, under an embodiment.

[0023] FIG. 3A illustrates the combination of audio signals and metadata for two objects to create a combined object, under an embodiment.

[0024] FIG. 3B is a table that illustrates example metadata definitions and combination methods for a clustering process, under an embodiment.

[0025] FIG. 4 is a block diagram of clustering schemes employed by a clustering process, under an embodiment.

[0026] FIGS. 5A and 5B illustrate the grouping of objects into clusters during periodic time intervals, under an embodiment.

[0027] FIGS. 6A, 6B, and 6C illustrate the grouping of objects into clusters in relation to defined object boundaries and error thresholds, under an embodiment.

[0028] FIG. 7 is a flowchart that illustrates a method of clustering objects and beds, under an embodiment.

[0029] FIG. 8 illustrates a system for clustering objects and bed channels into clusters based on perceptual importance in addition to spatial proximity, under an embodiment.

[0030] FIG. 9 illustrates components of a process flow for clustering audio objects into output clusters, under an embodiment.

[0031] FIG. 10 is a functional diagram of an audio classification component, under an embodiment.

[0032] FIG. 11 is a flowchart illustrating an overall method of processing audio objects based on the perceptual factors of content type and loudness, under an embodiment.

[0033] FIG. 12 is a flowchart that illustrates a process of calculating cluster centroids and allocating objects to selected centroids, under an embodiment.

[0034] FIGS. 13A and 13B illustrate the grouping of objects into clusters based on certain perceptual criteria, under an embodiment.

[0035] FIG. 14 is a flowchart that illustrates a method of clustering objects and beds, under an embodiment.

[0036] FIG. 15 illustrates rendering clustered object data based on end-point device capabilities, under an embodiment.

DETAILED DESCRIPTION OF THE INVENTION

[0037] Systems and methods are described for an object clustering-based compression scheme for object-based audio data. Embodiments of the clustering scheme utilize the perceptual importance of objects for allocating objects to clusters, and expand on clustering methods that are position and proximity-based. A perceptual-based clustering system augments proximity-based clustering with perceptual correlates derived from the audio signals of each object to derive an improved allocation of objects to clusters in constrained conditions, such as when the number of perceptually relevant objects is larger than the number of output clusters.

[0038] In an embodiment of an audio processing system, an object combining or clustering process is controlled in part by the spatial proximity of the objects, and also by certain perceptual criteria. In general, clustering objects results in a certain amount of error since not all input objects can maintain spatial fidelity when clustered with other objects, especially in applications where a large number of objects are sparsely distributed. Objects with relatively high perceived importance are favored by the clustering process in terms of minimizing spatial/perceptual errors. The object importance can be based on factors such as partial loudness, which is the perceived loudness of an object factoring in the masking effects of other objects in the scene, and content semantics or type (e.g., dialog, music, effects, etc.).
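A minimal sketch of how partial loudness and content type might be folded into a single importance score is shown below; the content-type weights and the simple multiplicative combination are illustrative assumptions, not values taken from the embodiments described here.

```python
# Illustrative sketch of combining partial loudness and content type into a
# single perceptual-importance score per object. The weights and the
# multiplicative combination are assumptions for illustration only.
CONTENT_WEIGHT = {
    "dialog": 1.0,
    "music": 0.8,
    "effects": 0.6,
    "ambience": 0.3,
    "noise": 0.1,
}

def perceptual_importance(partial_loudness, content_type, max_loudness):
    """Weight an object's partial loudness by the importance of its content type."""
    normalized = partial_loudness / max_loudness if max_loudness > 0 else 0.0
    return CONTENT_WEIGHT.get(content_type, 0.5) * normalized

# Example: a quieter dialog object can still outrank a louder ambience object.
objects = [("dialog", 0.4), ("ambience", 0.9), ("effects", 0.7)]
peak = max(loud for _, loud in objects)
ranked = sorted(objects, key=lambda o: perceptual_importance(o[1], o[0], peak),
                reverse=True)
print(ranked)
```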

[0039] Aspects of the one or more embodiments described herein may be implemented in an audio or audio-visual (AV) system that processes source audio information in a mixing, rendering and playback system that includes one or more computers or processing devices executing software instructions. Any of the described embodiments may be used alone or together with one another in any combination. Although various embodiments may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.

[0040] For purposes of the present description, the following terms have the associated meanings: the term "channel" or "bed" means an audio signal plus metadata in which the position is coded as a channel identifier, e.g., left-front or right-top surround; "channel-based audio" is audio formatted for playback through a pre-defined set of speaker zones with associated nominal locations, e.g., 5.1, 7.1, and so on; the term "object" or "object-based audio" means one or more audio channels with a parametric source description, such as apparent source position (e.g., 3D coordinates), apparent source width, etc.; "adaptive audio" means channel-based and/or object-based audio signals plus metadata that renders the audio signals based on the playback environment using an audio stream plus metadata in which the position is coded as a 3D position in space; and "rendering" means conversion to electrical signals used as speaker feeds.

[0041] In an embodiment, the scene simplification process using object clustering is implemented as part of an audio system that is configured to work with a sound format and processing system that may be referred to as a "spatial audio system" or "adaptive audio system." Such a system is based on an audio format and rendering technology to allow enhanced audience immersion, greater artistic control, and system flexibility and scalability. An overall adaptive audio system generally comprises an audio encoding, distribution, and decoding system configured to generate one or more bitstreams containing both conventional channel-based audio elements and audio object coding elements. Such a combined approach provides greater coding efficiency and rendering flexibility compared to either channel-based or object-based approaches taken separately. An example of an adaptive audio system that may be used in conjunction with present embodiments is described in pending International Patent Application No. PCT/US2012/044388 filed 27 June 2012, and entitled "System and Method for Adaptive Audio Signal Generation, Coding and Rendering," which is hereby incorporated by reference. An example implementation of an adaptive audio system and associated audio format is the Dolby® Atmos™ platform. Such a system incorporates a height (up/down) dimension that may be implemented as a 9.1 surround system, or similar surround sound configuration.

[0042] Audio objects can be considered individual or collections of sound elements that may be perceived to emanate from a particular physical location or locations in the listening environment. Such objects can be static (that is, stationary) or dynamic (that is, moving). Audio objects are controlled by metadata that defines the position of the sound at a given point in time, along with other functions. When objects are played back, they are rendered according to the positional metadata using the speakers that are present, rather than necessarily being output to a predefined physical channel. A track in a session can be an audio object, and standard panning data is analogous to positional metadata. In this way, content placed on the screen might pan in effectively the same way as with channel-based content, but content placed in the surrounds can be rendered to individual speakers, if desired. While the use of audio objects provides control over discrete effects, other aspects of a soundtrack may work more effectively in a channel-based environment. For example, many ambient effects or reverberation actually benefit from being fed to arrays of speakers rather than individual drivers. Although these could be treated as objects with sufficient width to fill an array, it is beneficial to retain some channel-based functionality.

[0043] The adaptive audio system is configured to support "beds" in addition to audio objects, where beds are effectively channel-based sub-mixes or stems. These can be delivered for final playback (rendering) either individually, or combined into a single bed, depending on the intent of the content creator. These beds can be created in different channel-based configurations such as 5.1, 7.1, and 9.1, and arrays that include overhead speakers. FIG. 1 illustrates the combination of channel and object-based data to produce an adaptive audio mix, under an embodiment. As shown in process 100, the channel-based data 102, which, for example, may be 5.1 or 7.1 surround sound data provided in the form of pulse-code modulated (PCM) data, is combined with audio object data 104 to produce an adaptive audio mix 108. The audio object data 104 is produced by combining the elements of the original channel-based data with associated metadata that specifies certain parameters pertaining to the location of the audio objects. As shown conceptually in FIG. 1, the authoring tools provide the ability to create audio programs that contain a combination of speaker channel groups and object channels simultaneously. For example, an audio program could contain one or more speaker channels optionally organized into groups (or tracks, e.g., a stereo or 5.1 track), descriptive metadata for one or more speaker channels, one or more object channels, and descriptive metadata for one or more object channels.

[0044] An adaptive audio system extends beyond speaker feeds as a means for distributing spatial audio and uses advanced model-based audio descriptions to tailor playback configurations that suit individual needs and system constraints so that audio can be rendered specifically for individual configurations. The spatial effects of audio signals are critical in providing an immersive experience for the listener. Sounds that are meant to emanate from a specific region of a viewing screen or room should be played through speaker(s) located at that same relative location. Thus, the primary audio metadatum of a sound event in a model-based description is position, though other parameters such as size, orientation, velocity and acoustic dispersion can also be described.

[0045] As stated above, adaptive audio content may comprise several bed channels 102 along with many individual audio objects 104 that are combined during rendering to create a spatially diverse and immersive audio experience. In a cinema environment with a great deal of processing bandwidth, virtually any number of beds and objects can be created and accurately rendered in a theater. However, as cinema or other complex audio content is produced for distribution and reproduction in home or personal listening environments, the relatively limited processing bandwidth of such devices and media prevents optimum rendering or playback of this content. For example, typical transmission media used for consumer and professional applications include Blu-ray disc, broadcast (cable, satellite and terrestrial), mobile (3G and 4G) and over the top (OTT) or Internet distribution. These media channels may pose significant limitations on the available bandwidth to digitally transmit all of the bed and object information of adaptive audio content. Embodiments are directed to mechanisms to compress complex adaptive audio content so that it may be distributed through transmission systems that may not possess large enough available bandwidth to otherwise render all of the audio bed and object data.

[0046] With current monophonic, stereo and multichannel audio content, the bandwidth constraints of the aforementioned delivery methods and networks are such that audio coding is generally required to reduce the bandwidth required to match the available bandwidth of the distribution method. Present cinema systems are capable of providing uncompressed audio data at a bandwidth on the order of 10 Mbps for a typical 7.1 cinema format. In comparison to this capacity, the available bandwidth for the various other delivery methods and playback systems is substantially less. For example, disc-based bandwidth is on the order of several hundred kbps up to tens of Mbps; broadcast bandwidth is on the order of several hundred kbps down to tens of kbps; OTT Internet bandwidth is on the order of several hundred kbps up to several Mbps; and mobile (3G/4G) is only on the order of several hundred kbps down to tens of kbps. Because adaptive audio contains additional audio essence that is part of the format, i.e., objects 104 in addition to channel beds 102, the already significant constraints on transmission bandwidth are exacerbated above and beyond normal channel-based audio formats, and additional reductions in bandwidth are required in addition to audio coding tools to facilitate accurate reproduction in reduced bandwidth transmission and playback systems.

Scene Simplification Through Object Clustering

[0047] In an embodiment, an adaptive audio system provides a component to reduce the bandwidth of object-based audio content through object clustering and perceptually transparent simplifications of the spatial scenes created by the combination of channel beds and objects. An object clustering process executed by the component uses certain information about the objects, including spatial position, content type, temporal attributes, object width, and loudness, to reduce the complexity of the spatial scene by grouping like objects into object clusters that replace the original objects.

[0048] The additional audio processing, beyond standard audio coding, needed to distribute and render a compelling user experience based on the original complex bed and audio tracks is generally referred to as scene simplification and/or object clustering. The purpose of this processing is to reduce the spatial scene through clustering or grouping techniques that reduce the number of individual audio elements (beds and objects) to be delivered to the reproduction device, but that still retain enough spatial information so that the perceived difference between the originally authored content and the rendered output is minimized.

[0049] The scene simplification process facilitates the rendering of object-plus-bed content in reduced bandwidth channels or coding systems using information about the objects including spatial position, temporal attributes, content type, width, and other appropriate characteristics to dynamically cluster objects to a reduced number. This process can reduce the number of objects by performing the following clustering operations: (1) clustering objects to objects; (2) clustering objects with beds; and (3) clustering objects and beds to objects. In addition, an object can be distributed over two or more clusters. The process further uses certain temporal and/or perceptual information about objects to control clustering and de-clustering of objects. Object clusters replace the individual waveforms and metadata elements of constituent objects with a single equivalent waveform and metadata set, so that data for N objects is replaced with data for a single object, thus essentially compressing object data from N to 1. As mentioned above, alternatively, or additionally, an object or bed channel may be distributed over more than one cluster (for example using amplitude panning techniques), compressing object data from N to M, with M < N. The clustering process utilizes an error metric based on distortion due to a change in location, loudness or other characteristic of the clustered objects to determine an optimum tradeoff between clustering compression and sound degradation of the clustered objects. The clustering process can be performed synchronously or it can be event-driven, such as by using auditory scene analysis (ASA) and event boundary detection to control object simplification through clustering. In some embodiments, the process may utilize knowledge of endpoint rendering algorithms and devices to control clustering. In this way, certain characteristics or properties of the playback device may be used to inform the clustering process. For example, different clustering schemes may be utilized for speakers versus headphones or other audio drivers, or different clustering schemes may be utilized for lossless versus lossy coding, and so on.
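The following sketch illustrates one possible form of such an importance-weighted error metric and a greedy allocation of objects to a fixed set of cluster centroids; the specific metric (importance multiplied by distance to the centroid), the function names, and the example values are assumptions for illustration rather than the exact error metric of the embodiments.

```python
# Illustrative sketch of an importance-weighted error metric used to assign
# objects to a fixed set of cluster centroids. Perceptually important objects
# incur a larger penalty for being moved away from their original position;
# the metric and the greedy assignment strategy are assumptions.
import math

def assignment_error(position, centroid, importance):
    return importance * math.dist(position, centroid)

def assign_to_clusters(objects, centroids):
    """objects: list of (position, importance); returns assignments and total error."""
    assignments, total_error = [], 0.0
    for position, importance in objects:
        errors = [assignment_error(position, c, importance) for c in centroids]
        best = min(range(len(centroids)), key=errors.__getitem__)
        assignments.append(best)
        total_error += errors[best]
    return assignments, total_error

# Example: three objects distributed over two output clusters.
objs = [((0.1, 0.1, 0.0), 1.0), ((0.9, 0.9, 0.0), 0.2), ((0.5, 0.5, 0.0), 0.7)]
cents = [(0.0, 0.0, 0.0), (1.0, 1.0, 0.0)]
print(assign_to_clusters(objs, cents))
```

The total error returned by such a routine could then serve as the quantity traded off against the degree of compression, for example by increasing the number of centroids until the error falls below a target.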

[0050] For purposes of the following description, the terms 'clustering' and 'grouping' or 'combining' are used interchangeably to describe the combination of objects and/or beds (channels) to reduce the amount of data in a unit of adaptive audio content for transmission and rendering in an adaptive audio playback system; and the terms 'compression' or 'reduction' may be used to refer to the act of performing scene simplification of adaptive audio through such clustering of objects and beds. The terms 'clustering', 'grouping' or 'combining' throughout this description are not limited to a strictly unique assignment of an object or bed channel to a single cluster only; instead, an object or bed channel may be distributed over more than one output bed or cluster using weights or gain vectors that determine the relative contribution of an object or bed signal to the output cluster or output bed signal.

[0051] FIG. 2A is a block diagram of a clustering component executing a clustering process in conjunction with a codec circuit for rendering of adaptive audio content, under an embodiment. As shown in diagram 200, circuit 200 includes encoder 204 and decoder 206 stages that process input audio signals to produce output audio signals at a reduced bandwidth. For the example shown in FIG. 2A, a portion 209 of the input signals may be processed through known compression techniques to produce a compressed audio bitstream 205 that is decoded by decoder stage 206 to produce at least a portion of output 207. Such known compression techniques involve analyzing the input audio content 209, quantizing the audio data and then performing compression techniques, such as masking, etc. on the audio data itself. The compression techniques may be lossy or lossless and are implemented in systems that may allow the user to select a compressed bandwidth, such as 192kbps, 256kbps, 512kbps, and so on.

[0052] In an adaptive audio system, at least a portion of the input audio comprises input signals 201 including objects that consist of audio and metadata. The metadata defines certain characteristics of the associated audio content, such as object spatial position, content type, loudness, and so on. Any practical number of audio objects (e.g., hundreds of objects) may be processed through the system for playback. To facilitate accurate playback of this multitude of objects in a wide variety of playback systems and transmission media, system 200 includes a clustering process or component 202 that reduces the number of objects to a smaller, more manageable number of objects by combining the original objects into a smaller number of object groups. The clustering process thus builds groups of objects to produce a smaller number of output groups 203 from an original set of individual input objects 201. The clustering process 202 essentially processes the metadata of the objects as well as the audio data itself to produce the reduced number of object groups. The metadata is analyzed to determine which objects at any point in time are most appropriately combined with other objects, and the corresponding audio waveforms for the combined objects are then summed together to produce a substitute or combined object. The combined object groups are then input to the encoder 204, which generates a bitstream 205 containing the audio and metadata for transmission to the decoder 206.
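A simplified sketch of this substitution step, in which the waveforms of a cluster's constituent objects are summed and their metadata are merged into a single replacement set, is given below; the metadata field names, the loudness-weighted position average, and the dominant-object selection rule are illustrative assumptions patterned on the combination rules described in the claims.

```python
# Illustrative sketch of replacing the constituent objects of one cluster with
# a single substitute object: waveforms are summed and metadata are merged.
# Field names and the per-object loudness values are assumptions.
import numpy as np

def merge_cluster(waveforms, metadata):
    """waveforms: list of equal-length arrays; metadata: list of dicts with
    'position' (x, y, z), 'loudness', and 'content_type' entries."""
    combined_waveform = np.sum(np.stack(waveforms), axis=0)

    loudness = np.array([m["loudness"] for m in metadata])
    weights = loudness / loudness.sum() if loudness.sum() > 0 else None
    positions = np.array([m["position"] for m in metadata])
    combined_position = np.average(positions, axis=0, weights=weights)

    # The dominant (loudest) constituent supplies the content type.
    dominant = metadata[int(np.argmax(loudness))]
    combined_metadata = {
        "position": tuple(combined_position),
        "loudness": float(loudness.sum()),
        "content_type": dominant["content_type"],
    }
    return combined_waveform, combined_metadata

# Example: two constituent objects replaced by one cluster object.
w = [np.ones(4800) * 0.2, np.ones(4800) * 0.1]
md = [{"position": (0.1, 0.2, 0.0), "loudness": 0.8, "content_type": "dialog"},
      {"position": (0.3, 0.2, 0.0), "loudness": 0.2, "content_type": "effects"}]
print(merge_cluster(w, md)[1])
```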

[0053] In general, the adaptive audio system incorporating the object clustering process 202 includes components that generate metadata from the original spatial audio format. The codec circuit 200 comprises part of an audio rendering system configured to process one or more bitstreams containing both conventional channel- based audio elements and audio object coding elements. An extension layer containing the audio object coding elements is added to either one of the channel- based audio codec bitstream or the audio object bitstream. This approach enables bitstreams 205, which include the extension layer to be processed by renderers for use with existing speaker and driver designs or next generation speakers utilizing individually addressable drivers and driver definitions. The spatial audio content from the spatial audio processor comprises audio objects, channels, and position metadata. When an object is rendered, it is assigned to one or more speakers according to the position metadata, and the location of the playback speakers.

Additional metadata may be associated with the object to alter the playback location or otherwise limit the speakers that are to be used for playback. Metadata may be generated in the audio workstation in response to the engineer's mixing inputs to provide rendering cues that control spatial parameters (e.g., position, velocity, intensity, timbre, etc.) and specify which driver(s) or speaker(s) in the listening environment play respective sounds during exhibition. The metadata is associated with the respective audio data in the workstation for packaging and transport by the spatial audio processor.

[0054] FIG. 2B illustrates clustering objects and beds in an adaptive audio processing system, under an embodiment. As shown in diagram 250, an object processing component 256 performing certain scene simplification tasks reads in an arbitrary number of input audio files and metadata. The input audio files comprise input objects 252 and associated object metadata, and beds 254 and associated bed metadata. These input files/metadata thus correspond to either "bed" or "object" tracks. The object processing component 256 combines media intelligence/content classification, spatial distortion analysis and object selection/clustering to create a smaller number of output objects and bed tracks. In particular, objects can be clustered together to create new equivalent objects or object clusters 258, with associated object/cluster metadata. The objects can also be selected for 'downmixing' into beds. This is shown as the output of downmixed objects 260 input to a renderer 266 for combination 268 with beds 262 to form output bed objects and associated metadata 270. The output bed configuration 270 (e.g., a typical 5.1 configuration for the home) does not necessarily need to match the input bed configuration, which could, for example, be 9.1 for Atmos™ cinema. New metadata is generated for the output tracks by combining metadata from the input tracks. New audio is also generated for the output tracks by combining audio from the input tracks.

[0055] The object processing component 256 utilizes certain processing configuration information 272. In an embodiment, this includes the number of output objects, the frame size and certain media intelligence settings. Media intelligence can include several parameters or characteristics associated with the objects, such as content type (i.e., dialog/music/effects/etc.), regions (segment/classification), preprocessing results, auditory scene analysis results, and other similar information.

[0056] In an alternative embodiment, audio generation could be deferred by keeping a reference to all original tracks as well as simplification metadata (e.g., which objects belong to which cluster, which objects are to be rendered to beds, etc.). This can be useful to distribute the simplification process between a studio and an encoding house, or other similar scenarios.

[0057] FIG. 2C illustrates clustering adaptive audio data in an overall adaptive audio rendering system, under an embodiment. The overall processing system 220 comprises three main stages of post-production 221, transmission (delivery/streaming) 223, and the playback system 225 (home/theater/studio). As shown in FIG. 2C, dynamic clustering processes to simplify the audio content by combining an original number of objects into a reduced number of objects or object clusters may be performed during one or any of these stages.

[0058] In the post-production stage 221, the input audio data 222, which could be cinema and/or home based adaptive audio content, is input to a metadata generation process 224. This process generates spatial metadata for the objects, including position, width, decorrelation, and rendering mode information, as well as content metadata, including content type, object boundaries and relative importance (energy/loudness). A clustering process 226 is then applied to the input data to reduce the overall number of input objects into a smaller number of objects by combining certain objects together based on their spatial proximity, temporal proximity, or other characteristics. The clustering process 226 may be a dynamic clustering process that performs clustering as a constant or periodic process as the input data is processed in the system, and it may utilize user input 228 that specifies certain constraints such as the target number of clusters, importance weighting of objects/clusters, filtering effects, and so on. The post-production stage may also include a cluster down-mixing step that provides certain processing of the clusters, such as mix, decorrelation, limiters, and so on. The post-production stage may include a render/monitor option 232 that allows the audio engineer to monitor or listen to the result of the clustering process, and modify the input data 222 or user input 228 if the results are not adequate.

[0059] The transmission stage 223 generally comprises components that perform raw data to codec interfacing 234, and packaging of the audio data into the appropriate output format 236 for delivery or streaming of the digital data using the appropriate codec (e.g., TrueHD, Dolby Digital+, etc.). In the transmission stage 223, a further dynamic clustering process 238 may also be applied to the objects that are produced during the post-production stage 221.

[0060] The playback system 225 receives the transmitted digital audio data and performs a final render step 242 for playback through the appropriate equipment (e.g., amplifiers plus speakers). During this stage an additional dynamic clustering process 240 may be applied using certain user input 244 and playback system (compute) capability 245 information to further group objects into clusters. [0061] In an embodiment, the clustering processes 240 and 238 performed in either the transmission or playback stages may be limited clustering processes in that the amount of object clustering may be limited as compared to the post-production clustering process 226 in terms of number of clusters formed and/or the amount and type of information used to perform the clustering.

[0062] FIG. 3A illustrates the combination of audio signals and metadata for two objects to create a combined object, under an embodiment. As shown in diagram 300, a first object comprises an audio signal shown as waveform 302 along with metadata 312 for each defined period of time (e.g., 20 milliseconds). Thus, for example, if waveform 302 is a 60 millisecond audio clip, there are three different metadata instances for the first object, denoted MD1, MD2, and MD3. For the same time interval, a second object comprises an audio waveform 304 and three different corresponding metadata instances denoted MDa, MDb, and MDc. The clustering process 202 combines the two objects to create a combined object that comprises waveform 306 and associated metadata 316. In an embodiment, the original first and second waveforms 302 and 304 are combined by summing the waveforms to create combined waveform 306. Alternatively, the waveforms can be combined by other waveform combination methods depending on the system implementation. The metadata at each period for the first and second objects are also combined to produce combined metadata 316, denoted MD1a, MD2b, and MD3c. The combination of metadata elements is performed according to defined algorithms or combinatorial functions, and can vary depending on system implementation. Different types of metadata can be combined in various different ways.

[0063] FIG. 3B is a table that illustrates example metadata definitions and combination methods for a clustering process, under an embodiment. As shown in column 352 of table 350, the metadata definitions include metadata types such as: object position, object width, audio content type, loudness, rendering modes, control signals, among other possible metadata types. The metadata definitions include elements that define certain values associated with each metadata type. Example metadata elements for each metadata type are listed in column 354 of table 350. When two or more objects are combined together in the clustering process 202, their respective metadata elements are combined through a defined combination scheme. Example combination schemes for each metadata type are listed in column 356 of table 350. As shown in FIG. 3B, the position and widths of two or more objects may each be combined through a weighted average to derive the position and width of the combined object. With respect to position, the geometric center (centroid) of the positions of the clustered (constituent) objects can be used to represent the position of the replacement object. The combination of metadata may employ weights to determine the (relative) contribution of the metadata of the constituent objects. Such weights may be derived from the (partial) loudness of one or more objects and/or bed channels.
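The following is a minimal sketch, assuming per-object loudness values are available as weights, of the weighted-average combination of position and width metadata just described. The function name and the simple normalization are illustrative assumptions, not the normative combination scheme of table 350.

```python
import numpy as np

def combine_position_width(positions, widths, loudness):
    """Weighted-average combination of position and width metadata,
    using per-object loudness as the weights."""
    w = np.asarray(loudness, dtype=float)
    w = w / w.sum()                                   # normalize the weights
    positions = np.asarray(positions, dtype=float)    # shape (n_objects, 3)
    widths = np.asarray(widths, dtype=float)
    cluster_pos = (w[:, None] * positions).sum(axis=0)  # weighted centroid
    cluster_width = float((w * widths).sum())
    return cluster_pos, cluster_width

pos, width = combine_position_width(
    positions=[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)],
    widths=[0.1, 0.2, 0.4],
    loudness=[1.0, 0.5, 0.5])
```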

[0064] The loudness of the combined object may be derived by averaging or summing the loudness of the constituent objects. In an embodiment, the loudness metric of a signal represents the perceptual energy of the signal, which is a measure of the energy that is weighted based on frequency. Loudness is thus a spectrally weighted energy that corresponds to a listener's perception of the sound. In an alternative embodiment, instead of, or along with loudness, the process may use the pure energy (RMS energy) of the signal, or some other measure of signal energy as a factor in determining the importance of an object. In yet another alternative embodiment, the loudness of the combined object is derived from the partial loudness data of the clustered objects, in which the partial loudness represents the (relative) loudness of an object in the context of the complete set of objects and beds according to psychoacoustic principles. Thus, as shown in table 350, the loudness metadata type may be embodied as an absolute loudness, a partial loudness or a combined loudness metadata definition. Partial loudness (or relative importance) of an object can be used for clustering as an importance metric, or as a means to selectively render objects if the rendering system does not have sufficient capabilities to render all objects individually.

[0065] Other metadata types may require other combination methods. For example, certain metadata cannot be combined through a logical or arithmetic operation, and thus a selection must be made. In the case of rendering mode, which is either one mode or another, the rendering mode of the dominant object is assigned to be the rendering mode of the combined object. Other types of metadata, such as control signals and the like, may be selected or combined depending on application and metadata characteristics.

[0066] With regard to content type, audio is generally classified into one of a number of defined content types, such as dialog, music, ambience, special effects, and so on. An object may change content type throughout its duration, but at any specific point in time it is generally only one type of content. The content type is thus expressed as a probability that the object is a particular type of content at any point in time. Thus, for example, a constant dialog object would be expressed as a one-hundred percent probability dialog object, while an object that transforms from dialog to music may be expressed as fifty percent dialog/fifty percent music. Clustering objects that have different content types could be performed by averaging their respective probabilities for each content type, selecting the content type probabilities for the most dominant object, or some other logical combination of content type measures. The content type may also be expressed as an n-dimensional vector (where n is the total number of different content types, e.g., four, in the case of dialog/music/ambience/effects). The content type of the clustered objects may then be derived by performing an appropriate vector operation. As shown in table 350, the content type metadata may be embodied as a combined content type metadata definition, where a combination of content types reflects the probability distributions that are combined (e.g., a vector of probabilities of music, speech, etc.).
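As a minimal sketch of the vector operation mentioned above, the per-object content-type probability vectors may be combined by a (possibly weighted) average. The fixed ordering of content types and the optional loudness weights are illustrative assumptions.

```python
import numpy as np

CONTENT_TYPES = ("dialog", "music", "ambience", "effects")

def combine_content_type(type_probs, weights=None):
    """Combine per-object content-type probability vectors into a single
    probability vector for the cluster, optionally weighted (e.g., by loudness)."""
    p = np.asarray(type_probs, dtype=float)       # shape (n_objects, n_types)
    if weights is None:
        weights = np.ones(p.shape[0])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    combined = (w[:, None] * p).sum(axis=0)
    return dict(zip(CONTENT_TYPES, combined))

# A pure dialog object clustered with a half-dialog/half-music object.
print(combine_content_type([[1.0, 0.0, 0.0, 0.0],
                            [0.5, 0.5, 0.0, 0.0]]))
```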

[0067] With regard to classification of audio, in an embodiment, the process operates on a per time-frame basis to analyze the signal, identify features of the signal and compare the identified features to features of known classes in order to determine how well the features of the object match the features of a particular class. Based on how well the features match a particular class, the classifier can identify a probability of an object belonging to a particular class. For example, if at time t = T the features of an object match very well with dialog features, then the object would be classified as dialog with a high probability. If, at time t = T + N, the features of an object match very well with music features, the object would be classified as music with a high probability. Finally, if at time t = T + 2N the features of an object do not match particularly well with either dialog or music, the object might be classified as 50% music and 50% dialog.

[0068] The listing of metadata definitions in FIG. 3B is intended to be illustrative of certain example metadata definitions, and many other metadata elements are also possible, such as driver definitions (number, characteristics, position, projection angle), calibration information including room and speaker information, and any other appropriate metadata.

[0069] In an embodiment and with reference to FIG. 2A, the clustering process 202 is provided in a component or circuit that is separate from the encoder 204 and decoder 206 stages of the codec. The codec 204 may be configured to process both raw audio data 209 for compression using known compression techniques as well as processing adaptive audio data 201 that contains audio plus metadata definitions. In general, the clustering process is implemented as a pre-encoder and post-decoder process that clusters objects into groups before the encoder stage 204 and renders the clustered objects after the decoder stage 206. Alternatively, the clustering process 202 may be included as part of the encoder 204 stage as an integrated component.

[0070] FIG. 4 is a block diagram of clustering schemes employed by the clustering process of FIG. 2, under an embodiment. As shown in diagram 400, a first clustering scheme 402 focuses on clustering individual objects with other objects to form one or more clusters of objects that can be transmitted with reduced information. This reduction can either be in the form of less audio or less metadata describing multiple objects. One example of clustering of objects is to group objects that are spatially related, i.e., to combine objects that are located in a similar spatial position, wherein the 'similarity' of the spatial position is defined by a maximum error threshold based on distortion due to shifting constituent objects to a position defined by the replacement cluster.

[0071] A second clustering scheme 404 determines when it is appropriate to combine audio objects that may be spatially diverse with channel beds that represent fixed spatial locations. An example of this type of clustering is when there is not enough available bandwidth to transmit an object that may be originally represented as traversing a three-dimensional space, and instead to mix the object into its projection onto the horizontal plane, which is where channel beds are typically represented. This allows one or more objects to be dynamically mixed into the static channels, thereby reducing the number of objects that need to be transmitted.

[0072] A third clustering scheme 406 uses prior knowledge of certain known system characteristics. For example, knowledge of the endpoint rendering algorithms and/or the reproduction devices in the playback system may be used to control the clustering process. For example, a typical home theater configuration relies on physical speakers located in fixed locations. These systems may also rely on speaker virtualization algorithms that compensate for the absence of some speakers in the room and use algorithms to give the listener virtual speakers that exist within the room. If information such as the spatial diversity of the speakers and the accuracy of virtualization algorithms is known, then it may be possible to send a reduced number of objects because the speaker configuration and virtualization algorithms can only provide a limited perceptual experience to a listener. In this case, sending a full bed plus object representation may be a waste of bandwidth, so some degree of clustering would be appropriate. Other types of known information could also be used in this clustering scheme, such as the content type of the object or objects to control clustering, or the width of an object or objects to control clustering. For this embodiment, the codec circuit 200 may be configured to adapt the output audio signals 207 based on the playback device. This feature allows a user or other process to define the number of grouped clusters 203, as well as the compression rate for the compressed audio 211. Since different transmission media and playback devices can have significantly different bandwidth capacity, a flexible compression scheme for both standard compression algorithms as well as object clustering can be advantageous. For example, if the input comprises a first number, e.g., 100 original objects, the clustering process may be configured to generate 20 combined groups 203 for Blu-ray systems or 10 objects for cell phone playback, and so on. The clustering process 202 may be recursively applied to generate incrementally fewer clustered groups 203 so that different sets of output signals 207 may be provided for different playback applications.

[0073] A fourth clustering scheme 408 comprises the use of temporal information to control the dynamic clustering and de-clustering of objects. In one embodiment, the clustering process is performed at regular intervals or periods (e.g., once every 10 milliseconds). Alternatively, other temporal events can be used, including techniques such as auditory scene analysis (ASA) and auditory event boundary detection to analyze and process the audio content to determine the optimum clustering configurations based on the duration of individual objects.

[0074] It should be noted that the schemes illustrated in diagram 400 can be performed by the clustering process 202 either as stand-alone acts or in combination with one or more other schemes. They may also be performed in any order relative to the other schemes, and no particular order is required for execution of the clustering process.

[0075] For the case where clustering is based on spatial position 402, the original objects are grouped into clusters for which a spatial centroid is dynamically constructed. The position of the centroid becomes the new position of the group. The audio signal for the group is a mix-down of all the original audio signals for each object belonging to the group. Each cluster can be seen as a new object that approximates its original contents but shares the same core attributes/data structures as the original input objects. As a result, each object cluster can be directly processed by the object renderer.

[0076] In an embodiment, the clustering process dynamically groups an original number of audio objects and/or bed channels into a target number of new equivalent objects and bed channels. In most practical applications, the target number is substantially lower than the original number, e.g., 100 original input tracks combined into 20 or fewer combined groups. These solutions apply to scenarios where both bed and object channels are available as an input and/or an output to the clustering process. A first solution to support both objects and bed tracks is to process input bed tracks as objects with fixed, pre-defined positions in space. This allows the system to simplify a scene comprising, for example, both objects and beds into a target number of object tracks only. However, it might also be desirable to preserve a number of output bed tracks as part of the clustering process. Less important objects can then be rendered directly to the bed tracks as a pre-process, while the most important ones can be further clustered into a smaller target number of equivalent object tracks. If some of the resulting clusters have high distortion, they can also be rendered to beds as a post-process, as this may result in a better approximation of the original content. This decision can be made on a time-varying basis, since the error/distortion is a time-varying function.

[0077] In an embodiment, the clustering process involves analyzing the audio content of every individual input track (object or bed) 201 as well as the attached metadata (e.g., the spatial position of the objects) to derive an equivalent number of output object/bed tracks that minimizes a given error metric. In a basic implementation, the error metric is based on the spatial distortion due to shifting the clustered objects and can further be weighted by a measure of the importance of each object over time. The importance of an object can encapsulate other characteristics of the object, such as loudness, content type, and other relevant factors. Alternatively, these other factors can form separate error metrics that can be combined with the spatial error metric.

Error Calculation

[0078] The clustering process essentially represents a type of lossy compression scheme that reduces the amount of data transmitted through the system, but that inherently introduces some amount of content degradation due to the combination of original objects into a fewer number of rendered objects. As stated above, the degradation due to the clustering of objects is quantified by an error metric. The greater the reduction of original objects into relatively few combined groups and/or the greater the amount of spatial collapsing of original objects into combined groups, the greater the error, in general. In an embodiment, the error metric used in the clustering process is expressed as shown in Equation 1:

E(s,c)[t] = Importance_s[t] * dist(s,c)[t]    (1)

[0079] As stated above, an object may be distributed over more than one cluster, rather than grouped into a single cluster with other objects. When an object signal x(s)[t] with index s is distributed over more than one cluster c, with representative cluster audio signals y(c)[t] and amplitude gains g(s,c)[t], the cluster signals are given by Equation 2:

y(c)[t] = sum_s g(s,c)[t] * x(s)[t]    (2)

The error metric E(s,c)[t] for each cluster c can then be a weighted combination of the terms expressed in Equation 1, with weights that are a function of the amplitude gains g(s,c)[t], as shown in Equation 3:

E(s,c)[t] = sum_s ( f(g(s,c)[t]) * Importance_s[t] * dist(s,c)[t] )    (3)

[0080] In an embodiment, the clustering process supports objects with a width or spread parameter. Width is used for objects that are not rendered as pinpoint sources but rather as sounds with an apparent spatial extent. As the width parameter increases, the rendered sound becomes more spatially diffuse and, consequently, its specific location becomes less relevant. It is thus advantageous to include width in the clustering distortion metric so that it permits more positional error as the width increases. The error expression E(s,c) can thus be modified to accommodate a width metric, as shown in Equation 4:

E(s,c)[t] = Importance_s[t] * ( α * (1 - Width_s[t]) * dist(s,c)[t] + (1 - α) * Width_s[t] )    (4)

[0081] In the equations above, Importance_s is the relative importance of object s, c is the centroid of the cluster, and dist(s,c) is the Euclidean three-dimensional distance between the object and the centroid of the cluster. All of these quantities are time-varying, as denoted by the [t] term. A weighting term α can also be introduced to control the relative weight of size versus position of an object.
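The following is a minimal sketch of the error metric of Equations 1, 3 and 4. The choice of f(g) = g² as the gain-weighting function is an illustrative assumption, as are the function names and default α.

```python
import numpy as np

def spatial_error(importance, obj_pos, centroid, width=0.0, alpha=0.5):
    """Per-object error, following Equations 1 and 4:
    E = Importance * (alpha * (1 - Width) * dist + (1 - alpha) * Width)."""
    dist = float(np.linalg.norm(np.asarray(obj_pos) - np.asarray(centroid)))
    return importance * (alpha * (1.0 - width) * dist + (1.0 - alpha) * width)

def cluster_error(importances, gains, positions, centroid, widths, alpha=0.5):
    """Gain-weighted error over all objects contributing to one cluster
    (Equation 3), using f(g) = g**2 as the assumed weighting function."""
    total = 0.0
    for imp, g, pos, w in zip(importances, gains, positions, widths):
        total += (g ** 2) * spatial_error(imp, pos, centroid, w, alpha)
    return total

e = cluster_error(importances=[1.0, 0.3],
                  gains=[1.0, 0.7],
                  positions=[(0.0, 0.0, 0.0), (0.4, 0.1, 0.0)],
                  centroid=(0.1, 0.0, 0.0),
                  widths=[0.0, 0.2])
```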

[0082] The importance function, Importance_s[t], can be a combination of signal-based metrics such as the loudness of the signal with a higher-level measure of how salient each object is relative to the rest of the mix. For example, a spectral similarity measure computed for each pair of input objects can further weight the loudness metric so that similar signals tend to be grouped together. For cinematic content as an example, it might also be desirable to give more importance to on-screen objects, in which case the importance can be further weighted by a directional dot-product term which is maximal for front-center objects and diminishes as the objects move off-screen.

[0083] When constructing the clusters, the importance function is temporally smoothed over a relatively long time window (e.g. 0.5 second) to ensure that the clustering is temporally consistent. In this context, including look-ahead or prior knowledge of object start and stop times can improve the accuracy of the clustering. In contrast, the equivalent spatial location of the cluster centroid can be adapted at a higher rate (10 to 40 milliseconds) using a higher rate estimate of the importance function. Sudden changes or increments in the importance metric (for example using a transient detector) may temporarily shorten the relatively long time window, or reset any analysis states in relation to the long time window.

[0084] As stated above, other information such as content type can be also included in the error metric as an additional importance weighting term. For instance, in a movie soundtrack dialog might be considered more important than music and sound effects. It would therefore be preferable to separate dialog in one or a few dialog-only clusters by increasing the relative importance of the corresponding objects. The relative importance of each object could also be provided or manually adjusted by a user. Similarly, only a specific subset of the original objects can be clustered or simplified if the user so desires, while the others would be preserved as individually rendered objects. The content type information could also be generated automatically using media intelligence techniques to classify audio content.

[0085] The error metric E(s,c) could be a function of several error components based on the combined metadata elements. Thus, other information besides distance could factor into the clustering error. For example, like objects may be clustered together rather than disparate objects, based on object type, such as dialog, music, effects, and so on. Combining objects of different types that are incompatible can result in distortion or degradation of the output sound. Error could also be introduced due to inappropriate or less than optimum rendering modes for one or more of the clustered objects. Likewise, certain control signals for specific objects may be disregarded or compromised for clustered objects. An overall error term may thus be defined that represents the sum of errors for each metadata element that is combined when an object is clustered. An example expression of overall error is shown in Equation 5:

E_overall[t] = sum_n E_MDn[t]    (5)

[0086] In Equation 5, MDn represents a specific metadata element of the N metadata elements that are combined for each object that is merged in a cluster, and E_MDn represents the error associated with combining that metadata value with the corresponding metadata values for other objects in the cluster. The error value may be expressed as a percentage value for metadata values that are averaged (e.g., position/loudness), or as a binary 0 percent or 100 percent value for metadata values that are selected as one value or another (e.g., rendering mode), or any other appropriate error metric. For the metadata elements illustrated in FIG. 3B, the overall error could be expressed as shown in Equation 6:

E_overall[t] = E_spatial + E_loudness + E_rendering + E_control    (6)

[0087] The different error components other than spatial error can be used as criteria for the clustering and de-clustering of objects. For example, loudness may be used to control the clustering behavior. Specific loudness is a perceptual measure of loudness based on psychoacoustic principles. By measuring the specific loudness of different objects, the perceived loudness of an object may guide whether or not it is clustered. For example, a loud object is likely to be more apparent to a listener if its spatial trajectory is modified, while the opposite is generally true for quieter objects. Therefore, specific loudness could be used as a weighting factor in addition to spatial error to control the clustering of objects. Another example is object type, wherein some types of objects may be more perceptible if their spatial organization is modified. For example, humans are very sensitive to speech signals, and these types of objects may need to be treated differently than other objects such as noise-like or ambient effects for which spatial perception is less acute. Therefore, object type (such as speech, effects, ambience, etc.) could be used as a weighting factor in addition to spatial error to control the clustering of objects.

[0088] The clustering process 202 thus combines objects into clusters based on certain characteristics of the objects and a defined amount of error that cannot be exceeded. As shown in FIG. 3A, the clustering process 202 dynamically recomputes the object groups 203 to constantly build object groups at different or periodic time intervals to optimize object grouping on a temporal basis. The substitute or combined object group comprises a new metadata set that represents a combination of the metadata of the constituent objects and an audio signal that represents a summation of the constituent object audio signals. The example shown in FIG. 3A illustrates the case where the combined object 306 is derived by combining original objects 302 and 304 for a particular point in time. At a later time, the combined object could be derived by combining one or more other or different original objects, depending upon the dynamic processing performed by the clustering process.

[0089] In one embodiment, the clustering process analyzes the objects and performs clustering at regular periodic intervals, such as once every 10 milliseconds, or any other appropriate time period. FIGS. 5A and 5B illustrate the grouping of objects into clusters during periodic time intervals, under an embodiment. Diagram 500 plots the position or location of objects at particular points in time. Various objects can exist in different locations at any one point in time, and the objects can be of different widths, as shown in FIG. 5A, where object O3 is shown to have a larger width than the other objects. The clustering process analyzes the objects to form groups of objects that are spatially close enough together relative to a defined maximum error threshold value. Objects that are separated from one another by less than the distance defined by the error threshold 502 are eligible to be clustered together; thus objects O1 to O3 can be clustered together within an object cluster A, and objects O4 and O5 can be clustered together in a different object cluster B. These clusters are formed based on the relative positions of the objects at a certain time (e.g., T = 0 milliseconds). In the next time period, the objects may have moved or changed in terms of one or more of the metadata characteristics, in which case the object clusters may be re-defined. Each object cluster replaces the constituent objects with a different waveform and metadata set. Thus, object cluster A comprises a waveform and metadata set that is rendered in place of the individual waveforms and metadata for each of objects O1 to O3.
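The following is a minimal sketch of the proximity-based grouping illustrated in FIG. 5A, assuming a simple greedy assignment and a single distance threshold; the actual embodiments may use a different grouping strategy and error definition.

```python
import numpy as np

def group_by_proximity(positions, max_dist):
    """Greedy grouping of objects whose mutual distances stay within an error
    threshold; returns a list of clusters as lists of object indices."""
    positions = [np.asarray(p, dtype=float) for p in positions]
    unassigned = list(range(len(positions)))
    clusters = []
    while unassigned:
        seed = unassigned.pop(0)
        cluster = [seed]
        for idx in list(unassigned):
            # An object joins the cluster only if it is within max_dist of
            # every object already in the cluster.
            if all(np.linalg.norm(positions[idx] - positions[m]) <= max_dist
                   for m in cluster):
                cluster.append(idx)
                unassigned.remove(idx)
        clusters.append(cluster)
    return clusters

# Objects O1..O5; the threshold is chosen so O1-O3 and O4-O5 form two clusters.
print(group_by_proximity([(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.0, 0.1, 0.0),
                          (1.0, 1.0, 0.0), (1.1, 1.0, 0.0)], max_dist=0.3))
```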

[0090] FIG. 5B illustrates a different clustering of the objects at a next time period (e.g., Time = 10 milliseconds). In the example of diagram 550, object O5 has moved away from object O4 and into close proximity with another object, object O6. In this case, object cluster B now comprises objects O5 and O6, and object O4 becomes de-clustered and is rendered as a standalone object. Other factors may also cause objects to be de-clustered or to change clusters. For example, the width or loudness (or other parameter) of an object may become large or different enough from its neighbors so that it should no longer be clustered with them. Thus, as shown in FIG. 5B, object O3 may become wide enough so that it is de-clustered from object cluster A and also rendered alone. It should be noted that the horizontal axis in FIGS. 5A-5B does not represent time, but instead is used as a dimension with which to spatially distribute multiple objects for visual organization and sake of discussion. The entire top of the diagram(s) represents a moment or snapshot at time t of all of the objects and how they are clustered.

[0091] Instead of performing clustering every time period as shown in FIGS. 5A and 5B, the clustering process may cluster objects based on a trigger condition or event associated with the objects. One such trigger condition is the start and stop times for each object. FIGS. 6A to 6C illustrate the grouping of objects into clusters in relation to defined object boundaries and error thresholds, under an embodiment. As a threshold step, each object must be defined within a specific time period. Various different methods may be used to define objects in time. In one embodiment, object start/stop temporal information can be used to define objects for the clustering process. This method utilizes explicit time-based boundary information that defines the start point and stop point of an audio object. Alternatively, an auditory scene analysis technique can be used to identify the event boundaries that define an object in time. Such a technique is described in U.S. Patent 7,711,123, which is hereby incorporated by reference, and which is attached hereto as Exhibit B. The detected auditory scene event boundaries are perceptually relevant moments in time where there is a perceptual change in the audio that can be used to provide "perceptual masking" within the audio where changes can be made to the audio that are not heard by a listener.

[0092] FIGS. 6A to 6C illustrate the use of auditory scene analysis and audio event detection, or other similar methods, to control the clustering of audio objects using a clustering process, under an embodiment. The examples of these figures outline the use of detected auditory events to define clusters and remove an audio object from an object cluster based on a defined error threshold. FIG. 6A is a diagram 600 that shows the creation of object clusters in a plot of spatial error at a particular time (t). Two audio object clusters are shown, denoted cluster A and cluster B, such that object cluster A is comprised of four audio objects O1 through O4 and object cluster B is comprised of three audio objects O5 through O7. The vertical dimension of diagram 600 indicates the spatial error, which is a measure of how dissimilar a spatial object is from the rest of the clustered objects and can be used to remove the object from the cluster. Also shown in diagram 600 are detected auditory event boundaries 604 for the various individual objects O1 through O7. As each object represents an audio waveform, it is possible at any given moment in time for an object to have a detected auditory event boundary 604. As shown in the diagram 600, at time = t, objects O1 and O6 have detected auditory event boundaries in each of their audio signals. It should be noted that the horizontal axis in FIGS. 6A-6C does not represent time, but instead is used as a dimension with which to spatially distribute multiple objects for visual organization and sake of discussion. The entire top of the diagram represents a moment or snapshot at time t of all of the objects and how they are clustered.

[0093] FIG. 6A also shows a spatial error threshold value 602. This value represents the amount of error that must be exceeded to remove an object from a cluster. That is, if an object is separated from other objects in a potential cluster by an amount that exceeds this error threshold 602, that object is not included in the cluster. Thus, for the example of FIG. 6A, none of the individual objects has a spatial error that exceeds the spatial error threshold indicated by threshold value 602, and therefore no de-clustering should take place.

[0094] FIG. 6B illustrates the clustering example of FIG. 6A at a time = t + N, which is some finite amount of time after t where the spatial error of each of the objects O1 through O3 and O5 through O7 has changed slightly. In this example, object O4 has a spatial error that exceeds the predefined spatial error threshold 622. It should be noted that at time = t + N auditory event boundaries have been detected for objects O2 and O4, which indicates that at time = t + N the perceptual masking created by the event boundary in the waveform for O4 allows the object to be removed from the cluster. Note that object O4 may have exceeded the spatial error threshold between time t and t + N, but because an auditory event was not detected the object remained in object cluster A. In this case, the clustering process will cause object O4 to be removed (de-clustered) from cluster A. As shown in FIG. 6C, the removal of object O4 from object cluster A results in the new object clustering organization at time = t + N + 1. At this time object O4 may reside as a single object that is rendered, or it may be integrated into another object cluster if a suitable cluster is available.
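A minimal sketch of the gating logic described above follows: an object is de-clustered only when its spatial error exceeds the threshold and an auditory event boundary has been detected, so that the change is perceptually masked. The data structures (dicts keyed by object identifier) are illustrative assumptions.

```python
def declustering_decisions(spatial_errors, event_boundaries, error_threshold):
    """Return the objects that may be removed from their cluster at the current
    analysis time: the spatial error must exceed the threshold AND an auditory
    event boundary must have been detected for the object's waveform."""
    to_decluster = []
    for obj_id, err in spatial_errors.items():
        exceeds = err > error_threshold
        masked = event_boundaries.get(obj_id, False)
        if exceeds and masked:
            to_decluster.append(obj_id)
    return to_decluster

# At time t + N, O4 exceeds the threshold and has an event boundary, so it is removed.
print(declustering_decisions(
    spatial_errors={"O2": 0.2, "O4": 0.9},
    event_boundaries={"O2": True, "O4": True},
    error_threshold=0.5))
```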

[0095] In an adaptive audio system, certain objects may be defined as fixed objects, such as channel beds that are associated with specific speaker feeds. In an embodiment, the clustering process accounts for bed plus dynamic object interaction, such that when an object creates too much error when being grouped with a clustered object (e.g., it is an outlying object), it is instead mixed to a bed. FIG. 7 is a flowchart that illustrates a method of clustering objects and beds, under an embodiment. In the method 700 shown in FIG. 7, it is assumed that beds are defined as fixed position objects. Outlying objects are then clustered (mixed) with one or more appropriate beds if the object is above an error threshold for clustering with other objects, act 702. The bed channel(s) are then labeled with the object information after clustering, act 704. The process then renders the audio to more channels and clusters additional channels as objects, act 706, and performs dynamic range management on the downmix or smart downmix to avoid artifacts/decorrelation, phase distortion, and the like, act 708. In act 710 the process performs a two-pass culling/clustering process. In an embodiment, this involves keeping the N most salient objects separate, and clustering the remaining objects. Thus, in act 712, the process clusters only less salient objects to groups or fixed beds. Fixed beds could be added to a moving object or clustered object, which may be more suitable for particular endpoint devices, such as headphone virtualization. The object width may be used as a characteristic of how many and which objects are clustered together and where they will be spatially rendered following clustering.

[0096] In an embodiment, object signal-based saliency is the difference between the average spectrum of the mix and the spectrum of each object, and saliency metadata elements may be added to objects/clusters. The relative loudness is the percentage of the energy/loudness contributed by each object to the final mix. A relative loudness metadata element can also be added to objects/clusters. The process can then sort by saliency to cull masked sources and/or preserve the most important sources.

Clusters can be simplified by further attenuating low importance/low saliency sources.
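The following is a minimal sketch of the two metrics just described, assuming FFT magnitude distance as a stand-in for the spectral difference and RMS energy as a stand-in for loudness; these simplifications, and the function name, are assumptions for illustration only.

```python
import numpy as np

def saliency_and_relative_loudness(object_signals):
    """Per-object saliency as the spectral distance from the average mix spectrum,
    and relative loudness as each object's share of the total signal energy."""
    specs = [np.abs(np.fft.rfft(x)) for x in object_signals]  # magnitude spectra
    mix_spec = np.mean(specs, axis=0)                         # average spectrum of the mix
    saliency = [float(np.linalg.norm(s - mix_spec)) for s in specs]
    energies = np.array([float(np.sum(x ** 2)) for x in object_signals])
    relative_loudness = energies / energies.sum()             # share of total energy
    return saliency, relative_loudness.tolist()

rng = np.random.default_rng(0)
sal, rel = saliency_and_relative_loudness(
    [rng.standard_normal(1024) * g for g in (1.0, 0.5, 0.1)])
```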

[0097] The clustering process is generally used as a means for data rate reduction prior to audio coding. In an embodiment, object clustering/grouping is used during decoding based on the end-point device rendering capabilities. Various different end-point devices may be used in conjunction with a rendering system that employs a clustering process as described herein, ranging from a full cinema playback environment to home theater systems, gaming systems, personal portable devices, and headphone systems. Thus, the same clustering techniques may be utilized while decoding the objects and beds in a device, such as a Blu-ray player, prior to rendering in order that the capabilities of the renderer will not be exceeded. In general, rendering of the object and bed audio format requires that each object be rendered to some set of channels associated with the renderer as a function of each object's spatial information. The computational cost of this rendering scales with the number of objects, and therefore any rendering device will have some maximum number of objects it can render that is a function of its computational capabilities. A high-end renderer, such as an AVR, may contain an advanced processor that can render a large number of objects simultaneously. A less expensive device, such as a home theater in a box (HTIB) or a soundbar, may be able to render fewer objects due to a more limited processor. It is therefore advantageous for the renderer to communicate to the decoder the maximum number of objects and beds that it can accept. If this number is smaller than the number of objects and beds contained in the decoded audio, then the decoder may apply clustering of objects and beds prior to transmission to the renderer so as to reduce the total to the communicated maximum. This communication of capabilities may occur between separate decoding and rendering software components within a single device, such as an HTIB containing an internal Blu-ray player, or over a communications link, such as HDMI, between two separate devices, such as a stand-alone Blu-ray player and an AVR. The metadata associated with objects and clusters may indicate or provide information on how the renderer can optimally reduce the number of clusters, by enumerating the order of importance, signaling the (relative) importance of clusters, or specifying which clusters should be combined sequentially to reduce the overall number of clusters that should be rendered. This is described later with reference to FIG. 15.

[0098] In some embodiments, the clustering process may be performed in the decoder stage 206 with no additional information other than that inherent to each object. However, the computational cost of this clustering may be equal to or greater than the rendering cost that it is attempting to save. A more computationally efficient embodiment involves computing a hierarchical clustering scheme at the encode side 204, where computational resources may be much greater, and sending metadata along with the encoded bitstream that instructs the decoder how to cluster objects and beds into progressively smaller numbers. For example, the metadata may state: first, merge object 2 with object 10; second, merge the resulting object with object 5; and so on.
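A minimal sketch of how a decoder might apply such encoder-supplied merge instructions follows. The representation of the instructions as an ordered list of (source, target) pairs is an assumption made for illustration; the actual metadata syntax is not specified here.

```python
def apply_merge_instructions(objects, merge_list, max_objects):
    """Apply encoder-supplied hierarchical clustering metadata on the decode side.

    objects    : dict mapping object id -> waveform (anything supporting
                 element-wise addition, e.g., a numpy array).
    merge_list : ordered list of (source_id, target_id) pairs, applied in order
                 until the object count fits the renderer's stated capability.
    """
    objects = dict(objects)
    for src, dst in merge_list:
        if len(objects) <= max_objects:
            break
        if src in objects and dst in objects:
            objects[dst] = objects[dst] + objects[src]   # merge audio into the target
            del objects[src]
    return objects

# E.g., metadata saying: first merge object 2 into object 10, then that result into 5.
# merged = apply_merge_instructions(decoded_objects, [(2, 10), (10, 5)], max_objects=8)
```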

[0099] In an embodiment, objects may have one or more time-varying labels associated with them to denote certain properties of the audio contained in the object track. As described above, an object may be categorized into one of several discrete content types, such as dialog, music, effects, background, etc., and these types may be used to help guide the clustering. At the same time, these categories may also be useful during the rendering process. For example, a dialog enhancement algorithm might be applied only to objects labeled as dialog. When objects are clustered, however, the cluster might be comprised of objects with different labels. In order to label the cluster, several techniques may be employed. A single label for the cluster may be chosen, for example, by selecting the label of the object with the largest amount of energy. This selection may also be time varying, where a single label is chosen at regular intervals of time during the cluster's duration, and at each particular interval the label is chosen from the object with the largest energy within that particular interval. In some cases, a single label may not be sufficient, and a new, combined label may be generated. For example, at regular intervals, the labels of all objects contributing to the cluster during that interval may be associated with the cluster. Alternatively, a weight may be associated with each of these contributing labels. For example, the weight may be set equal to the percentage of overall energy belonging to that particular type: for example, 50% dialog, 30% music, and 20% effects. Such labeling may then be used by the renderer in a more flexible manner. For example, a dialog enhancement algorithm may only be applied to clustered object tracks containing at least 50% dialog.
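As a minimal sketch of the energy-weighted labeling just described, the weight of each content label can be computed as that label's share of the cluster's total energy within the current interval; the function name and the 50% dialog-enhancement rule shown at the end are illustrative assumptions.

```python
def cluster_label_weights(object_labels, object_energies):
    """Per-interval combined label for a cluster: the weight of each content label
    equals that label's share of the cluster's total energy in the interval."""
    totals = {}
    for label, energy in zip(object_labels, object_energies):
        totals[label] = totals.get(label, 0.0) + energy
    total_energy = sum(totals.values())
    return {label: e / total_energy for label, e in totals.items()}

weights = cluster_label_weights(["dialog", "dialog", "music", "effects"],
                                [3.0, 2.0, 3.0, 2.0])
# -> {'dialog': 0.5, 'music': 0.3, 'effects': 0.2}
apply_dialog_enhancement = weights.get("dialog", 0.0) >= 0.5
```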

[00100] Once the clusters that combine different objects have been defined, equivalent audio data must be generated for each cluster. In an embodiment, the combined audio data is simply the sum of the original audio content for each original object in the cluster, as shown in FIG. 3A. However, this simple technique may lead to digital clipping. To mitigate this possibility, several different techniques can be employed. For example, if the renderer supports floating-point audio data, then high dynamic range information can be stored and passed on to the renderer to be used in a later processing stage. If only limited dynamic range is available, then it is desirable to either limit the resulting signal or attenuate it by some amount, which can be either fixed or dynamic. In this latter case, the attenuation coefficient will be carried into the object data as a dynamic gain. In some cases, direct summation of the constituent signals can lead to comb-filtering artifacts. This problem can be mitigated by applying decorrelation filters, or similar processes, prior to summation. Another method to mitigate timbre changes due to downmixing is to apply phase alignment to the object signals before summation. Yet another method to resolve comb-filtering or timbre changes is to enforce amplitude- or power-complementary summation by applying frequency-dependent weights to the summed audio signal, in response to the spectrum of the summed signal and the spectra of the individual object signals.

[00101] When generating a downmix, the process can further reduce the bit depth of a cluster to increase the compression of data. This can be performed through noise shaping or a similar process. A bit depth reduction generates a cluster that has fewer bits than the constituent objects. For example, one or more 24-bit objects can be grouped into a cluster that is represented with 16 or 20 bits. Different bit reduction schemes may be used for different clusters and objects depending on the cluster importance or energy, or other factors. Additionally, when generating a downmix, the resulting downmix signal may have sample values beyond the acceptable range that can be represented by digital representations with a fixed number of bits. In such a case, the downmix signal may be limited using a peak limiter, or (temporarily) attenuated by a certain amount to prevent out-of-range sample values. The amount of attenuation applied may be included in the cluster metadata so that it can be un-done (or inverted) during rendering, coding, or other subsequent process.
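The following is a minimal sketch of the attenuation approach described above: the constituent signals are summed, a single attenuation gain is applied if the peak exceeds the representable range, and that gain is recorded in the cluster metadata so a downstream stage could invert it. The metadata key name and the per-block gain (rather than a true limiter) are illustrative assumptions.

```python
import numpy as np

def downmix_with_attenuation(signals, max_abs=1.0):
    """Sum constituent object signals into a cluster downmix; if the sum exceeds
    the representable range, attenuate it and record the gain in the metadata."""
    mix = np.sum(np.asarray(signals, dtype=float), axis=0)
    peak = float(np.max(np.abs(mix))) if mix.size else 0.0
    gain = 1.0 if peak <= max_abs else max_abs / peak
    metadata = {"attenuation_gain": gain}   # carried with the cluster for later inversion
    return mix * gain, metadata

rng = np.random.default_rng(1)
mix, md = downmix_with_attenuation([rng.uniform(-0.8, 0.8, 480) for _ in range(4)])
```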

[00102] In an embodiment, the clustering process may employ a pointer mechanism whereby the metadata includes pointers to specific audio waveforms that are stored in a database or other storage. Clustering of objects is performed by pointing to the appropriate waveforms via combined metadata elements. Such a system can be implemented in an archive system that generates a precomputed database of audio content, transmits the audio waveforms through the coder and decoder stages, and then constructs the clusters in the decode stage using pointers to specific audio waveforms for the clustered objects. This type of mechanism can be used in a system that facilitates packaging of object-based audio for different end-point devices.

[00103] The clustering process can also be adapted to allow for re-clustering on the end-point client device. Generally, substitute clusters replace the original objects; however, for this embodiment, the clustering process also sends error information associated with each object to allow the client to determine whether an object is an individually rendered object or a clustered object. If the error value is 0, then it can be deduced that there was no clustering. If, however, the error value equals some amount, then it can be deduced that the object is the result of some clustering. Rendering decisions at the client can then be based on the amount of error.

In general, the clustering process is run as an off-line process. Alternatively, it may be run as a live process as the content is created. For this embodiment, the clustering component may be implemented as a tool or application that may be provided as part of the content creation and/or rendering system.

Perceptual-Based Clustering

[00104] In an embodiment, a clustering method is configured to combine object and/or bed channels in constrained conditions, e.g., in which the input objects cannot be clustered without violating a spatial error criterion, due to the large number of objects and/or their spatially sparse distribution. In such conditions, the clustering process is not only controlled by spatial proximity (derived from metadata), but is augmented by perceptual criteria derived from the corresponding audio signals. More specifically, objects with a high (perceived) importance in the content will be favored over objects with low importance in terms of minimizing spatial errors. Examples of quantifying importance include, but are not limited to, partial loudness and content semantics (content type).

[00105] FIG. 8 illustrates a system for clustering objects and bed channels into clusters based on perceptual importance in addition to spatial proximity, under an embodiment. As shown in FIG. 8, system 360 comprises a pre-processing unit 366, a perceptual importance component 376, and a clustering component 384. Channel beds and/or objects 364 along with associated metadata 362 are input to the preprocessing unit 366 and processed to determine their relative perceptual importance and then clustered with other beds/objects to produce output beds and/or clusters of objects (which may consist of single objects or sets of objects) 392 along with the associated metadata 390 for these clusters. In an example embodiment or implementation, the input may consist of 11.1 bed channels and 128 or more audio objects, and the output may comprise a set of beds and clusters that comprise on the order of 11-15 signals in total with associated metadata for each cluster, though embodiments are not so limited. The metadata may include information that specifies object position, size, zone masks, decorrelator flags, snap flag, and so on.

[00106] The preprocessing unit 366 may include individual functional components such as a metadata processor 368, an object decorrelation unit 370, an offline processing unit 372, and a signal segmentation unit 374, among other components. External data, such as a metadata output update rate 396 may be provided to the preprocessor 366. The perceptual importance component 376 comprises a centroid initialization component 378, a partial loudness component 380, and a media intelligence unit 382, among other components. External data, such as an output beds and objects configuration data 398 may be provided to the perceptual importance component 376. The clustering component 384 comprises signal merging 386 and metadata merging 388 components that form the clustered beds/objects to produce the metadata 390 and clusters 392 for the combined bed channels and objects. [00107] With regard to partial loudness, the perceived loudness of an object is usually reduced in the context of other objects. For example, objects may be (partially) masked by other objects and/or bed channels present in the scene. In an embodiment, objects with a high partial loudness are favored over objects with a low partial loudness in terms of spatial error minimization. Thus, relatively unmasked (i.e., perceptually louder) objects are less likely to be clustered while relatively masked objects are more likely to be clustered. This process preferably includes spatial aspects of masking, e.g., the release from masking if a masked object and a masking object have different spatial attributes. In other words, the loudness-based importance of a certain object of interest is higher when that object is spatially separated from other objects compared to when other objects are in the direct vicinity of the object of interest.

[00108] In an embodiment, the partial loudness of an object comprises the specific loudness extended with spatial unmasking phenomena. A binaural release from masking is introduced to represent the amount of masking based on the spatial distance between two objects, as provided in the equation below.

f_k(b) = (A + sum_m E_m(b))^α − (A + sum_{m≠k} E_m(b) * (1 − f(k, m)))^α

[00109] In the above equation, the first summation is performed over all m, and the second summation is performed over all m ≠ k. The term E_m(b) represents the excitation of object m, the term A reflects the absolute hearing threshold, and the term (1 − f(k, m)) represents the release from masking. Further details regarding this equation are provided in the discussion below.
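A minimal sketch of this computation follows, assuming per-band excitation values per object are available. The specific form of the spatial release term f(k, m) (here a simple distance-based function capped at 1), the exponent α, and the threshold A are illustrative assumptions only.

```python
import numpy as np

def partial_loudness(excitations, positions, k, A=1.0, alpha=0.2):
    """Per-band partial loudness of object k, following the equation above:
    the loudness of the total excitation minus the loudness of the maskers alone,
    where each masker m's contribution is reduced by a spatial release term
    f(k, m) that grows with the distance between objects k and m."""
    E = np.asarray(excitations, dtype=float)      # shape (n_objects, n_bands)
    pos = np.asarray(positions, dtype=float)
    n_obj = E.shape[0]

    def release(k, m):
        # Illustrative spatial release: more separation -> more release, capped at 1.
        return min(1.0, float(np.linalg.norm(pos[k] - pos[m])))

    total = (A + E.sum(axis=0)) ** alpha
    maskers = np.zeros(E.shape[1])
    for m in range(n_obj):
        if m != k:
            maskers += E[m] * (1.0 - release(k, m))
    return total - (A + maskers) ** alpha

pl = partial_loudness(excitations=[[4.0, 2.0], [1.0, 3.0], [0.5, 0.5]],
                      positions=[(0, 0, 0), (0.2, 0, 0), (1, 1, 0)], k=0)
```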

[00110] With regard to content semantics or audio type, dialogue is often considered to be more important (or draws more attention) than background music, ambience, effects, or other types of content. The importance of an object is therefore dependent on its (signal) content, and relatively unimportant objects are more likely to be clustered than important objects.

[00111] The perceptual importance of an object can be derived by combining the perceived loudness and content importance of the objects. For example, in an embodiment, content importance can be derived based on a dialog confidence score, and a gain value (in dB) can be estimated based on this derived content importance. The loudness or excitation of the object can then be modified by the estimated gain, with the modified loudness representing the final perceptual importance of the object.
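A minimal sketch of this combination follows, assuming a linear mapping from dialog confidence to a gain in dB; the mapping, the maximum gain, and the function name are illustrative assumptions rather than the parameters of any embodiment.

```python
def perceptual_importance(partial_loudness, dialog_confidence, max_gain_db=6.0):
    """Combine loudness and content importance: derive a gain (in dB) from the
    dialog confidence score and apply it to the object's partial loudness; the
    modified loudness serves as the object's overall perceptual importance."""
    gain_db = max_gain_db * dialog_confidence          # assumed linear mapping
    return partial_loudness * (10.0 ** (gain_db / 20.0))

# A quieter but high-confidence dialog object can outrank a louder effects object.
print(perceptual_importance(partial_loudness=0.4, dialog_confidence=0.9))
print(perceptual_importance(partial_loudness=0.5, dialog_confidence=0.1))
```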

[00112] FIG. 9 illustrates functional components of an object clustering process using perceptual importance, under an embodiment. As shown in diagram 900, input audio objects 902 are combined into output clusters 910 through a clustering process 904. The clustering process 904 clusters the objects 902, at least in part, based on importance metrics 908 that are generated from the object signals and optionally their parametric object descriptions. These object signals and parametric object descriptions are input to an estimate importance 906 function, which generates the importance metrics 908 for use by the clustering process 904. The output clusters 910 constitute a more compact representation (e.g., a smaller number of audio channels) than the original input object configuration, thus allowing for reduced storage and transmission requirements, and reduced computational and memory requirements for reproduction of the content, especially on consumer-domain devices with limited processing capabilities and/or that operate on batteries.

[00113] In an embodiment, the estimate importance 906 and clustering 904 processes are performed as a function of time. For this embodiment, the audio signals of the input objects 902 are segmented into individual frames that are subjected to certain analysis components. Such segmentation may be applied to time-domain waveforms, but may also be performed using filter banks or any other transform domain. The estimate importance function 906 operates on one or more characteristics of the input audio objects 902, including content type and partial loudness.

[00114] FIG. 11 is a flowchart illustrating an overall method of processing audio objects based on the perceptual factors of content type and loudness, under an embodiment. The overall acts of method 1100 include estimating the content type of an input object (1102), and then estimating the content-based importance of the object (1104). The partial loudness of the object is calculated as shown in block 1106. The partial loudness can be computed in parallel with the content classification, or even before or after the content classification, depending on system configuration. The loudness measure and content analysis can then be combined (1108) to derive an overall importance based on loudness and content. This may be done by modifying the calculated loudness of an object by the probability of that object being perceptually important due to its content. Once the combined object importance is determined, the object can be clustered with other objects or left unclustered depending on certain clustering processes. To prevent undue clustering and unclustering of objects based on loudness, a smoothing operation may be used to smooth the loudness based on content importance (1110). With regard to loudness smoothing, a time constant is selected based on the relative importance of an object. For important objects, a large time constant that smoothes slowly can be selected so that important objects are consistently selected as the cluster centroid. An adaptive time constant may also be used based on the content importance. The smoothed loudness and content importance of the object are then used to form the appropriate output clusters (1112). Aspects of each of the main process acts illustrated in method 1100 are described in greater detail below. It should be noted that, depending on system constraints and application requirements, certain acts of process 1100 may be omitted, such as in a basic system that bases perceptual importance on only one of content type or partial loudness, or one that does not require loudness smoothing.
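
By way of illustration only, the following minimal Python sketch strings blocks 1102-1112 together for one frame. The function names, parameter values, and the simplified stand-ins for partial loudness and content importance are assumptions made for this sketch and are not taken from the embodiments above; the sections below describe each step in detail.

```python
import numpy as np

def frame_importance(excitations, dialog_probs, prev_importance,
                     G=0.5, a_slow=0.99, a_fast=0.7):
    """One frame of the flow in FIG. 11: per-object loudness (1106), content-based
    modification (1102/1104/1108), and importance-adaptive smoothing (1110).
    The clustering step (1112) would then consume the returned values."""
    out = []
    for E_k, p_k, prev_k in zip(excitations, dialog_probs, prev_importance):
        loudness = float(np.sum(E_k))       # crude stand-in for partial loudness (1106)
        imp = p_k                           # placeholder; see the sigmoid after paragraph [00120]
        combined = loudness * 10.0 ** (G * imp)   # exponential importance-to-gain mapping (1108)
        a = a_slow if imp > 0.5 else a_fast       # slower smoothing for important objects (1110)
        out.append(a * prev_k + (1.0 - a) * combined)
    return out

# Example: a dialog-like object and an ambience-like object over one frame
E = [np.random.rand(20), np.random.rand(20)]
print(frame_importance(E, dialog_probs=[0.9, 0.05], prev_importance=[0.0, 0.0]))
```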

[00115] With regard to estimating the object content type (1102), the content type (e.g., dialog, music, and sound effects) provides critical information for indicating the importance of an audio object. For example, dialog is usually the most important component in a movie since it conveys the story, and proper playback typically requires not allowing the dialog to move around with other moving audio objects. The estimate importance function 906 in FIG. 9 includes an audio classification component that automatically estimates the content type of an audio object to determine whether the audio object is dialog or some other important or unimportant type of object.

[00116] FIG. 10 is a functional diagram of an audio classification component, under an embodiment. As shown in diagram 1000, an input audio signal 1002 is processed in a feature extraction module that extracts features representing the temporal, spectral, and/or spatial properties of the input audio signal. A set of pre-trained models 1006 representing the statistical properties of each target audio type is also provided. For the example of FIG. 10, the models include dialog, music, sound effects, and noise, though other models are also possible, and various machine learning techniques can be applied for model training. The model information 1006 and extracted features 1004 are input to a model comparison module 1008. This module 1008 compares the features of the input audio signal with the model of each target audio type, computes the confidence score of each target audio type, and estimates the best matched audio types. A confidence score for each target audio type is further estimated, representing the probability of or degree of match between the audio object to be identified and the target audio type, with values from 0 to 1 (or any other appropriate range). The confidence scores can be computed using different machine learning methods; for example, the posterior probability can be used directly as a confidence score for a Gaussian Mixture Model (GMM), and sigmoid fitting can be used to approximate the confidence score for a Support Vector Machine (SVM) or AdaBoost. Other similar machine learning methods can also be used. The output 1010 of the model comparison module 1008 comprises the audio type or types and their associated confidence score(s) for the input audio signal 1002.
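
As an illustration of the sigmoid fitting mentioned above for SVM or AdaBoost outputs, the following sketch maps a raw classifier decision value to a confidence score in [0, 1]. The function name and the parameters a and b are assumptions; in practice a and b would be fit on held-out labeled data (as in Platt scaling).

```python
import math

def sigmoid_confidence(decision_score, a=-2.0, b=0.0):
    """Map a raw classifier decision value (e.g., an SVM margin) to a confidence in [0, 1]
    via sigmoid fitting; a and b would normally be fit on held-out labeled data."""
    return 1.0 / (1.0 + math.exp(a * decision_score + b))

# Example: confidence that a frame is dialog given a positive SVM margin
print(sigmoid_confidence(1.3))   # ~0.93 with the illustrative parameters above
```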

[00117] With regard to estimating content-based audio object importance, for dialog-oriented applications the content-based audio object importance is computed based on the dialog confidence score only, assuming that dialog is the most important component of the audio, as stated above. In other applications, confidence scores for different content types may be used, depending on the preferred type of content. In one embodiment, a sigmoid function is utilized, as provided in the following equation:

I_k = \frac{1}{1 + e^{A p_k + B}}

[00118] In the above equation, I_k is the estimated content-based importance of object k, p_k is the corresponding estimated probability of object k consisting of speech/dialogue, and A and B are two parameters.

[00119] In order to further set the content-based importance consistently close to 0 for objects with dialog probability scores less than a threshold c, the above formula can be modified as follows:

I_k = \frac{1}{1 + e^{A (p_k - c) + B}}

[00120] In an embodiment, the constant c can take the value of c = 0.1, and the two parameters A and B can be either constants or adaptively tuned based on the probability score p_k.
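
A minimal sketch of the content-based importance computation, assuming the threshold-shifted sigmoid form reconstructed above; the specific values of A, B, and c here are illustrative only.

```python
import math

def content_importance(p_k, A=-15.0, B=5.0, c=0.1):
    """Content-based importance I_k of object k from its dialog probability p_k, using a
    threshold-shifted sigmoid: with these (illustrative) parameters, scores below c map
    to values near 0 and scores near 1 map to values near 1."""
    return 1.0 / (1.0 + math.exp(A * (p_k - c) + B))

# Example: a clear dialog object vs. an ambience object
print(round(content_importance(0.95), 3), round(content_importance(0.05), 3))  # ~1.0 vs ~0.003
```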

[00121] With regard to calculating object partial loudness, one method to calculate the partial loudness of one object in a complex auditory scene is based on the calculation of excitation levels E(b) in critical bands b. The excitation level of a certain object of interest, E_obj(b), and the excitation of all remaining (masking) signals, E_noise(b), result in a specific loudness N'(b) in band b, as provided in the following equation:

N'(b) = C\left[\bigl(G E_{obj}(b) + G E_{noise}(b) + A\bigr)^{\alpha} - A^{\alpha}\right] - C\left[\bigl(G E_{noise}(b) + A\bigr)^{\alpha} - A^{\alpha}\right],

with G, C, A, and α being model parameters. Subsequently, the partial loudness N is obtained by summing the specific loudness N'(b) across critical bands as follows:

N = \sum_{b} N'(b)

[00122] When an auditory scene consists of K objects (k = 1, ..., K) with excitation levels E_k(b), and, for simplicity of notation, the model parameters G and C are assumed to be equal to +1, the specific loudness N'_k(b) of object k is given by:

N'_k(b) = \left(A + \sum_{m} E_m(b)\right)^{\alpha} - \left(-E_k(b) + A + \sum_{m} E_m(b)\right)^{\alpha}

[00123] The first term in the equation above represents the overall excitation of the auditory scene, plus an excitation A that reflects the absolute hearing threshold. The second term reflects the overall excitation except for the object of interest k, and hence the second term can be interpreted as a 'masking' term that applies to object k. This formulation does not account for a binaural release from masking. A release from masking can be incorporated by reducing the masking term above if the object of interest k is distant from another object m as given by the following equation:

N'_k(b) = \left(A + \sum_{m} E_m(b)\right)^{\alpha} - \left(-E_k(b) + A + \sum_{m} E_m(b)\,\bigl(1 - f(k,m)\bigr)\right)^{\alpha},

[00124] In the above equation, f(k, m) is a function that equals 0 if object k and object m have the same position, and that increases toward +1 with increasing spatial distance between objects k and m. Said differently, the function f(k, m) represents the amount of unmasking as a function of the distance between the parametric positions of objects k and m. Alternatively, the maximum value of f(k, m) may be limited to a value slightly smaller than +1, such as 0.995, to reflect an upper limit on the amount of spatial unmasking for objects that are spatially separated.
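
A numerical sketch of the per-object partial loudness with a binaural release from masking follows. The distance-based form chosen for f(k, m) and the model parameter values are assumptions for illustration, not values prescribed by the embodiments above.

```python
import numpy as np

def partial_loudness(k, excitations, positions, A=1e-4, alpha=0.2, f_max=0.995):
    """Partial loudness of object k, summed over bands, with a binaural release from
    masking: maskers far from object k contribute less to the masking term."""
    E = np.asarray(excitations, dtype=float)   # shape (K, B): per-object, per-band excitation
    P = np.asarray(positions, dtype=float)     # shape (K, D): parametric object positions
    # f(k, m): 0 for co-located objects, growing toward f_max with spatial distance
    dist = np.linalg.norm(P - P[k], axis=1)
    f = np.minimum(dist / (1.0 + dist), f_max)
    total = (A + E.sum(axis=0)) ** alpha
    masking = -E[k] + A + (E * (1.0 - f)[:, None]).sum(axis=0)
    return float(np.sum(total - masking ** alpha))

# Example: three objects with 20-band excitations; the spatially isolated third object
# is partially released from masking by the other two
E = np.abs(np.random.rand(3, 20))
pos = np.array([[0.0, 0.0], [0.1, 0.0], [0.9, 0.8]])
print([partial_loudness(k, E, pos) for k in range(3)])
```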

[00125] The calculation of loudness can also be performed with respect to a defined cluster centroid. In general, a centroid is the location in attribute space that represents the center of a cluster, and an attribute is a set of values corresponding to a measurement (e.g., loudness, content type, etc.). The partial loudness of individual objects is only of limited relevance if objects are clustered, and if the goal is to derive a constrained set of clusters and associated parametric positions that provides the best possible audio quality. In an embodiment, a more representative metric is the partial loudness accounted for by a specific cluster position (or centroid), aggregating all excitation in the vicinity of that position. Similar to the case above, the partial loudness accounted for by cluster centroid c can be expressed as follows:

N'_c(b) = \left(A + \sum_{m} E_m(b)\right)^{\alpha} - \left(A + \sum_{m} E_m(b)\,\bigl(1 - f(m,c)\bigr)\right)^{\alpha}

[00126] In this context, an output bed channel (e.g., an output channel that should be reproduced by a specific loudspeaker in a playback system) can be regarded as a centroid with a fixed position, corresponding to the position of the target loudspeaker. Similarly, input bed signals can be regarded as objects with a position corresponding to the position of the corresponding reproduction loudspeaker. Hence objects and bed channels can be subjected to the exact same analysis, under the constraint that bed channel positions are fixed.

[00127] In an embodiment, the loudness and content analysis data are combined to derive a combined object importance value, as shown in block 1108 of FIG. 11. This combined value based on partial loudness and content analysis can be obtained by modifying the loudness and/or excitation of an object by the probability of that object being perceptually important. For example, the excitation of object k can be modified as follows:

E'_k(b) = E_k(b)\, g(I_k)

[00128] In the above equation, I_k is the content-based object importance of object k, E'_k(b) is the modified excitation level, and g(.) is a function that maps the content importance into excitation level modifications. In an embodiment, g(.) is an exponential function interpreting the content importance as a gain in dB:

g(I_k) = 10^{G \cdot I_k}

where G is a gain applied to the content-based object importance, which can be tuned to obtain the best performance.

[00129] In another implementation, g(.) is a linear function, as follows:

g(I_k) = 1 + G \cdot I_k

[00130] The above equations are merely examples of possible embodiments. Alternative methods can be applied to loudness instead of excitation, and may include ways of combining the information other than a simple product.
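
The following sketch applies either the exponential or the linear mapping g(.) to an object's excitation; the value of G and the function names are illustrative assumptions.

```python
import numpy as np

def modified_excitation(E_k, I_k, G=0.5, mapping="exp"):
    """Modify the excitation of object k by its content-based importance I_k, using either
    the exponential (dB-gain-style) or the linear mapping above; G is a tunable gain."""
    g = 10.0 ** (G * I_k) if mapping == "exp" else 1.0 + G * I_k
    return np.asarray(E_k, dtype=float) * g

# Example: a dialog-like object (I_k near 1) has its excitation boosted before clustering
print(modified_excitation(np.ones(4), I_k=0.9))                     # exponential mapping
print(modified_excitation(np.ones(4), I_k=0.9, mapping="linear"))   # linear mapping
```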

[00131] As also shown in FIG. 11, embodiments also include a method of smoothing loudness based on content importance (1110). Loudness is usually smoothed over frames to avoid rapid changes of object position. The time constant of the smoothing process can be adaptively adjusted based on the content importance. In this manner, for more important objects, the time constant can be larger (smoothing slowly) so that the more important objects are consistently selected as the cluster centroid over frames. This also improves the stability of centroid selection for dialog, since dialog usually alternates between spoken words and pauses, and the loudness may be low during pauses, causing other objects to be selected as the centroid. Without such smoothing, the finally selected centroids would switch between dialog and other objects, causing potential instability.

[00132] In one embodiment, the time constant is positively correlated to the content-based object importance, as follows:

\tau = \tau_0 + \tau_1 \cdot I_k

[00133] In the above equation, τ is the estimated importance-dependent time constant, and τ_0 and τ_1 are parameters. Moreover, similar to the excitation/loudness level modification based on content importance, the adaptive time constant scheme can also be applied to either loudness or excitation.
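
A sketch of importance-adaptive loudness smoothing is shown below, assuming the linear time-constant mapping given above and a conventional one-pole (leaky integrator) smoother; all constants are illustrative.

```python
import math

def smooth_loudness(prev, current, I_k, frame_period=0.02, tau0=0.05, tau1=2.0):
    """One-pole smoothing of a per-object loudness value with an importance-dependent
    time constant tau = tau0 + tau1 * I_k: important objects (I_k near 1) get a long
    time constant and therefore decay slowly across frames."""
    tau = tau0 + tau1 * I_k
    a = math.exp(-frame_period / tau)       # smoothing coefficient for this frame
    return a * prev + (1.0 - a) * current

# Example: a pause in dialog (current loudness 0) barely moves a high-importance object,
# while a low-importance object decays quickly
print(smooth_loudness(prev=1.0, current=0.0, I_k=0.95))   # ~0.99
print(smooth_loudness(prev=1.0, current=0.0, I_k=0.0))    # ~0.67
```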

[00134] As stated above, the partial loudness of audio objects is calculated with respect to a defined cluster centroid. In an embodiment, a cluster centroid calculation is performed such that, when the total number of clusters is constrained, a subset of cluster centroids is selected that accounts for the maximum partial loudness of the centroids. FIG. 12 is a flowchart that illustrates a process of calculating cluster centroids and allocating objects to selected centroids, under an embodiment. Process 1200 illustrates an embodiment of deriving a limited set of centroids based on object loudness values. The process begins by defining the maximum number of centroids in the limited set (1201). This constrains the clustering of audio objects so that certain criteria, such as spatial error, are not violated. For each audio object, the process computes the loudness accounted for given a centroid at the position of that object (1202). The process then selects the centroid that accounts for maximum loudness, optionally modified for content type (1204), and removes all excitation accounted for by the selected centroid (1206). This process is repeated until the maximum number of centroids defined in block 1201 is obtained, as determined in decision block 1208.
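
A greedy sketch of process 1200 follows. The loudness accounted for by a candidate centroid reuses the earlier distance-based f, and "removing" accounted excitation is implemented here by keeping only the residual E_m · f(m, c); both choices, along with the parameter values, are illustrative assumptions.

```python
import numpy as np

def select_centroids(excitations, positions, max_centroids, A=1e-4, alpha=0.2):
    """Greedy sketch of process 1200: repeatedly pick the object position that accounts
    for the most loudness, then remove the excitation that centroid accounts for."""
    E = np.asarray(excitations, dtype=float).copy()   # (K, B) residual excitation
    P = np.asarray(positions, dtype=float)            # (K, D) object positions
    centroids = []
    for _ in range(max_centroids):                    # block 1201 fixes this count
        best_k, best_loudness = None, -np.inf
        for k in range(len(E)):                       # block 1202: loudness accounted for
            dist = np.linalg.norm(P - P[k], axis=1)
            f = np.minimum(dist / (1.0 + dist), 0.995)
            accounted = (A + E.sum(axis=0)) ** alpha \
                        - (A + (E * (1.0 - f)[:, None]).sum(axis=0)) ** alpha
            loudness = float(np.sum(accounted))
            if loudness > best_loudness:
                best_k, best_loudness = k, loudness
        centroids.append(best_k)                      # block 1204: centroid with max loudness
        dist = np.linalg.norm(P - P[best_k], axis=1)
        f = np.minimum(dist / (1.0 + dist), 0.995)
        E *= f[:, None]                               # block 1206: keep only residual excitation
    return centroids

# Example: pick 2 centroids from 5 objects scattered in the plane
E = np.abs(np.random.rand(5, 20))
pos = np.random.rand(5, 2)
print(select_centroids(E, pos, max_centroids=2))
```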

[00135] In an alternative embodiment, the loudness processing could involve performing a loudness analysis on a sampling of all possible positions in the spatial domain, followed by selecting local maxima across all positions. In a further alternative embodiment, Hochbaum centroid selection is augmented with loudness. The Hochbaum centroid selection is based on the selection of a set of positions that have maximum distance with respect to one another. This process can be augmented by multiplying or adding loudness to the distance metric to select centroids.

[00136] As shown in FIG. 12, once the maximum number of centroids has been processed, the audio objects are allocated to appropriate selected centroids (1210). Under this method, when a proper subset of cluster centroids is selected, objects can be allocated to centroids by either adding the object to its closest neighboring centroid, or mixing the object into a set or subset of centroids, for example by means of triangulation, using vector decomposition, or any other means to minimize the spatial error of the object.
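
The allocation step (1210) can be sketched as follows, with nearest-neighbor assignment or simple inverse-distance mixing gains standing in for triangulation or vector decomposition; the function names and gain law are illustrative assumptions.

```python
import numpy as np

def allocate_to_centroids(positions, centroid_idx, mix=False):
    """Allocate objects to the selected centroids (block 1210): either hard-assign each
    object to its nearest centroid, or return simple inverse-distance mixing gains."""
    P = np.asarray(positions, dtype=float)
    C = P[list(centroid_idx)]                                    # centroid positions
    d = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=2)    # (K, n) object-centroid distances
    if not mix:
        return d.argmin(axis=1)                                  # nearest-neighbor assignment
    w = 1.0 / (d + 1e-6)
    return w / w.sum(axis=1, keepdims=True)                      # per-object mixing gains

# Example: four objects, two centroids placed at the positions of objects 0 and 3
pos = [[0.0, 0.0], [0.1, 0.1], [0.8, 0.9], [1.0, 1.0]]
print(allocate_to_centroids(pos, centroid_idx=[0, 3]))           # [0 0 1 1]
print(allocate_to_centroids(pos, centroid_idx=[0, 3], mix=True))
```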

[00137] FIGS. 13A and 13B illustrate the grouping of objects into clusters based on certain perceptual criteria, under an embodiment. Diagram 1300 illustrates the position of different objects in two-dimensional object space represented as an X/Y spatial coordinate system. The relative size of the objects represents their relative perceptual importance, so that larger objects (e.g., 1306) are of higher importance than smaller objects (e.g., 1304). In an embodiment, the perceptual importance is based on the relative partial loudness values and content type of each respective object. The clustering process analyzes the objects to form clusters (groups of objects) that tolerate more spatial error, wherein the spatial error may be defined in relation to a maximum error threshold value 1302. Based on appropriate criteria, such as the error threshold, a maximum number of clusters, and other similar criteria, the objects may be clustered in any number of arrangements.

[00138] FIG. 13B illustrates a possible clustering of the objects of FIG. 13A for a particular set of clustering criteria. Diagram 1350 illustrates the clustering of the seven objects in diagram 1300 into four separate clusters, denoted clusters A-D. For the example shown in FIG. 13B, cluster A represents a combination of low importance objects that tolerate more spatial error; clusters C and D represent clusters based on sources that are of high enough importance that they should be rendered separately; and cluster B represents a case where a low importance object can be grouped with a high importance object. The configuration of FIG. 13B is intended to represent just one example of a possible clustering scheme for the objects of FIG. 13A, and many different clustering arrangements can be selected.

[00139] In an embodiment, the clustering process selects n centroids within the X/Y plane for clustering the objects, where n is the number of clusters. The process selects the n centroids that correspond to the highest importance, or the maximum loudness accounted for. The remaining objects are then clustered according to (1) nearest neighbor assignment, or (2) rendering into the cluster centroids by panning techniques. Thus, audio objects can be allocated to clusters by adding the object signal of a clustered object to the closest centroid, or by mixing the object signal into a (sub)set of clusters. The number of selected clusters may be dynamic and determined through mixing gains that minimize the spatial error in a cluster. The cluster metadata consists of weighted averages of the metadata of the objects that reside in the cluster. The weights may be based on the perceived loudness, as well as object position, size, zone, exclusion mask, and other object characteristics. In general, clustering of objects is primarily dependent on object importance, and one or more objects may be distributed over multiple output clusters. That is, an object may be added to one cluster (uniquely clustered), or it may be distributed over more than one cluster (non-uniquely clustered).
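
A sketch of deriving cluster metadata as importance-weighted averages follows; it handles only position, and the weighting choice and function names are illustrative assumptions.

```python
import numpy as np

def cluster_metadata(positions, importances, assignment, n_clusters):
    """Cluster positions as importance-weighted averages of the member objects' positions;
    other metadata fields (size, zone, exclusion mask) could be combined in the same way."""
    P = np.asarray(positions, dtype=float)
    w = np.asarray(importances, dtype=float)
    member_of = np.asarray(assignment)
    out = np.zeros((n_clusters, P.shape[1]))
    for c in range(n_clusters):
        members = member_of == c
        if members.any():
            out[c] = np.average(P[members], axis=0, weights=w[members])
    return out

# Example: the more important object 2 pulls cluster 1's reported position toward itself
pos = [[0.0, 0.0], [0.1, 0.1], [0.8, 0.9], [1.0, 1.0]]
print(cluster_metadata(pos, importances=[1.0, 0.2, 3.0, 0.5],
                       assignment=[0, 0, 1, 1], n_clusters=2))
```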

[00140] As shown in FIGS. 13A and 13B, the clustering process dynamically groups an original number of audio objects and/or bed channels into a target number of new equivalent objects and bed channels. In most practical applications, the target number is substantially lower than the original number, e.g., 100 original input tracks combined into 20 or fewer combined groups. These solutions apply to scenarios where both bed and object channels are available as an input and/or an output to the clustering process. A first solution to support both objects and bed tracks is to process input bed tracks as objects with a fixed, pre-defined position in space. This allows the system to simplify a scene comprising, for example, both objects and beds into a target number of object tracks only. However, it might also be desirable to preserve a number of output bed tracks as part of the clustering process. Less important objects can then be rendered directly to the bed tracks as a pre-process, while the most important ones can be further clustered into a smaller target number of equivalent object tracks. If some of the resulting clusters have high distortion, they can also be rendered to beds as a post-process, as this may result in a better approximation of the original content. This decision can be made on a time-varying basis, since the error/distortion is a time-varying function.

[00141] In an embodiment, the clustering process involves analyzing the audio content of every individual input track (object or bed) as well as the attached metadata (e.g., the spatial position of the objects) to derive an equivalent number of output object/bed tracks that minimizes a given error metric. In a basic implementation, the error metric 1302 is based on the spatial distortion due to shifting the clustered objects, and it can further be weighted by a measure of the importance of each object over time. The importance of an object can encapsulate other characteristics of the object, such as loudness, content type, and other relevant factors. Alternatively, these other factors can form separate error metrics that can be combined with the spatial error metric.
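
A sketch of an importance-weighted spatial error metric follows; the simple weighted sum of displacement distances is an illustrative choice, not the only possible error metric.

```python
import numpy as np

def weighted_spatial_error(original_positions, rendered_positions, importances):
    """Spatial error of a candidate clustering: the distance each object is moved from its
    original position, weighted by that object's importance and summed over objects."""
    P0 = np.asarray(original_positions, dtype=float)
    P1 = np.asarray(rendered_positions, dtype=float)
    w = np.asarray(importances, dtype=float)
    return float(np.sum(w * np.linalg.norm(P1 - P0, axis=1)))

# Example: moving an important object the same distance costs ten times more
print(weighted_spatial_error([[0, 0], [1, 1]], [[0.2, 0.0], [1.2, 1.0]], [5.0, 0.5]))  # 1.1
```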

Object and Channel Processing

[00142] In an adaptive audio system, certain objects may be defined as fixed objects, such as channel beds that are associated with specific speaker feeds. In an embodiment, the clustering process accounts for bed plus dynamic object interaction, such that when an object creates too much error when being grouped with a clustered object (e.g., it is an outlying object), it is instead mixed to a bed. FIG. 14 illustrates components of a process flow for clustering audio objects and channel beds, under an embodiment. In the method 1400 shown in FIG. 14, it is assumed that beds are defined as fixed-position objects. Outlying objects are then clustered (mixed) with one or more appropriate beds if the object is above an error threshold for clustering with other objects (1402). The bed channel(s) are then labeled with the object information after clustering (1404). The process then renders the audio to more channels and clusters additional channels as objects (1406), and performs dynamic range management on the downmix, or a smart downmix, to avoid artifacts, decorrelation, phase distortion, and the like (1408). The process performs a two-pass culling/clustering process (1410). In an embodiment, this involves keeping the N most salient objects separate and clustering the remaining objects. Thus, the process clusters only the less salient objects into groups or fixed beds (1412). Fixed beds can be added to a moving object or a clustered object, which may be more suitable for particular endpoint devices, such as headphone virtualization. The object width may be used as a characteristic of how many and which objects are clustered together and where they will be spatially rendered following clustering.

Playback System

[00143] As described above, various different end-point devices may be used in conjunction with a rendering system that employs a clustering process as described herein, and such devices may have certain capabilities that may impact the clustering process. FIG. 15 illustrates rendering clustered object data based on end-point device capabilities, under an embodiment. As shown in diagram 1500, a Blu-ray disc decoder 1502 produces simplified audio scene content comprising clustered beds and objects for rendering through a soundbar, home theater system, personal playback device, or some other limited-processing playback system 1504. The characteristics and capabilities of the end-point device are transmitted as renderer capability information 1508 back to the decoder stage 1502 so that the clustering of objects can be performed optimally based on the specific end-point device being used.

[00144] The adaptive audio system employing aspects of the clustering process may comprise a playback system that is configured to render and play back audio content that is generated through one or more capture, pre-processing, authoring, and coding components. An adaptive audio pre-processor may include source separation and content type detection functionality that automatically generates appropriate metadata through analysis of the input audio. For example, positional metadata may be derived from a multi-channel recording through an analysis of the relative levels of correlated input between channel pairs. Detection of content type, such as speech or music, may be achieved, for example, by feature extraction and classification. Certain authoring tools allow the authoring of audio programs by optimizing the input and codification of the sound engineer's creative intent, allowing the engineer to create the final audio mix once in a form that is optimized for playback in practically any playback environment. This can be accomplished through the use of audio objects and positional data that is associated and encoded with the original audio content. In order to accurately place sounds around an auditorium, the sound engineer needs control over how the sound will ultimately be rendered based on the actual constraints and features of the playback environment. The adaptive audio system provides this control by allowing the sound engineer to change how the audio content is designed and mixed through the use of audio objects and positional data. Once the adaptive audio content has been authored and coded in the appropriate codec devices, it is decoded and rendered in the various components of the playback system.

[00145] In general, the playback system may be any professional or consumer audio system, which may include home theater (e.g., A/V receiver, soundbar, and Blu-ray), E-media (e.g., PC, tablet, and mobile, including headphone playback), broadcast (e.g., TV and set-top box), music, gaming, live sound, user-generated content, and so on. The adaptive audio content provides enhanced immersion for the consumer audience for all end-point devices, expanded artistic control for audio content creators, improved content-dependent (descriptive) metadata for improved rendering, expanded flexibility and scalability for consumer playback systems, timbre preservation and matching, and the opportunity for dynamic rendering of content based on user position and interaction. The system includes several components, including new mixing tools for content creators, updated and new packaging and coding tools for distribution and playback, in-home dynamic mixing and rendering (appropriate for different consumer configurations), and additional speaker locations and designs.

[00146] Aspects of the audio environment described herein represent the playback of the audio or audio/visual content through appropriate speakers and playback devices, and may represent any environment in which a listener is experiencing playback of the captured content, such as a cinema, concert hall, outdoor theater, home or room, listening booth, car, game console, headphone or headset system, public address (PA) system, or any other playback environment. The spatial audio content comprising object-based audio and channel-based audio may be used in conjunction with any related content (associated audio, video, graphics, etc.), or it may constitute standalone audio content. The playback environment may be any appropriate listening environment, from headphones or near-field monitors to small or large rooms, cars, open-air arenas, concert halls, and so on.

[00147] Aspects of the systems described herein may be implemented in an appropriate computer-based sound processing network environment for processing digital or digitized audio files. Portions of the adaptive audio system may include one or more networks that comprise any desired number of individual machines, including one or more routers (not shown) that serve to buffer and route the data transmitted among the computers. Such a network may be built on various different network protocols, and may be the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof. In an embodiment in which the network comprises the Internet, one or more machines may be configured to access the Internet through web browser programs.

[00148] One or more of the components, blocks, processes or other functional components may be implemented through a computer program that controls execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, physical (non-transitory), non-volatile storage media in various forms, such as optical, magnetic or semiconductor storage media.

[00149] Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise," "comprising," and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of "including, but not limited to." Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words "herein," "hereunder," "above," "below," and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word "or" is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list, and any combination of the items in the list.

[00150] While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.