

Title:
AUDIO DIRECTIVITY CODING
Document Type and Number:
WIPO Patent Application WO/2022/248632
Kind Code:
A1
Abstract:
The application discloses techniques for compressively encoding and decoding an audio signal representing a directivity pattern, the audio signal having different values at different discrete positions defined on a unit sphere. The audio signal values are encoded in a bitstream as prediction residual values. The prediction residual values are used in sequences to obtain predicted audio signal values by moving along positions defined on parallel lines, parallel to an equator of the sphere, the parallel lines being defined from a first pole toward a second pole of the sphere. The predicted values are obtained based on an initial prediction sequence, on adjacent discrete positions preceding a given position, or on interpolated versions of the audio values of a previously predicted adjacent parallel line.

Inventors:
HERRE JÜRGEN (DE)
GHIDO FLORIN (DE)
Application Number:
PCT/EP2022/064343
Publication Date:
December 01, 2022
Filing Date:
May 25, 2022
Assignee:
FRAUNHOFER GES FORSCHUNG (DE)
International Classes:
G10L19/008; G10L19/04; G10L19/02
Domestic Patent References:
WO2021001358A12021-01-07
Foreign References:
US20110249821A12011-10-13
Other References:
MAJDAK PIOTR ET AL: "Spatially Oriented Format for Acoustics: A Data Exchange Format Representing Head-Related Transfer Functions", AES CONVENTION 134; 20130501, AES, 60 EAST 42ND STREET, ROOM 2520 NEW YORK 10165-2520, USA, 4 May 2013 (2013-05-04), XP040575102
"MPEG-I 6DoF Audio Encoder Input Format", no. n19211, 25 April 2020 (2020-04-25), XP030285463, Retrieved from the Internet [retrieved on 20200425]
PIOTR MAJDAK ET AL.: "Spatially Oriented Format for Acoustics: A Data Exchange Format Representing Head-Related Transfer Functions", 134TH CONVENTION OF THE AUDIO ENGINEERING SOCIETY, May 2013 (2013-05-01)
FRANK WEFERS: "OpenDAFF: A free, open-source software package for directional audio data", DAGA, March 2010 (2010-03-01)
Attorney, Agent or Firm:
ZINKLER, Franz et al. (DE)
Claims:
1. An apparatus (200, 200a) for decoding an audio signal encoded in a bitstream (104), the audio signal having different audio values according to different directions, the directions being associated with discrete positions in a unit sphere (1), the discrete positions in the unit sphere (1) being displaced according to parallel lines from an equatorial line towards a first pole (2) and from the equatorial line towards a second pole (4), the apparatus comprising: a bitstream reader (130) configured to read prediction residual values of the encoded audio signal from the bitstream (104); a prediction section (210') configured to obtain the audio signal (101, 102) by prediction and from the prediction residual values of the encoded audio signal (104), the prediction section (210') using a plurality of prediction sequences (10, 20, 30, 40) including: at least one initial prediction sequence (10, 20), along a line of adjacent discrete positions (10), predicting audio values based on the immediately preceding audio values in the same initial prediction sequence (10); and at least one subsequent prediction sequence (30, 40), divided among a plurality of subsequences (31, 32, 33), each subsequence (31, 32, 33) moving along a parallel line and being adjacent to a previously predicted parallel line, and being such that audio values along a parallel line being processed are predicted based on at least: audio values of the adjacent discrete positions in the same subsequence (31, 32, 33); and interpolated versions (31') of the audio values of the previously predicted adjacent parallel line, each interpolated version (31') of the adjacent previously predicted parallel line having the same number of discrete positions as the parallel line being processed.

2. The apparatus of claim 1, wherein the at least one initial prediction sequence includes a meridian initial prediction sequence (10) along a meridian line of the unit sphere (1), wherein at least one of the plurality of subsequences (31, 32, 33) starts from a discrete position (31a, 32a, 33a) of the already predicted at least one meridian initial prediction sequence (10).

3. The apparatus of claim 2, wherein the at least one initial prediction sequence includes an equatorial initial prediction sequence (20), along the equatorial line of the unit sphere (1), to be performed after the meridian initial prediction sequence (10), the equatorial initial prediction sequence (20) starting from a discrete position (20a) of the already predicted at least one meridian initial prediction sequence (10).

4. The apparatus of claim 3, wherein a first subsequence (31) of the plurality of subsequences is performed along a parallel line adjacent to the equatorial line, and the further subsequences (32, 33) of the plurality of subsequences are performed in a succession towards a pole (4).

5. The apparatus of any of the preceding claims, wherein the prediction section (210') is configured, in at least one initial prediction sequence (10, 20), to predict at least one audio value (601, 701) by linear prediction from one already predicted single audio value in an adjacent discrete position (602, 702).

6. The apparatus of claim 5, wherein the linear prediction is, in at least one of the prediction sequences or in at least one subsequence, an identity prediction, so that the predicted audio value is the same as the single audio value in the adjacent discrete position.
7. The apparatus of any of the preceding claims, wherein the prediction section (210') is configured, in at least one initial prediction sequence (10, 20), to predict at least one audio value (601, 701) by prediction from only one already predicted audio value in a first adjacent discrete position (602, 702) and one already predicted audio value in a second discrete position (605, 705) adjacent to the first adjacent discrete position.

8. The apparatus of claim 7, wherein the prediction is linear.

9. The apparatus of claim 7 or 8, wherein the prediction is such that the already predicted audio value in the first adjacent discrete position (601, 701) is weighted at least twice as much as the already predicted audio value in the second discrete position (605, 705) adjacent to the first adjacent discrete position (601, 701).

10. The apparatus of any of the preceding claims, wherein the prediction section (210') is configured, in at least one subsequence (31, 32, 33), to predict at least one audio value (501) based on: the immediately preceding audio value in the adjacent discrete position (502) in the same subsequence (32); and at least one first interpolated audio value in an adjacent position (503) in the interpolated version (31') of the previously predicted parallel line (31).

11. The apparatus of claim 10, wherein the prediction section (210') is configured, in at least one subsequence (31, 32, 33), to predict at least one audio value also based on: at least one second interpolated audio value in a position (506) adjacent to the position of the first interpolated audio value (503) and adjacent to the adjacent discrete position (502) in the same subsequence.

12. The apparatus of claim 11, wherein, in the interpolation, a same weight is given to: the first interpolated audio value in the adjacent position (503) in the interpolated version (31') of the previously predicted parallel line (31); and the at least one second interpolated audio value in the position (506) adjacent to the position (503) of the first interpolated audio value and adjacent to the previously predicted audio value in the adjacent position (502) in the same subsequence (32).

13. The apparatus of any of the preceding claims, wherein the prediction section (210') is configured, in at least one subsequence (31-33), to predict the at least one audio value through a linear prediction.

14. The apparatus of any of the preceding claims, wherein the interpolated version (31') of the immediately previously predicted parallel line (31) is retrieved through a processing which reduces the number of discrete positions of the previously predicted parallel line (31) to match the number of discrete positions in the parallel line (32) to be predicted.

15. The apparatus of any of the preceding claims, wherein the interpolated version (31') of the immediately previously predicted parallel line is retrieved through circular interpolation.

16. The apparatus of any of the preceding claims, configured to choose, based on signalling in the bitstream (104), to perform the at least one subsequent prediction sequence (30, 40) by moving along the parallel line and being adjacent to a previously predicted parallel line, such that audio values along a parallel line being processed are predicted based on only audio values of the adjacent discrete positions in the same subsequence (31, 32, 33).
17. The apparatus of any of the preceding claims, wherein the prediction section includes an adder (220) to add the predicted values (212) and the prediction residual values (222).

18. The apparatus of any of the preceding claims, configured to separate the frequencies of the audio signal according to different frequency bands, and to perform a prediction for each frequency band.

19. The apparatus of claim 18, wherein the spatial resolution of the unit sphere (1) is the same for higher-frequency bands and for lower-frequency bands.

20. The apparatus of any of the preceding claims, configured to select the spatial resolution of the unit sphere among a plurality of predefined spatial resolutions, based on signalling of the selected spatial resolution in the bitstream.

21. The apparatus of any of the preceding claims, configured to convert the predicted audio values (202) from the logarithmic domain.

22. The apparatus of any of the preceding claims, wherein the predicted audio values are decibel values.

23. The apparatus of any of the preceding claims, comprising a postprocessor (205) configured to redefine the audio signals from differential audio signals to non-differential audio signals by recursively adding each differential audio signal to an adjacent non-differential audio signal.

24. The apparatus of claim 23, wherein a non-differential audio value (201) at a particular discrete position is obtained by subtracting the audio value at the particular discrete position from an audio value of an adjacent discrete position according to a predefined order.

25. The apparatus of claim 23 or 24, configured to perform a prediction for each frequency band, and to compose (205) the frequencies of the audio signal according to the different frequency bands.

26. The apparatus of any of the preceding claims, wherein the bitstream reader (230) is configured to read the bitstream (104) using a single-stage decoding, according to which more frequent predicted audio values are associated with codes of shorter length than the less frequent predicted audio values.
27. An apparatus (100) for encoding an audio signal (102), the audio signal having different audio values according to different directions, the directions being associated with discrete positions in a unit sphere (1), the discrete positions in the unit sphere being displaced according to parallel lines from an equatorial line towards two poles (2, 4), the apparatus comprising: a predictor block (110) configured to perform a plurality of prediction sequences (10, 20, 30) including: at least one initial prediction sequence (10, 20), along a line of adjacent discrete positions (10), by predicting audio values based on the immediately preceding audio values in the same initial prediction sequence; and at least one subsequent prediction sequence (30, 40), divided among a plurality of subsequences (31-33), each subsequence (31-33) moving along a parallel line and being adjacent to a previously predicted parallel line, and being such that audio values are predicted based on at least: audio values of the adjacent discrete positions in the same subsequence; and interpolated versions of the audio values of the previously predicted adjacent parallel line, each interpolated version having the same number of discrete positions as the parallel line; a prediction residual generator (120) configured to compare the predicted values with actual values of the audio signal (102) to generate prediction residual values (122); a bitstream writer (130) configured to write the prediction residual values (122), or a processed version thereof, in a bitstream (104).

28. The apparatus of claim 27, wherein the at least one initial prediction sequence includes a meridian initial prediction sequence (10) along a meridian line of the unit sphere (1), wherein at least one of the plurality of subsequences (31, 32, 33) starts from a discrete position (31a, 32a, 33a) of the already predicted at least one meridian initial prediction sequence (10).

29. The apparatus of claim 28, wherein the at least one initial prediction sequence includes an equatorial initial prediction sequence (20), along the equatorial line of the unit sphere (1), to be performed after the meridian initial prediction sequence (10), the equatorial initial prediction sequence (20) starting from a discrete position (20a) of the already predicted at least one meridian initial prediction sequence (10).

30. The apparatus of claim 29, wherein a first subsequence (31) of the plurality of subsequences is performed along a parallel line adjacent to the equatorial line, and the further subsequences (32, 33) of the plurality of subsequences are performed in a succession towards a pole (4).

31. The apparatus of any of claims 27-30, wherein the predictor block (110) is configured, in at least one initial prediction sequence (10, 20), to predict at least one audio value by linear prediction from one single audio value in the preceding adjacent discrete position.

32. The apparatus of claim 31, wherein the linear prediction is, in at least one of the prediction sequences or in at least one subsequence, an identity prediction, so that the predicted audio value is the same as the single audio value in the adjacent discrete position.
33. The apparatus of any of claims 27-32, wherein the predictor block (110) is configured, in at least one initial prediction sequence (10, 20), to predict at least one audio value by prediction from only one audio value in a first adjacent discrete position and a second audio value in a second discrete position adjacent to the first adjacent discrete position.

34. The apparatus of claim 33, wherein the prediction is linear.

35. The apparatus of claim 33 or 34, wherein the prediction is such that the audio value in the first adjacent discrete position is weighted at least twice as much as the second audio value in the second discrete position adjacent to the first adjacent discrete position.

36. The apparatus of any of claims 27-35, wherein the predictor block (110) is configured, in at least one subsequence (31, 32, 33), to predict at least one audio value (501) based on: the audio value in the adjacent discrete position (502) in the same subsequence (31); and at least one first interpolated audio value in an adjacent position (503) in the interpolated version of the previously predicted parallel line (31).

37. The apparatus of claim 36, wherein the predictor block (110) is configured, in at least one subsequence (31, 32, 33), to predict at least one audio value (501) also based on: at least one second interpolated audio value (506) in the interpolated version (31') of the previously predicted parallel line (31) and in a position adjacent to the position (503) of the first interpolated audio value and adjacent to the position (502) adjacent to the discrete position (501) being predicted in the same subsequence (32).

38. The apparatus of claim 37, wherein, in the interpolation, a same weight is given to: the immediately preceding audio value in the adjacent discrete position in the same subsequence; the first interpolated audio value in the adjacent position in the interpolated version of the previously predicted parallel line; and the at least one second interpolated audio value in the position adjacent to the position of the first interpolated audio value and adjacent to the audio value in the adjacent discrete position in the same subsequence.

39. The apparatus of any of claims 27-38, wherein the predictor block (110) is configured, in at least one subsequence (31-33), to predict the at least one audio value through a linear prediction.

40. The apparatus of any of claims 27-39, wherein the interpolated version of the immediately previously predicted parallel line is retrieved through a processing which reduces the number of discrete positions of the previously predicted parallel line to match the number of discrete positions in the parallel line to be predicted.

41. The apparatus of any of claims 27-40, wherein the interpolated version of the immediately previously predicted parallel line is retrieved through circular interpolation.

42. The apparatus of any of claims 27-41, configured to select, based on simulations, to perform the at least one subsequent prediction sequence (30, 40) by moving along the parallel line and being adjacent to a previously predicted parallel line, such that audio values along a parallel line being processed are predicted based on only audio values of the adjacent discrete positions in the same subsequence (31, 32, 33).

43. The apparatus of any of claims 27-42, configured to separate the frequencies of the audio signal according to different frequency bands, and to perform a prediction for each frequency band.
44. The apparatus of claim 43, wherein the spatial resolution of the unit sphere (1) is the same for higher-frequency bands and for lower-frequency bands.

45. The apparatus of claim 43 or 44, wherein the

46. The apparatus of any of claims 27-45, configured to select the spatial resolution of the unit sphere among a plurality of predefined spatial resolutions, and to signal the selected spatial resolution in the bitstream.

47. The apparatus of any of claims 27-46, configured to convert, upstream of the prediction, the audio values into the logarithmic domain.

48. The apparatus of any of claims 27-47, wherein the audio values are decibel values.

49. The apparatus of any of claims 27-48, configured to quantize, upstream of the prediction, the audio values.

50. The apparatus of any of claims 27-49, configured to redefine the audio signal (102) as a differential audio signal, so that the audio values are differential audio values.

51. The apparatus of claim 50, wherein a differential audio value at a particular discrete position is obtained by subtracting the audio value at the particular discrete position from an audio value of an adjacent discrete position.

52. The apparatus of claim 50 or 51, configured to separate the frequencies of the audio signal according to different frequency bands, and to perform a prediction for each frequency band, wherein a differential audio value at a particular discrete position is obtained by subtracting the audio value at the particular discrete position from an audio value of the same discrete position at the immediately preceding frequency band.

53. The apparatus of any of claims 27-52, wherein the bitstream writer (130) is configured to encode the bitstream using a single-stage encoding, according to which more frequent predicted audio values (112), or processed versions (122) thereof, are associated with codes of shorter length than the less frequent predicted audio values, or processed versions thereof.

54. The apparatus of claim 53, configured to group more frequent predicted audio values, or processed versions thereof, together, and less frequent predicted audio values, or processed versions thereof, together.

55. The apparatus of claim 54, configured to perform a selection between using two-stage encoding and single-stage encoding, and to signal the selection in the bitstream.

56. The apparatus of claim 55, configured to perform the selection based on a comparison of the resolution of the unit sphere with a threshold, so that: if the resolution is finer than the threshold, the single-stage encoding is selected, and if the resolution is coarser than the threshold, the two-stage encoding is selected.
57. A method for decoding an audio signal encoded in a bitstream (104), the audio signal having different audio values according to different directions, the directions being associated with discrete positions in a unit sphere (1), the discrete positions in the unit sphere (1) being displaced according to parallel lines from an equatorial line towards a first pole (2) and from the equatorial line towards a second pole (4), the method comprising: reading prediction residual values of the encoded audio signal from the bitstream (104); decoding the audio signal using the prediction residual values and predicted values (202) from a plurality of prediction sequences (10, 20, 30, 40) including: at least one initial prediction sequence (10, 20), along a line of adjacent discrete positions (10), predicting audio values based on the immediately preceding audio values in the same initial prediction sequence (10); and at least one subsequent prediction sequence (30, 40), divided among a plurality of subsequences (31, 32, 33), each subsequence (31, 32, 33) moving along a parallel line and being adjacent to a previously predicted parallel line, and being such that audio values along a parallel line being processed are predicted based on at least: the audio values of the adjacent discrete positions in the same subsequence (31, 32, 33); and interpolated versions of the audio values of the adjacent previously predicted parallel line, each interpolated version of the adjacent previously predicted parallel line having the same number of discrete positions as the parallel line being processed.

58. A non-transitory storage unit storing instructions which, when executed by a processor, cause the processor to perform the method of claim 57.

59. A bitstream (104) representing a compressed description of an audio signal, in which there are encoded: prediction residual values (122) distributed according to different directions, the directions being associated with discrete positions in a unit sphere (1), the discrete positions in the unit sphere (1) being displaced according to parallel lines from an equatorial line towards a first pole (2) and from the equatorial line towards a second pole (4).
Description:
AUDIO DIRECTIVITY CODING

Disclosed herein are apparatuses and methods for encoding and decoding audio signals having directivity.

Background

Directivity is an important acoustic property of a sound source, e.g. in an immersive reproduction environment. Directivity is frequency dependent and may be measured at discrete frequencies on an octave or third-octave frequency grid. For a given frequency, the directivity is a scalar value defined on the unit sphere. The estimation may be done using a number of microphones distributed evenly on a sphere. The measurements are then post-processed and accurately interpolated on a fine or very fine spherical grid. The values are saved into one of the available interoperability file formats, such as SOFA files [1]. These files can be quite large, up to several megabytes.

However, for inclusion into a bitstream for transmission, a much more compact representation is needed, where the size is reduced to between several hundred bytes and at most a few kilobytes, depending on the number of frequency bands and the accuracy desired for reconstruction (e.g., reduced accuracy on mobile devices).

There are several file formats supporting directivity data, like SOFA [1] and OpenDAFF [2]; however, their main goals are to be very flexible interchange formats and to preserve a significant amount of additional metadata, like how the data was generated and what equipment was used for the measurements. This additional metadata makes it easier to interpret and load the data automatically in research applications, because some file formats allow a large number of heterogeneous data types. Moreover, the spherical grid usually defined is fine or very fine, so that the much simpler approach of nearest-neighbor search can be used instead of 2D interpolation.

A system for obtaining more compact representations is pursued.

References

[1] Piotr Majdak et al., "Spatially Oriented Format for Acoustics: A Data Exchange Format Representing Head-Related Transfer Functions", 134th Convention of the Audio Engineering Society, convention paper 8880, May 2013.

[2] Frank Wefers, "OpenDAFF: A free, open-source software package for directional audio data", DAGA 2010, March 2010.
Summary of the invention

There is proposed an apparatus for decoding an audio signal encoded in a bitstream, the audio signal having different audio values according to different directions, the directions being associated with discrete positions in a unit sphere, the discrete positions in the unit sphere being displaced according to parallel lines from an equatorial line towards a first pole and from the equatorial line towards a second pole, the apparatus comprising: a bitstream reader configured to read prediction residual values of the encoded audio signal from the bitstream; a prediction section configured to obtain the audio signal by prediction and from the prediction residual values of the encoded audio signal, the prediction section using a plurality of prediction sequences including: at least one initial prediction sequence, along a line of adjacent discrete positions, predicting audio values based on the immediately preceding audio values in the same initial prediction sequence; and at least one subsequent prediction sequence, divided among a plurality of subsequences, each subsequence moving along a parallel line and being adjacent to a previously predicted parallel line, and being such that audio values along a parallel line being processed are predicted based on at least: audio values of the adjacent discrete positions in the same subsequence; and interpolated versions of the audio values of the previously predicted adjacent parallel line, each interpolated version of the adjacent previously predicted parallel line having the same number of discrete positions as the parallel line being processed.

There is also proposed an apparatus for encoding an audio signal, the audio signal having different audio values according to different directions, the directions being associated with discrete positions in a unit sphere, the discrete positions in the unit sphere being displaced according to parallel lines from an equatorial line towards two poles, the apparatus comprising: a predictor block configured to perform a plurality of prediction sequences including: at least one initial prediction sequence, along a line of adjacent discrete positions (10), by predicting audio values based on the immediately preceding audio values in the same initial prediction sequence; and at least one subsequent prediction sequence, divided among a plurality of subsequences, each subsequence moving along a parallel line and being adjacent to a previously predicted parallel line, and being such that audio values are predicted based on at least: audio values of the adjacent discrete positions in the same subsequence; and interpolated versions of the audio values of the previously predicted adjacent parallel line, each interpolated version having the same number of discrete positions as the parallel line; a prediction residual generator (120) configured to compare the predicted values with actual values of the audio signal (102) to generate prediction residual values (122); a bitstream writer (130) configured to write the prediction residual values (122), or a processed version thereof, in a bitstream (104).

Description of the figures

Figs. 1a, 1b, 1c, 1d, 1e, 1f show examples of encoders.
Figs. 2a, 2b show examples of decoders.
Fig. 3 shows how predictions may be performed.
Fig. 4 shows an example of a decoding method.
Fig. 5 shows an example of an encoding operation.
Figs. 6 and 7 show examples of predictions.
Encoder and encoder method

Fig. 1f shows an example of an encoder 100. The encoder 100 may perform predictions (e.g. 10, 20, 30, 40, see below) from the audio signals 101 (e.g. in their processed version 102), to obtain predicted values 112. A prediction residual generator 120 may generate prediction residual values 122 from the predicted values 112. An example of operation of the prediction residual generator 120 may be subtracting the predicted values 112 from the audio signal values 102 (e.g., a difference between an adjacent value of the signal 102 and the predicted value 112). The audio signal 102 is here below also called "cover". The predictor block 110 and the prediction residual generator 120 may constitute a prediction section 110'. The prediction residual values 122 may be input into the bitstream writer 130 to generate a bitstream 104. The bitstream writer 130 may include, for example, an entropy coder.

The audio signal 102 may be a preprocessed version of an audio signal 101 (e.g. as output by a preprocessor 105). The preprocessor 105 may, for example, perform at least one of:

1) converting the audio signal 101 from a linear scale onto a logarithmic scale (e.g. decibel scale);
2) decomposing the audio signal among different frequency bands.

The preprocessor 105 may decompose the audio signal 101 into different frequency bands, so that the preprocessed audio signal 102 includes a plurality of bands (e.g., from a lowest frequency band to a highest frequency band). The operations at the predictor block 110, the prediction residual generator 120 (or, more in general, at the prediction section 110'), and/or the bitstream writer 130 may be repeated for each band. It will be shown that it is also possible to perform a prediction selection to decide which type (e.g. order) of prediction is to be performed (see below).

Fig. 1c shows a variant of Fig. 1f, in which a differentiation generator 105a generates a differentiation residual 105a' with respect to the preceding frequency band (this cannot be carried out for the first, lowest, frequency band). The preprocessed audio signal 102 may be subjected to differentiation at the differentiation residual generator 105a, to generate differentiation residuals 105a'. The prediction section 110' may perform a prediction on the signal 102, to generate a predicted value 112.

Fig. 5 shows an example of an encoding operation 500. At least some of the steps may be performed by the encoder 100, 100a, 100b, 100d, 100e, 100f.

A first encoding operation 502 (first stage) may be a sampling operation, according to which a directional signal is obtained. However, the sampling operation 502 need not necessarily be performed in the method 500 or by the encoder 100, 100a, 100b, and can be performed, for example, by an external device (and the audio signal 101 may therefore be stored in a storage, or transmitted to the encoder 100, 100a, 100b).

A step 504 comprises a conversion to decibel or another logarithmic scale of the obtained values and/or a decomposition of the audio signal 101 into different frequency bands. The subsequent steps 508-514 may therefore be performed for each band, e.g. in the logarithmic (e.g. decibel) domain.

At step 508, a third stage of differentiating may be performed (e.g., to obtain a differential value for each frequency band). This step may be performed by the differentiation generator 105a, and may be skipped in some examples (e.g. in Fig. 1f).
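As a rough illustration of the conversion stage 504 (together with the quantization mentioned further below), consider the following minimal sketch; the function name, the dB floor and the 1.5 dB step are assumptions made for illustration only, not the normative implementation:

```python
import numpy as np

def preprocess(gains, step_db=1.5):
    """Convert linear directivity gains to dB and quantize them to a
    uniform step. step_db = 1.5 is an illustrative value from the
    1.25 dB ... 6 dB range mentioned below; the floor avoids log10(0)."""
    cover_db = 20.0 * np.log10(np.maximum(gains, 1e-6))  # stage 504: linear -> dB
    return np.round(cover_db / step_db).astype(int)      # quantized step indices

# toy usage: four directivity gains of one frequency band
print(preprocess(np.array([1.0, 0.5, 0.25, 0.125])))     # -> [0 -4 -8 -12]
```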
At least one of the steps 504 and 508 (second and third stages) may be performed by the preprocessor 105 or by the block 105a, and may provide, for example, a processed version 102 of the audio signal 101 (the prediction may be performed on the processed version). However, it is not strictly necessary that the steps 504 and 508 are performed by the encoder 100, 100a, 100b, 100d, 100e, 100f: in some examples, the steps 504 and/or 508 may be performed by an external device, and the processed version 102 of the audio signal 101 may be used for the prediction.

At steps 509 and 510, a fourth stage of predicting audio values (e.g., for each frequency band) is performed (e.g. by the predictor block 110). An optional step 509 of selecting the prediction may be performed by simulating the different predictions (e.g. different orders of prediction) and deciding to use the prediction which, according to the simulation, provides the best prediction effect. For example, the best prediction effect may be the one which minimizes the prediction residuals and/or the one which minimizes the length of the bitstream 104. At step 510, the prediction is performed (if step 509 has been performed, the prediction is the one chosen at step 509; otherwise the prediction is predetermined).

At step 512, a prediction residual calculating step may be performed. This can be performed by the prediction residual generator 120 (or, more in general, by the prediction section 110'). For example, the prediction residual 122 between the audio signal 101 (or its processed version 102) and the predicted values 112 may be calculated, to be encoded in the bitstream.

At step 514, a fifth stage of bitstream writing may be performed, for example, by the bitstream writer 130. The bitstream writing 514 may be subjected, for example, to a compression, e.g. by substituting the prediction residuals with codes, to minimize the bitlength in the bitstream 104.

Fig. 1a (and its corresponding Fig. 1d, which lacks the residual generator 105a) shows an encoder 100a (respectively 100d), which can be used instead of the encoder 100 of Fig. 1f. The audio signal 101 is pre-processed and/or quantized at the pre-processing block 105. Accordingly, a pre-processed audio signal 102 may be obtained. The preprocessed audio signal 102 may be used for prediction at the predictor block 110 (or, more in general, at the prediction section 110'), so as to obtain predicted values 112. A differential residual generator 105a (in Figs. 1a-1c, but not in Figs. 1d-1e) may output differential residuals 105a'. A prediction residual generator 120 can generate prediction residuals 122, by subtracting the results of the predictions 112 from the differential residuals 105a'. In the examples of Figs. 1d-1e, the residual 122 is generated by the difference between the predicted values 112 and the real values 102. The prediction residuals 122 may be coded in a bitstream writer 130. The bitstream writer 130 may have an adaptive probability estimate 132, which estimates the probability of each code. The probability may be updated, as can be seen by the feedback line 133. A range coder 134 may insert codes, according to their probabilities, into the bitstream 104.

Fig. 1b (and its corresponding Fig. 1e, which lacks the residual generator 105a) shows an encoder 100b (respectively 100e), similar to the example of Fig. 1a.
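The interplay of the probability estimate 132, the feedback 133 and the range coder 134 can be pictured with the sketch below. It models only the adaptive-probability part and prints the ideal (entropy) cost that a range coder would approach; the class name, alphabet size and Laplace smoothing are assumptions, not the actual codec:

```python
import math
from collections import Counter

class AdaptiveModel:
    """Running frequency model for residual symbols, updated after each
    coded symbol (cf. the feedback line 133 towards the estimate 132)."""
    def __init__(self, alphabet_size=256):
        self.counts = Counter()
        self.total = 0
        self.alphabet_size = alphabet_size

    def ideal_code_length(self, symbol):
        # Laplace-smoothed probability, so unseen residuals remain codable.
        p = (self.counts[symbol] + 1) / (self.total + self.alphabet_size)
        return -math.log2(p)

    def update(self, symbol):
        self.counts[symbol] += 1
        self.total += 1

model = AdaptiveModel()
bits = 0.0
for residual in [0, 0, 1, 0, -1, 0, 0, 2]:   # toy residual stream
    bits += model.ideal_code_length(residual)
    model.update(residual)
print(round(bits, 1))   # the frequent residual 0 becomes cheaper over time
```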
The difference from the example of Fig. 1a is that a predictor selection block 109a (part of the prediction section 110') may perform a prediction selection 109a' (which may be carried out at the prediction selection step 509) to decide which order of prediction to use, for example (the orders of prediction are disclosed in Figs. 6 and 7, see below).

Different frequency bands may have the same spatial resolution.

Decoder and decoding method

Figs. 2a and 2b each show an example of a decoder 200, 200a (the difference between the two decoders is that the decoder 200 of Fig. 2a does not include the integrator 205a, whose role is the reverse of that of the differentiation block 105a of Figs. 1a-1c). The decoder 200 may read a bitstream 104 (e.g., the bitstream as generated by the encoder 100, 100a, 100b, 100d, 100e, 100f). The bitstream reader 230 may provide values 222 as decoded from the bitstream 104. The values 222 may represent the prediction residual values 122 of the encoder. As explained above, the prediction residual values 222 may be different for different frequency bands. The values 222 may be input to a predictor block 210 and to an integrator 205a. The predictor block 210 may predict values 212 in the same way as the predictor block 110 of the encoder, but with a different input.

The predicted values 212 output by the predictor block 210 are added, at the prediction residual adder 220, to the prediction residual values 222, so as to obtain the decoded values 202. In general terms, the predictor 210 and the adder 220 (and the integrator block 205a, if provided) are part of a prediction section 210'.

The values 202 may then be subjected to a post-processor 205, e.g., by converting from the logarithmic (decibel) domain onto the linear domain and/or by composing the different frequency bands.

Fig. 4 shows an example of a decoding method 800, which may be performed, for example, by the decoder 200. At step 815 there may be an operation of bitstream reading, to read the bitstream 104. At step 810 there may be an operation of predicting (e.g., see below). At step 812 there is an operation of applying the prediction residual, e.g. at the prediction residual adder 220. At step 808 (optional) there may be an operation of inverse differentiation (e.g. summation, integration), e.g. at block 205a. At step 804 there may be an operation of conversion from the logarithmic (decibel) domain to the linear domain and/or of recomposition of the frequency bands. At step 802 there may be a rendering operation.

Different frequency bands may have the same spatial resolution.

Coordinates in the unit sphere

Fig. 3 shows an example of the coordinate system which is used to encode an audio signal 101 (102). The audio signal 101 (102) is directional, in the sense that different directions have, in principle, different audio values (which may be in the logarithmic domain, such as decibels). In order to provide audio values for different directions, a unit sphere 1 is used as a coordinate reference (Fig. 3). The coordinate reference is used to represent the directions of the sound, imagining a human listener at the center of the sphere. Different directions of provenance of sound are associated with different positions in the unit sphere 1. The positions in the unit sphere 1 are discrete, since it is not possible to have a value for each possible direction (of which there are theoretically an infinite number).
The discrete positions in the unit sphere 1 (which are also called "points" in some parts below) may be displaced according to a coordinate system which resembles the geographic coordinate system normally used for the planet Earth (the listener being positioned in the center of the Earth) or for astronomical coordinates. Here, a north pole 4 (over the listener) and a south pole 2 (below the listener) are defined. An equatorial line is also present (corresponding to the line 20 in Fig. 3), at the height of the listener. The equatorial line is a circumference having, as a diameter, the diameter of the unit sphere 1. A plurality of parallel lines (circumferences) are defined between the equatorial line and each of the two poles. From the equatorial line towards the north pole 4, a plurality of parallel lines are therefore defined with monotonically decreasing diameter, covering the northern hemisphere. The same applies to the succession from the equatorial line towards the south pole 2 through other parallel lines, covering the southern hemisphere. The parallel lines are therefore associated with different elevations (elevation angles) of the audio signal. It may be understood that the parallel lines (including the equatorial line), plus the south pole 2 and the north pole 4, cover the totality of the unit sphere 1. Therefore, each parallel line and each pole is associated with one unique elevation (e.g. the equatorial line being associated with an elevation of 0°, the north pole with 90°, the parallel lines in the northern hemisphere having an elevation between 0° and 90°, the south pole with -90°, and the parallel lines in the southern hemisphere having an elevation between -90° and 0°). Furthermore, at least one meridian may be defined (in Fig. 3, one meridian is shown in correspondence with the reference numeral 10). The at least one meridian may be understood as an arc of a circumference which goes from the south pole 2 toward the north pole 4. The at least one meridian may represent an arc (e.g. a semi-circumference) of the maximum circumference in the unit sphere 1, from pole to pole. The circumferential extension of the meridian may be half of the circumferential extension of the equatorial line. We may consider the north pole 4 and the south pole 2 to be part of the meridian. It is to be noted that at least one meridian is defined, being formed by the discrete positions aligned with each other. However, by virtue of azimuthal misalignments between the discrete positions of adjacent parallel lines, it is not guaranteed that there are other meridians all along the surface of the unit sphere 1. This is not an issue, since it is sufficient that only one single meridian is identified, formed by discrete positions (taken from different parallels) which are aligned with each other. The discrete positions may be measured, for each parallel line, by azimuthal angles with respect to a reference azimuth of 0°. The meridian may be at the reference azimuth 0°, and may therefore be used as a reference meridian for the measurement of the azimuth. Therefore, each direction may be associated with a parallel or a pole (with a particular elevation) and with a meridian (through a particular azimuth).
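A possible construction of such a discrete grid is sketched below, under the assumption (made only for illustration) that each parallel carries a number of points roughly proportional to its circumference; the actual grid of the codec may differ:

```python
import math

def spherical_grid(parallels_per_hemisphere=4, points_on_equator=16):
    """Return a dict mapping the elevation index ei of each parallel to
    the number of discrete azimuth positions on it. ei = 0 is the
    equator, positive ei go towards the north pole, negative ei towards
    the south pole, and each pole carries a single point."""
    n = parallels_per_hemisphere
    grid = {}
    for ei in range(-n, n + 1):
        elevation_deg = 90.0 * ei / n
        if abs(ei) == n:
            grid[ei] = 1   # pole: one discrete position
        else:
            # point count shrinks with the circumference, i.e. with cos(elevation)
            grid[ei] = max(1, round(points_on_equator * math.cos(math.radians(elevation_deg))))
    return grid

print(spherical_grid())   # e.g. {-4: 1, -3: 6, -2: 11, -1: 15, 0: 16, 1: 15, ...}
```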
In examples, the coordinates may be expressed, instead of as angles, in terms of indexes, such as:

1) an elevation index ei (indicating the parallel of the currently predicted discrete position, the equator having ei=0 corresponding to the elevation 0°, the south pole and the parallel lines in the southern hemisphere having negative indexes, the north pole and the parallel lines in the northern hemisphere having positive indexes);
2) an azimuth index ai (indicating the azimuthal angle of the currently predicted discrete position, the reference meridian having ai=0, corresponding to an azimuth of 0°, the subsequent discrete positions being progressively numbered);
3) so that the value (sometimes expressed as cover[ei][ai]) indicates the predicted value in the discrete position, once predicted.

Preprocessing and differentiating at the encoder

Some preprocessing (e.g. 504) and differentiating (e.g. 508) may be performed on the audio signal 101, to obtain a processed version 102, e.g. through the preprocessor 105, and/or to obtain a differentiation residual version 105a', e.g. through the differentiation residual generator 105a.

For example, the audio signal 101 may be decomposed (at 504) into the different frequency bands. Each prediction process (e.g. at 510) may be performed, subsequently, for a specific frequency band. Therefore, the encoded bitstream 104 may have, encoded therein, different prediction residuals for different frequency bands. Therefore, in some examples, the discussion below regarding the predictions (prediction sequences, prediction subsequences, unit sphere, and so on) is valid for each frequency band, and may be repeated for the other frequency bands.

Further, the audio values may be converted (e.g. at 504) onto a logarithmic scale, such as the decibel domain. It is possible to select between coarser and finer quantization steps (e.g., from 1.25 dB to 6 dB).

The audio values along the different positions of the unit sphere 1 may be subjected to differentiation, as sketched below. For example, a differential audio value 105a' at a particular discrete position of the unit sphere 1 may be obtained by subtracting the audio value at the particular discrete position from an audio value of an adjacent discrete position (which may be an already differentiated discrete position). A predetermined path may be followed for differentiating the different audio values. For example, it may be that a particular first point is not provided differentially (e.g., the south pole), while all the remaining differentiations may be performed along a predefined path. In examples, sequences may be defined which may be the same sequences as for the prediction. In some examples, it is possible to separate the frequencies of the audio signal according to different frequency bands, and to perform a prediction for each frequency band.

It is to be noted that the predictor block 110 is in general fed with the preprocessed audio signal 102, and not with the differentiation residual 105a'. Subsequently, the prediction residual generator 120 will generate the prediction residual values 122.

The techniques above may be combined with each other.
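Using the cover[ei][ai] notation introduced above, the differentiation along a predefined path can be sketched as follows; the simple lexicographic path and the names are assumptions (the codec may, as noted, reuse the prediction sequences as path):

```python
def differentiate(cover):
    """Replace each quantized value by its difference to the previously
    visited value along a predefined path; the first visited value
    (here the south pole) stays non-differential.

    cover: dict mapping (ei, ai) -> quantized dB value."""
    path = sorted(cover)   # illustrative predefined order, south pole first
    out, prev = {}, None
    for pos in path:
        out[pos] = cover[pos] if prev is None else cover[pos] - cover[prev]
        prev = pos
    return out

cover = {(-1, 0): 3, (0, 0): 5, (0, 1): 6, (1, 0): 4}
print(differentiate(cover))   # {(-1, 0): 3, (0, 0): 2, (0, 1): 1, (1, 0): -2}
```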
For a first frequency band (e.g., the lowest frequency band), the differential values may be obtained by differentiating from adjacent discrete positions of the same frequency band, while for the remaining frequency bands (e.g., higher frequencies) it is possible to perform the differentiation from the immediately preceding adjacent frequency band.

Prediction at the encoder and at the decoder

A description of the prediction as performed at the predictor block 110 of the encoder and at the predictor block 210 of the decoder, or as carried out at step 510, is now discussed. It is noted that, when the prediction is performed at the encoder, the input is the preprocessed audio signal 102.

A prediction of the audio values along the entire unit sphere 1 may be performed according to a plurality of prediction sequences. In examples, there may be performed at least one initial prediction sequence and at least one subsequent prediction sequence. The at least one initial prediction sequence (which can be embodied by two initial prediction sequences 10, 20) may extend along a line (e.g. a meridian) of adjacent discrete positions, by predicting audio values based on the immediately preceding audio values in the same initial prediction sequence. For example, there may be at least a first sequence 10 (which may be a meridian initial prediction sequence) which extends from the south pole 2 towards the north pole 4, along the at least one meridian. Prediction values may therefore be propagated along the reference meridian line (azimuth = 0°). It will be shown that, at the south pole 2 (starting position of the first sequence), a non-predicted value may be inserted, but the subsequent prediction values are propagated through the meridian towards the north pole 4.

A second initial prediction sequence 20 may be defined along the equatorial line. Here, the line of adjacent discrete positions is formed by the equatorial line (equatorial circumference), and the audio values are predicted according to a predefined circumferential direction, e.g., from the minimum positive azimuth (closest to 0°) towards the maximum azimuth (closest to 360°). Notably, the second sequence 20 starts with a value at the intersection of the predicted meridian line (predicted at the first sequence 10) and the equatorial line. That position is the starting position 20a of the second sequence 20 (and may be the value with azimuth 0° and elevation 0°). After the second prediction sequence 20, therefore, at least one discrete position of the at least one meridian line (e.g. reference meridian) and at least one discrete position of each parallel line have been predicted.

The at least one subsequent prediction sequence may include, for example, a third sequence 30 for predicting discrete positions in the northern hemisphere, between the equatorial line and the north pole 4. A fourth sequence 40 may predict positions in the southern hemisphere, between the equatorial line and the south pole 2 (the positions already predicted in the sequences 10 and 20 are generally not predicted again in the subsequent prediction sequences 30, 40).

Each of the subsequent prediction sequences (third prediction sequence 30, fourth prediction sequence 40) may in turn be subdivided into a plurality of subsequences. Each subsequence may move along one parallel line adjacent to a previously predicted parallel line. The overall traversal is sketched below.
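Building on the grid structure assumed earlier, the traversal of the four sequences might look as follows; the generator name and the exact pole handling are assumptions for illustration:

```python
def prediction_order(points_per_parallel):
    """Yield the discrete positions (ei, ai) in the order of the four
    sequences: 10 (reference meridian, ai = 0, pole to pole), 20
    (equator, ei = 0), 30 (parallels towards the north pole) and 40
    (parallels towards the south pole).

    points_per_parallel: dict ei -> number of azimuth positions."""
    n = max(points_per_parallel)
    for ei in range(-n, n + 1):                   # sequence 10: meridian
        yield (ei, 0)
    for ai in range(1, points_per_parallel[0]):   # sequence 20: equator
        yield (0, ai)
    for ei in range(1, n):                        # sequence 30: subsequences 31, 32, 33
        for ai in range(1, points_per_parallel[ei]):
            yield (ei, ai)
    for ei in range(-1, -n, -1):                  # sequence 40: southern hemisphere
        for ai in range(1, points_per_parallel[ei]):
            yield (ei, ai)

order = list(prediction_order({-2: 1, -1: 4, 0: 8, 1: 4, 2: 1}))
print(order[:7])   # meridian first: [(-2, 0), (-1, 0), (0, 0), (1, 0), (2, 0), (0, 1), (0, 2)]
```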
For example, Fig. 3 shows a first subsequence 31, a second subsequence 32 and other subsequences 33 of the third sequence 30 in the northern hemisphere. As can be seen, each of the subsequences 31, 32, 33 moves along one parallel line and has a circumferential length smaller than that of the preceding parallel line (i.e. the closer the subsequence is to the north pole, the smaller the number of discrete positions in the parallel, and the fewer audio values are to be predicted). The first subsequence 31 is performed before the second subsequence 32, which in turn is performed before the immediately adjacent subsequence of the third sequence 30, moving towards the north pole 4 from the equatorial line. Each subsequence (31, 32, 33) is associated with a particular elevation (since it only predicts positions in one single parallel line), and moves along increasing azimuthal angles. Each subsequence (31, 32, 33) is such that an audio value is predicted based on at least the audio value of the discrete position immediately before in the same subsequence (that audio value shall already have been predicted) and audio values of the adjacent, immediately previously predicted parallel line. Each subsequence 31, 32, 33 starts from a starting position (31a, 32a, 33a), and propagates along a predefined circumferential direction (e.g., from the azimuthal angle closest to 0° towards the azimuthal angle closest to 360°). The starting position (31a, 32a, 33a) may be in the reference meridian line, which has been predicted at the meridian initial prediction sequence 10. By virtue of the fact that the equatorial line has already been predicted in the second sequence 20, the first subsequence 31 of the third sequence 30 may be predicted also by relying on the already predicted audio values in the discrete positions at the equatorial line. For this reason, the audio values predicted in the second sequence 20 are used for predicting the first subsequence 31 of the third sequence 30. Therefore, the prediction carried out in the first subsequence 31 of the third sequence 30 is different from that of the second sequence 20 (the equatorial initial prediction sequence): in the second prediction sequence 20 the prediction has only been based on audio values in the equatorial line, while the predictions in the first subsequence 31 may be based not only on already predicted audio values in the same parallel line, but also on previously predicted audio values in the equatorial line.

Since the equatorial line (circumference) is longer than the parallel line on which the first subsequence 31 is processed, there is not an exact correspondence between the discrete positions in the parallel line in which the first subsequence 31 is carried out and the discrete positions in the equatorial line (i.e. the discrete positions of the equatorial line and of the parallel line are misaligned with each other). However, it has been understood that it is possible to interpolate the audio values of the equatorial line to reach an interpolated version of the equatorial line, with the same number of discrete positions as the parallel line. The same is repeated, parallel line by parallel line, for the remaining subsequences of the same hemisphere.
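The resampling of the previously predicted parallel line onto the (smaller) point count of the current parallel line may be pictured as below: a sketch of circular linear interpolation, under the assumption of uniformly spaced azimuths (the actual interpolation of the codec may differ):

```python
def circular_interpolate(prev_line, n_target):
    """Resample prev_line (values on a full circle, uniformly spaced in
    azimuth) to n_target uniformly spaced positions, wrapping around the
    end of the circle (hence 'circular')."""
    n_src = len(prev_line)
    out = []
    for k in range(n_target):
        x = k * n_src / n_target            # fractional source index
        i = int(x)
        frac = x - i
        a = prev_line[i % n_src]
        b = prev_line[(i + 1) % n_src]      # wraps past the last sample
        out.append((1.0 - frac) * a + frac * b)
    return out

# 4 equatorial values resampled to the 3 positions of the next parallel
print(circular_interpolate([0.0, 1.0, 2.0, 1.0], 3))   # [0.0, 1.333..., 1.333...]
```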
In some examples:

1) each subsequence (31, 32, 33) of the third sequence 30 may start from a starting position (31a, 32a, 33a) in the reference meridian line, which has already been predicted in the meridian initial prediction sequence 10;
2) after the already-predicted starting position (31a, 32a, 33a), each determined discrete position of each subsequence (31, 32, 33) is predicted by relying on:
a. the previously predicted, immediately preceding discrete position in the same subsequence;
b. (in some cases, also the already predicted second immediately preceding audio value in the same subsequence, which is adjacent to the immediately preceding discrete position, but is not adjacent to the determined discrete position);
c. an adjacent interpolated version of the audio values in the immediately preceding parallel line;
d. (in some cases, also the already predicted audio value at the same determined discrete position, but obtained at a previous frequency band).

While the third sequence 30 moves from the equatorial line towards the north pole 4, propagating audio values in the northern hemisphere, the fourth sequence 40 moves from the equatorial line towards the south pole 2, propagating audio values in the southern hemisphere. Apart from that, the third and the fourth sequences 30 and 40 are analogous with each other.

Different orders of prediction may be defined. Figs. 6 and 7 show some examples thereof. With reference to the first sequence 10 and the second sequence 20, there may be defined a first order (according to which a specific discrete position is predicted from the already predicted audio value at the position which immediately precedes, and is adjacent to, the currently predicted discrete position). According to a second order, a specific discrete position is predicted from both:

1) a first already predicted audio value at the position which immediately precedes, and is adjacent to, the currently predicted discrete position;
2) a second already predicted audio value at the position which immediately precedes, and is adjacent to, the discrete position of the first already predicted audio value.

An example is provided in Fig. 6. In section a) of Fig. 6 the first order for the first sequence 10 and the second sequence 20 is illustrated:

1) The first sequence 10 moves along the reference meridian with azimuth index ai=0 and elevation index moving from pole to pole:
a. The audio value to be predicted at the discrete position 601 (having elevation index ei) is obtained from only:
i. the already predicted audio value at the adjacent position 602 having elevation index ei-1.
b. The prediction may be an identity prediction, i.e. pred_v[ei][0] = cover[ei - 1][0] (where "cover" refers to the value of the audio signal 101 or 102 before prediction).
2) The second sequence 20 moves along the equator, with the azimuth index moving from the starting point 20a (ei=0, ai=0):
a. The audio value to be predicted at the discrete position 701 (having elevation index ei=0 and azimuth index ai) is obtained from only:
i. the already predicted audio value at the adjacent position 702 having azimuth index ai-1.

Let us now examine the first and second sequences 10 and 20 according to the second order, illustrated in section b) of Fig. 6:

1) The first sequence 10 moves along the reference meridian with azimuth index ai=0 and elevation index ei moving from pole to pole:
a. The audio value to be predicted at the discrete position 601 (having elevation index ei and azimuth index ai=0) is predicted from both:
i. the already predicted audio value at the first position 602 (having elevation index ei-1 and azimuth index ai=0), adjacent to the position 601 currently processed; and
ii. the already predicted audio value at the second position 605 (having elevation index ei-2 and azimuth index ai=0), adjacent to the first position 602.
b. The prediction may be so that the predicted value pred_v is obtained as pred_v[ei][0] = 2 * cover[ei - 1][0] - cover[ei - 2][0].
2) The second sequence 20 moves along the equator, with azimuth index ai moving from the starting point 20a (ei=0, ai=0) and elevation index ei=0:
a. The audio value to be predicted at the discrete position 701 (having elevation index ei=0 and azimuth index ai) is predicted from both:
i. the already predicted audio value at the first position 702 (having elevation index ei=0 and azimuth index ai-1), adjacent to the position 701 currently processed; and
ii. the already predicted audio value at the second position 705 (having elevation index ei=0 and azimuth index ai-2), adjacent to the first position 702.
b. The prediction may be so that the predicted value pred_v is obtained as pred_v[0][ai] = 2 * cover[0][ai - 1] - cover[0][ai - 2].

Let us now examine the third and fourth sequences 30 and 40 in Fig. 7 (reference is made to the third sequence, and in particular to the second subsequence 32, performed after the first subsequence 31).

For example, at least one of the following pre-defined orders may be defined (the symbols and reference numerals are generic, only for the sake of understanding):

1) a first order (order 1, shown in section a) of Fig. 7), according to which the audio value in the position 501 (elevation ei, azimuth ai) is predicted from:
a. the previously predicted audio value in the immediately adjacent discrete position 502 (ei, ai-1) in the same subsequence 32; and
b. the interpolated audio value in the adjacent position 503 in the interpolated version 31' of the previously predicted parallel line 31;
c. e.g. according to an identity-like prediction from the immediately preceding values;
2) a second order (order 2, shown in section b) of Fig. 7), using the two immediately previous azimuths in the same parallel line, according to which the audio value to be predicted in the position 501 (in the subsequence 32) is obtained from:
a. the predicted audio value in the adjacent discrete position 502 in the same subsequence 32;
b. the already predicted audio value in the position 505 adjacent to the position 502 in the same subsequence;
c. e.g. according to the formula pred_v = 2 * cover[ei][ai - 1] - cover[ei][ai - 2];
3) a third order (order 3, shown in section c) of Fig. 7), using both the immediately previous azimuth value in the same parallel line and the interpolated values of the immediately previous parallel line, according to which the audio value to be predicted in the position 501 is obtained from:
a. the previously predicted audio value in the adjacent discrete position 502 in the same subsequence 32;
b. the interpolated audio value in the adjacent position 503 in the interpolated version 31' of the previously predicted parallel line 31;
c. one second interpolated audio value in the position 506 adjacent to the position 503 of the first interpolated audio value and adjacent to the audio value in the adjacent discrete position 502 in the same subsequence 32 of the value 501 to be predicted;
d. e.g.
according to the formula pred_v = v_{ei,ai-1} + ṽ_{ei,ai} - ṽ_{ei,ai-1}, where v_{ei,ai-1} is the predicted value at position 502, ṽ_{ei,ai} is the predicted interpolated value at 503, and ṽ_{ei,ai-1} is the predicted interpolated value at 506 (in the pseudocode below: pred_h = cover[ei][ai - 1] + (cResample[ai] - cResample[ai - 1]));
4) a fourth order (order 4, shown in section d) of Fig. 7) (using the immediately previous parallel line and the two immediately previous azimuth values (ai-1 and ai-2)), according to which the audio value to be predicted in the position 501 (in the subsequence 32) is obtained from:
a. the previously predicted audio value in the adjacent position 502 in the same subsequence 32;
b. a second previously predicted audio value in the adjacent position 505, adjacent to the position 502 in the same subsequence 32;
c. a first interpolated audio value in the adjacent position 503 in the interpolated version 31' of the previously predicted parallel line 31;
d. a second interpolated audio value in the position 506, adjacent to the position 503 of the first interpolated audio value and also adjacent to the position 502 in the same subsequence;
e. e.g. according to a formula combining v_{ei,ai-1} (the predicted value at position 502), v_{ei,ai-2} (the predicted value at position 505), ṽ_{ei,ai} (the predicted interpolated value at 503), and ṽ_{ei,ai-1} (the predicted interpolated value at 506).
Even if reference has been made to subsequence 32, this is general for the third sequence 30 and the fourth sequence 40.
The type of ordering may be signalled in the bitstream 104. The decoder will adopt the same prediction order signalled in the bitstream.
The prediction orders discussed above may be selectively chosen (e.g., by block 109a and/or at step 509) for each prediction sequence (e.g. one selection for the initial prediction sequences 10 and 20, and one selection for the subsequent prediction sequences 30 and 40). For example, it may be signalled that the first and second initial sequences 10 and 20 are to be performed with order 1 or with order 2, and it may be signalled that the third and fourth sequences 30 and 40 are to be performed with an order selected among 1, 2, 3, and 4. The decoder will read the signalling and will perform the prediction according to the selected order(s).
It is noted that orders 1 and 2 (Fig. 7, sections a) and b)) do not require the prediction to also be based on the preceding parallel line. The prediction order 5 may be the one illustrated in Figs. 1a-1c and 2a.
Basically, the encoder may select (e.g., at block 109a and/or at step 509), e.g. based on simulations, to perform the at least one subsequent prediction sequence (30, 40) by moving along the parallel line adjacent to a previously predicted parallel line, such that audio values along the parallel line being processed are predicted based only on audio values of the adjacent discrete positions in the same subsequence (31, 32, 33). The decoder will follow the encoder's selection based on the signalling in the bitstream 104, and will perform the prediction as requested, e.g. according to the selected order.
It is noted that, after the prediction carried out by the predictor block 210, the predicted values 212 may be added (at adder 220) to the prediction residual values 222, so as to obtain the signal 202.
With reference to the decoder 200 or 200a, a prediction section 210' may be considered to include the predictor 210 and an adder 220, so as to add the residual value (or the integrated signal 105a' generated by the integrator 205a) to the predicted value 212. The obtained value may then be postprocessed.
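The four per-position predictor variants of Fig. 7 can be summarized in the following C-style sketch. This is a hypothetical helper for illustration only: the function name predict_order, the array interp (the interpolated version of the previous parallel line, flattened to the azimuth count of the current line), and, for order 4, the equal-weight blend of the two exact plane predictors are assumptions, not part of the bitstream specification. Order 5, the inter-frequency prediction, simply adds the previous Cover and is omitted here.

/* Sketch of the four intra-Cover prediction orders (hypothetical helper).
 * cover : already reconstructed values on the current parallel line, cover[ai]
 * interp: circularly interpolated values of the previous parallel line,
 *         resampled to the azimuth count of the current line, interp[ai]
 * ai    : azimuth index of the value to be predicted (ai >= 2 assumed here)
 */
static double predict_order(int order, const double *cover,
                            const double *interp, int ai)
{
    switch (order) {
    case 1: /* identity: previous azimuth only (position 502) */
        return cover[ai - 1];
    case 2: /* linear order 2: two previous azimuths (502 and 505) */
        return 2.0 * cover[ai - 1] - cover[ai - 2];
    case 3: /* previous azimuth plus previous parallel line (502, 503, 506) */
        return cover[ai - 1] + (interp[ai] - interp[ai - 1]);
    case 4: /* all four regressors (502, 505, 503, 506); the equal-weight
               blend below is only one possible choice of fixed coefficients */
        return 0.5 * (2.0 * cover[ai - 1] - cover[ai - 2])
             + 0.5 * (cover[ai - 1] + (interp[ai] - interp[ai - 1]));
    default:
        return cover[ai - 1];
    }
}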
With reference to the above, it is noted that the first sequence 10 may start (e.g. at the south pole) with a value obtained from the bitstream (e.g. the value at the south pole). In the encoder and/or in the decoder, this value may be non-residual.

Residual Generator and Bitstream Writer at the encoder
With reference to Figs. 1d-1f, a subtraction may be performed by the prediction residual generator 120 by subtracting, from the signal 102, the predicted values 112, to generate the prediction residual values 122.
With reference to Figs. 1a-1c, a subtraction may be performed by the prediction residual generator 120 by subtracting, from the signal 105a', the predicted values 112, to generate the prediction residual values 122.
A bitstream writer may write the prediction residual values 122 onto the bitstream 104. The bitstream writer may, in some cases, encode the bitstream 104 by using a single-stage encoding. In examples, more frequent predicted audio values (e.g. 112), or processed versions thereof (e.g. 122), are associated with shorter codes than the less frequent predicted audio values, or processed versions thereof. In some cases, it is possible to perform a two-stage encoding.

Bitstream Reader at the decoder
The reading performed by the bitstream reader 230 substantially follows the rules described for encoding the bitstream 104, which are therefore not repeated in detail. The bitstream reader 230 may, in some cases, read the bitstream 104 using a single-stage decoding. In examples, more frequent predicted audio values (e.g. 112), or processed versions thereof (e.g. 122), are associated with shorter codes than the less frequent predicted audio values, or processed versions thereof. In some cases, it is possible to perform a two-stage decoding.

Postprocessing and rendering at the decoder
Some postprocessing may be performed on the audio signal 201 or 202 to obtain a processed version of the audio signal to be rendered. A postprocessor 205 may be used. For example, the audio signal may be recomposed by recombining the frequency bands. Further, the audio values may be reconverted from the logarithmic scale (such as the decibel domain) to a linear domain. The audio values along the different positions of the unit sphere 1 (which may be defined as differential values) may be recomposed, e.g. by adding the value of the immediately preceding adjacent discrete position (apart from a first value, e.g. at the south pole, which may not be differential). A predefined ordering is defined, which is the same as the one taken by the preprocessor 105 of the encoder 100 (the ordering may be the same as the one taken for predicting, e.g., at first the first sequence 10, then the second sequence 20, then the third sequence 30, and finally the fourth sequence 40).

Example of decoding
It is described here in concrete terms how to carry out the present examples, in particular from the point of view of the decoder 200. Directivity is used to auralize the directivity property of Audio Elements. To do this, the Directivity tool comprises two components: the coding of the Directivity data, and the rendering of the Directivity data. The Directivity is represented as a number of Covers, where each Cover is arithmetically coded. The rendering of the Directivity is done by checking which RIs (render items) use Directivity, taking the filter gain coefficients from the Directivity, and applying an EQ to the metadata of the RI. Here below, when "points" are referred to, these are the "discrete positions" defined above.
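As a minimal sketch of the postprocessing described above, the following illustrates the differential recomposition along the predefined ordering of positions followed by the dB-to-linear conversion. The function and variable names are illustrative only, and a 20·log10 magnitude convention is assumed here (the text above does not mandate one).

#include <math.h>

/* Hypothetical postprocessing sketch (names illustrative): recompose the
 * differentially coded values along the predefined ordering of positions,
 * then convert from the dB domain back to linear gains, assuming a
 * 20*log10 magnitude convention. */
static void postprocess(double *values, int numPoints, double *linearOut)
{
    for (int i = 1; i < numPoints; i++) {
        /* each value is differential w.r.t. the preceding position,
         * apart from the first value (e.g. at the south pole) */
        values[i] += values[i - 1];
    }
    for (int i = 0; i < numPoints; i++) {
        linearOut[i] = pow(10.0, values[i] / 20.0); /* dB -> linear */
    }
}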
Data elements and variables:
covers: This array holds all decoded directivity Covers.
dbStepIdx: This is the index of the decibel quantization range.
dbStep: This number is the decibel step that the values have been quantized to.
intPer90: This integer is the number of azimuth points per 90 degrees (one quadrant) around the equator of the Cover.
elCnt: This integer is the number of elevation points on the Cover.
aziCntPerEl: Each element in this array represents the number of azimuth points per elevation point.
coverWidth: This number is the maximum number of azimuth points, reached around the equator.
minPosVal: This number is the minimum possible decibel value that could be coded.
maxPosVal: This number is the maximum possible decibel value that could be coded.
minVal: This number is the lowest decibel value that is actually present in the coded data.
maxVal: This number is the highest decibel value that is actually present in the coded data.
valAlphabetSize: This is the number of symbols in the alphabet for decoding.
predictionOrder: This number represents the prediction order for this Cover. This influences how the Cover is reconstructed using the previous residual data, if present.
cover: This 2d matrix represents the Cover for a given frequency band. The first index is the elevation, and the second index is the azimuth. The value is the dequantized decibel value for that azimuth and elevation. Note that the number of azimuth points varies per elevation.
coverResiduals: This 2d matrix represents the residual compression data for the Cover. It mirrors the data structure of cover, however the value is the residual data instead of the decibel value itself.
freq: This is the final dequantized frequency value in Hertz.
freqIdx: This is the index of the frequency that needs to be dequantized to retrieve the original value.
freq1oIdxMin: This is the minimum possible index in the octave quantization mode.
freq1oIdxMax: This is the maximum possible index in the octave quantization mode.
freq3oIdxMin: This is the minimum possible index in the third octave quantization mode.
freq3oIdxMax: This is the maximum possible index in the third octave quantization mode.
freq6oIdxMin: This is the minimum possible index in the sixth octave quantization mode.
freq6oIdxMax: This is the maximum possible index in the sixth octave quantization mode.

Definitions:
Sphere Grid: A quasi-uniform grid of points upon the surface of a unit sphere.
v_{ei,ai}: The value of the current Cover, where ei is the elevation index and ai is the azimuth index.
v̂_{ei,ai}: The current Cover's fixed linear predictor, where ei is the elevation index and ai is the azimuth index.
ṽ_{ei,ai}: The current Cover that has been circularly interpolated, where ei is the elevation index and ai is the azimuth index.
n_{ei}: The number of azimuth points in the Sphere Grid per elevation, where ei is the elevation index.

Decoding process
Once the directivity payload is received by the renderer, before the Directivity Stage initialization, the decoding process begins. Each Cover has an associated frequency; direcFreqQuantType indicates how the frequency is decoded, i.e. it determines the width of the frequency band, which is done in readQuantFreq(). The variable dbStep determines the quantized step sizes for the gain coefficients; its value lies within a range between 0.5 and 3.0 with increments of 0.5.
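As a brief illustration of the dB dequantization implied by dbStep (the exact rescaling is specified by the syntax tables below; the offset-by-minVal form used here is an assumption made purely for the sake of the example):

/* Hypothetical rescaling of a decoded decibel index to an actual dB value,
 * assuming indices count dbStep-sized steps upward from minVal. */
static double dequantize_db(int valIdx, double minVal, double dbStep)
{
    return minVal + (double)valIdx * dbStep; /* e.g. idx 3, minVal -12.0, step 0.5 -> -10.5 dB */
}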
intPer90 is the number of azimuth points around a quadrant of the equator and is the key variable used for the Sphere Grid generation, while elCnt is the number of elevation points on the Cover. direcUseRawBasline determines which of two decoding modes is chosen for the gain coefficients. The available decoding modes are the "Baseline Mode" and the "Optimized Mode". The baseline mode simply codes each decibel index arithmetically using a uniform probability distribution, whereas the optimized mode uses residual compression in conjunction with an adaptive probability estimator alongside five different prediction orders. Finally, after the completion of decoding, the directivities are passed to the Scene State, where other Scene Objects can refer to them.

Sphere grid generation
The Sphere Grid determines the spatial resolution of a Cover, which could be different across Covers. The Sphere Grid of the Cover has a number of different points. Across the equator, there are at least 4 points, possibly more depending on the intPer90 value. At the north and south poles, there is exactly one point. At other elevations, the number of points is equal to or less than the number of points across the equator, decreasing as the elevation approaches the poles. Upon each elevation layer, the first azimuth point is always at 0°, creating a line of evenly spaced points from the south pole, to the equator, and, finally, to the north pole. This property is not guaranteed for the rest of the azimuth points across different elevations. The following is a description in pseudocode format:

generateSphereGrid(intPer90)
{
    piOver180 = acos(-1) / 180;   // 1 degree, in radians
    degStep = 90 / intPer90;      // intPer90 is the number of azimuth points
                                  // around a quadrant of the equator
    elCnt = 2 * intPer90 + 1;     // (integer) number of elevation points on the Cover
    aziCntPerEl[elCnt] = { 0 };
    coverWidth = 4 * intPer90;    // maximum number of azimuth points (at the equator)

    for (ei = 0; ei < elCnt; ei++) {
        elAng = (ei - intPer90) * degStep;
        elLen = cos(elAng * piOver180);
        aziCntPerEl[ei] = max(round(elLen * 4 * intPer90), 1);
    }

    return elCnt, aziCntPerEl, coverWidth
}

Baseline mode
The baseline mode uses a range decoder with a uniform probability distribution to decode quantized decibel values. The minimum and maximum possible values (i.e., minPosVal, maxPosVal) that can be stored are -128.0 and 127.0, respectively. The alphabet size can be found using dbStep and the actual minimum and maximum values present (minVal, maxVal). After decoding the decibel index, a simple rescaling is done to find the actual dB value. This can be seen in the rawCover() syntax table.

Optimized mode
The optimized mode decoding uses a sequential prediction scheme, which traverses the Cover in a special order. This scheme is determined by predictionOrder, whose value can be an integer between 1 and 5 inclusive. predictionOrder dictates which linear prediction order (1 or 2) to use. When predictionOrder == 1 || predictionOrder == 3, the linear prediction order is 1, and when predictionOrder == 2 || predictionOrder == 4, the linear prediction order is 2. The traversal is composed of four different sequences:
The first sequence goes vertically, from the value at the South Pole to the North Pole, all with azimuth 0. The first value of the sequence (coverResiduals[0][0]), at the South Pole, is not predicted. This value serves as the basis from which the rest of the values are predicted. This prediction uses either a linear prediction of order 1 or 2.
A prediction order of 1 uses the previous elevation value, whereas a prediction order of 2 uses the two previous elevation values as the basis for prediction.
The second sequence goes horizontally, at the equator, from the value next to the one at azimuth 0 degrees (which was already predicted during the first sequence), until the value previous to it at an azimuth close to 360 degrees. The values are predicted from previous values, also using linear prediction of order 1 or 2. Similarly to sequence one, a prediction order of 1 uses the previous azimuth value, whereas a prediction order of 2 uses the previous two azimuth values as the basis for prediction.
The third sequence goes horizontally, in order for each elevation, starting from the one next to the equator towards the North Pole until the one previous to the North Pole. Each horizontal subsequence starts from the value next to the one at azimuth 0 degrees (which was already predicted during the first sequence), until the value previous to it at an azimuth close to 360 degrees. When (predictionOrder == 1 || predictionOrder == 2 || predictionOrder == 3 || predictionOrder == 4) the values are predicted from previous values using either linear prediction of order 1 or 2, as explained above. Furthermore, when (predictionOrder == 3 || predictionOrder == 4), in addition to the previous values on the current Cover, the values from the previously predicted elevation are also used. Since the number of points upon the Sphere Grid n_{ei-1} at the previously predicted elevation e_{i-1} is different from the number of points n_{ei} at the currently predicted elevation e_i, the azimuth points do not match across the elevations in the Sphere Grid. Therefore, the points v_{ei-1,ai} at the previously predicted elevation e_{i-1} are circularly interpolated to produce n_{ei} new points, where ai is the azimuth index and v is the 2d matrix representing the Cover. For example, if the number of points at the current elevation is 24, and the number of points at the previous elevation is 27, they are circularly interpolated to produce 24 new points. Interpolation is linear to preserve monotonicity. For a given point value to be predicted v_{ei,ai}, the previous point value horizontally v_{ei,ai-1} and the corresponding previous point value ṽ_{ei,ai-1} and current point value ṽ_{ei,ai} on the circularly interpolated new points (which are derived from the previous elevation level) are used as regressors to create a predictor with 3 linear prediction coefficients. A fixed linear predictor is used, i.e.

v̂_{ei,ai} = v_{ei,ai-1} + ṽ_{ei,ai} - ṽ_{ei,ai-1},

which predicts perfect 2D linear slopes in the dB domain.
The fourth sequence also goes horizontally, in order for each elevation, exactly like the third sequence, however starting from the one next to the equator towards the South Pole until the one previous to the South Pole.
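The pseudocode below calls circularResample() without defining it. The following is a minimal sketch of a linear circular resampler consistent with the description above; the flat indexing of the input via start mirrors the call sites in the pseudocode, but the function body itself is an assumption, not quoted from the specification.

#include <math.h>

/* Sketch of circular linear resampling: maps `count` points taken from
 * input[start .. start+count-1] (one full parallel line, azimuths equally
 * spaced over 360 degrees, first point at azimuth 0) onto `newCount`
 * equally spaced azimuths. Linear interpolation between the two bracketing
 * source points preserves monotonicity, as required above. */
static void circularResample(const double *input, int start, int count,
                             int newCount, double *output)
{
    for (int ai = 0; ai < newCount; ai++) {
        double srcPos = (double)ai * (double)count / (double)newCount;
        int i0 = (int)floor(srcPos);
        int i1 = (i0 + 1) % count;    /* wrap around at 360 degrees */
        double frac = srcPos - (double)i0;
        output[ai] = (1.0 - frac) * input[start + i0]
                   + frac         * input[start + i1];
    }
}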
The following pseudocode describes the aforementioned algorithm:

unpredict(predOrder, coverRes, prevCover)
{
    if (predOrder == 5) {
        // inter-frequency prediction: the Cover is coded as
        // differences relative to the previous Cover
        for (ei = 0; ei < elCnt; ei++) {
            for (ai = 0; ai < aziCntPerEl[ei]; ai++) {
                cover[ei][ai] = coverRes[ei][ai] + prevCover[ei][ai];
            }
        }
        return;
    }

    // copy the original value at the South pole,
    // coverRes[0][0], which is not predicted
    cover[0][0] = coverRes[0][0];

    // predict vertically, from the one after the
    // South pole to the North pole, at azimuth 0 (FIRST SEQUENCE)
    for (int ei = 1; ei < elCnt; ++ei) {
        if ((predOrder == 1) || (ei == 1) || (predOrder == 3)) {
            pred_v = cover[ei - 1][0];
        } else if ((predOrder == 2) || (predOrder == 4)) {
            pred_v = 2 * cover[ei - 1][0] - cover[ei - 2][0];
        }
        cover[ei][0] = coverRes[ei][0] + pred_v;

        // for prediction orders 3 and 4, true order 1 or true order 2
        // horizontal prediction is used only at the equator
        if (((predOrder == 3) || (predOrder == 4)) && (ei != intPer90)) {
            continue;
        }

        // predict horizontally, from azimuth 0 to the
        // maximum azimuth (SECOND SEQUENCE)
        for (int ai = 1; ai < aziCntPerEl[ei]; ++ai) {
            if ((predOrder == 1) || (ai == 1) || (predOrder == 3)) {
                pred_h = cover[ei][ai - 1];
            } else if ((predOrder == 2) || (predOrder == 4)) {
                pred_h = 2 * cover[ei][ai - 1] - cover[ei][ai - 2];
            }
            cover[ei][ai] = coverRes[ei][ai] + pred_h;
        }
    }

    if ((predOrder == 3) || (predOrder == 4)) {
        cResample[coverWidth] = { 0 };

        // predict horizontally for each elevation, from the one
        // following the equator to the North pole (THIRD SEQUENCE)
        for (int ei = intPer90 + 1; ei < elCnt - 1; ++ei) {
            input = cover;
            start = (ei - 1) * coverWidth;
            count = aziCntPerEl[ei - 1];
            newCount = aziCntPerEl[ei];
            output = cResample;
            circularResample(input, start, count, newCount, output);
            for (int ai = 1; ai < aziCntPerEl[ei]; ++ai) {
                pred_h = cover[ei][ai - 1] + (cResample[ai] - cResample[ai - 1]);
                cover[ei][ai] = coverRes[ei][ai] + pred_h;
            }
        }

        // predict horizontally for each elevation, from the one
        // following the equator to the South pole (FOURTH SEQUENCE)
        for (int ei = intPer90 - 1; ei >= 1; --ei) {
            input = cover;
            start = (ei + 1) * coverWidth;
            count = aziCntPerEl[ei + 1];
            newCount = aziCntPerEl[ei];
            output = cResample;
            circularResample(input, start, count, newCount, output);
            for (int ai = 1; ai < aziCntPerEl[ei]; ++ai) {
                pred_h = cover[ei][ai - 1] + (cResample[ai] - cResample[ai - 1]);
                cover[ei][ai] = coverRes[ei][ai] + pred_h;
            }
        }
    }
}
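A tiny worked example of the first sequence with prediction order 1 (identity prediction), using hypothetical residual values chosen purely for illustration:

#include <stdio.h>

/* Worked example: cover[ei][0] = coverRes[ei][0] + cover[ei-1][0].
 * With the (hypothetical) values {10, 1, -2} at azimuth 0, where 10 is
 * the non-predicted south-pole value, reconstruction yields {10, 11, 9}. */
int main(void)
{
    double coverRes[3] = { 10.0, 1.0, -2.0 };
    double cover[3];

    cover[0] = coverRes[0];              /* south pole: not predicted */
    for (int ei = 1; ei < 3; ei++) {
        double pred_v = cover[ei - 1];   /* order 1: previous elevation */
        cover[ei] = coverRes[ei] + pred_v;
    }
    printf("%g %g %g\n", cover[0], cover[1], cover[2]); /* prints: 10 11 9 */
    return 0;
}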
Stage description
The stage iterates over all RIs in the update thread, checks whether Directivity can be applied, and, if so, the stage takes the relative position between the Listener and the RI, and queries the Directivity for filter coefficients. Finally, the stage applies these filter gain coefficients to the central EQ metadata field of the RI, to be finally auralized in the EQ stage.

Update thread processing
Directivity is applied to all RIs with a value of true in the data elements objectSourceHasDirectivity and loudspeakerHasDirectivity (and to secondary RIs derived from such RIs in the Early Reflections and Diffraction stages) by using the central EQ metadata field that accumulates all EQ effects before they are applied to the audio signals by the EQ stage. The listener's relative position in polar coordinates to the RI is needed to query the Directivity. This can be done, e.g., using Cartesian to polar coordinate conversion, homogeneous matrix transforms, or quaternions. In the case of secondary RIs, their relative position with respect to their parents must be used to correctly auralize the Directivity. For consistent frequency resolution, the directivity data is linearly interpolated to match the EQ bands of the metadata field, which can differ from the bitstream representation, depending on the bitstream compression configuration. For each frequency band, the directiveness (available from objectSourceDirectiveness or loudspeakerDirectiveness) is applied according to the formula C_eq = exp(d * log m) (equivalently, C_eq = m^d), where d is the directiveness value, m is the interpolated magnitude derived from the Covers adjacent to the requested frequency band, and C_eq is the coefficient used for the EQ.

Audio thread processing
The directivity stage has no additional processing in the audio thread. The application of the filter coefficients is done in the EQ stage.
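A minimal sketch of the per-band directiveness application described above, assuming the magnitudes m have already been interpolated onto the EQ bands; the function name, the eq array, and the multiplicative accumulation into the EQ field are illustrative assumptions.

#include <math.h>

/* Applies directiveness d to the interpolated directivity magnitudes m[]
 * for each EQ band, per the formula C_eq = exp(d * log m) = m^d, and
 * accumulates the coefficients into the RI's central EQ metadata field. */
static void applyDirectivity(const double *m, int numBands, double d,
                             double *eq /* central EQ metadata field */)
{
    for (int band = 0; band < numBands; band++) {
        eq[band] *= pow(m[band], d); /* C_eq = m^d */
    }
}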
Bitstream syntax
In environments that require byte alignment, MPEG-I Immersive Audio configuration elements or payload elements that are not an integer number of bytes in length are padded at the end to achieve an integer byte count. This is indicated by the function ByteAlign().

Renderer payloads syntax (to be inserted in the bitstream 104)

Table 1 — Syntax of payloadDirectivity()
    Table 2 — Syntax of coverSet()
Table 3 — Syntax of directivityCover()
Table 4 — Syntax of readQuantFrequency()
Table 5 — Syntax of rawCover()
Table 6 — Syntax of optimizedCover()

Discussion
The new approach is composed of five main stages. The first stage generates a quasi-uniform covering of the unit sphere, using an encoder-selectable density. The second stage converts the values to the dB scale and quantizes them, using an encoder-selectable precision. The third stage is used to remove possible redundancy between consecutive frequencies, by converting the values to differences relative to the previous frequency; this is useful especially at lower frequencies and when using a relatively coarse sphere covering. The fourth stage is a sequential prediction scheme, which traverses the sphere covering in a special order. The fifth stage is entropy coding of the prediction residuals, using an adaptive estimator of their distribution and optimally coding them using a range encoder.
A first stage of the new approach may be to sample the unit sphere 1 quasi-uniformly using a number of points (discrete positions), using further interpolation over the fine or very fine spherical grid available in the directivity file. The quasi-uniform sphere covering, using an encoder-selectable density, has a number of desirable properties: elevation 0 (the equator) is always present, at every elevation level present there is a sphere point at azimuth 0, and both determining the closest sphere point and performing bilinear interpolation can be done in constant time for a given arbitrary elevation and azimuth. The parameter controlling the density of the sphere covering is the angle between two consecutive points on the equator, the degree step. Because of the constraints implied by the desirable properties, the degree step must be a divisor of 90 degrees. The coarsest sphere covering, with a degree step of 90 degrees, corresponds to a total of 6 sphere points: 2 points at the poles and 4 points on the equator. On the other end, a degree step of 2 degrees corresponds to a total of 10318 sphere points, with 180 points on the equator. This sphere covering is very similar to the one used for the quantization of azimuth and elevation for DirAC direction metadata in IVAS, except that it is less constrained. In comparison, there is no requirement that the number of points at every elevation level other than the equator is a multiple of 4, which was chosen in DirAC in order to ensure that there are always sphere points at azimuths of 90, 180, and 270 degrees. In Figs. 1a-1f this first stage is not shown, but it provides the audio signal 101.
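As a quick worked check of the coarsest covering, the following reuses the generateSphereGrid() logic above (the main() wrapper is illustrative only):

#include <math.h>
#include <stdio.h>

/* Worked example: point counts of the coarsest covering (degree step of
 * 90 degrees, i.e. intPer90 = 1), following generateSphereGrid() above. */
int main(void)
{
    int intPer90 = 1;
    int elCnt = 2 * intPer90 + 1;   /* 3 elevations: -90, 0, +90 degrees */
    int total = 0;
    for (int ei = 0; ei < elCnt; ei++) {
        double elAng = (ei - intPer90) * (90.0 / intPer90);
        int n = (int)fmax(round(cos(elAng * acos(-1) / 180.0) * 4 * intPer90), 1);
        total += n;                 /* per-elevation counts: 1, 4, 1 */
    }
    printf("total points = %d\n", total); /* prints 6, as stated above */
    return 0;
}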
A second stage may convert the linear-domain values, which are positive but not limited to a maximum value of 1, into the dB domain. Depending on the normalization convention chosen for the directivity (i.e., an average value of 1 on the sphere, a value of 1 on the equator at azimuth 0, etc.), values can be larger than 1. The quantization is done linearly in the dB domain using an encoder-selectable precision, typically using a quantization step size from very fine at 0.25 dB to very coarse at 6 dB. In Figs. 1a-1f this second stage can be performed by the preprocessor 105 of the encoder 100, and its reverse function is performed by the postprocessor 205 of the decoder 200.
A third stage (differentiation) may be used to remove possible redundancy between consecutive frequencies. This is done by converting the values on the sphere covering for the current frequency to differences relative to the values on the sphere covering of the previous frequency. This approach is especially advantageous at lower frequencies, where the variations across frequency for a given elevation and azimuth tend to be smaller than at high frequencies. Additionally, when using quite coarse sphere coverings, e.g., with a degree step of 22.5 degrees or more, there is less correlation available between neighboring consecutive sphere points, when compared to the correlation across consecutive frequencies. In Figs. 1a-1f this third stage can be performed by the preprocessor 105 of the encoder 100, and its reverse function is performed by the postprocessor 205 of the decoder 200.
A fourth stage is a sequential prediction scheme, which traverses the sphere covering for one frequency in a special order. This order was chosen to increase the predictability of the values, based on the neighborhood of previously predicted values. It is composed of 4 different sequences 10, 20, 30, 40. The first sequence 10 goes vertically, e.g. from the value at the South Pole to the North Pole, all with azimuth 0°. The first value of the sequence, at the South Pole 2, is not predicted, and the rest are predicted from the previous values using linear prediction of order 1 or 2. The second sequence 20 goes horizontally, at the equator, from the value next to the one at azimuth 0 degrees (which was already predicted during the first sequence), until the value previous to it at an azimuth close to 360 degrees. The values are predicted from previous values, also using linear prediction of order 1 or 2. One option is to use fixed linear prediction coefficients, with the encoder selecting the best prediction order, the one producing the smallest entropy of the prediction error (prediction residual).
The third sequence 30 goes horizontally, in order for each elevation, starting from the one next to the equator towards the North Pole until the one previous to the North Pole. Each horizontal subsequence starts from the value next to the one at azimuth 0 degrees (which was already predicted during the first sequence), until the value previous to it at an azimuth close to 360 degrees. The values are predicted from previous values using either linear prediction of order 1 or 2, or a special prediction mode also using the values available at the previously predicted elevation. Because the number of points n_{ei-1} at the previously predicted elevation e_{i-1} is different from the number of points n_{ei} at the currently predicted elevation e_i, their azimuths do not match. Therefore, the points v_{ei-1,ai} at the previously predicted elevation e_{i-1} are circularly interpolated to produce n_{ei} new points. For example, if the number of points at the current elevation is 24, and the number of points at the previous elevation is 27, they are circularly interpolated to produce 24 new points. Interpolation is usually linear to preserve monotonicity. For a given point value to be predicted v_{ei,ai}, the previous point value horizontally v_{ei,ai-1} and the corresponding previous point value ṽ_{ei,ai-1} and current point value ṽ_{ei,ai} on the circularly interpolated new points (which are derived from the previous elevation level) are used as regressors to create a predictor with 3 linear prediction coefficients. One option is to use a fixed linear predictor, like

v̂_{ei,ai} = v_{ei,ai-1} + ṽ_{ei,ai} - ṽ_{ei,ai-1},

which would predict perfectly 2D linear slopes in the dB domain.
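That this fixed predictor is exact on 2D linear slopes can be verified directly. Writing v(e, a) = α·e + β·a + γ for an arbitrary planar dB field (a hypothetical field introduced only for this check), and assuming the circular interpolation reproduces the field exactly at the interpolated azimuths:

v̂_{ei,ai} = v_{ei,ai-1} + ṽ_{ei,ai} - ṽ_{ei,ai-1}
          = [α·ei + β·(ai-1) + γ] + [α·(ei-1) + β·ai + γ] - [α·(ei-1) + β·(ai-1) + γ]
          = α·ei + β·ai + γ = v_{ei,ai},

so the prediction residual is zero for any 2D linear slope.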
The fourth sequence 40 also goes horizontally, in order for each elevation, exactly like the third sequence 30, however starting from the one next to the equator towards the South Pole 2 until the one previous to the South Pole 2. For the third and fourth sequences 30 and 40, the encoder 100 may select the best prediction mode among order 1 prediction, order 2 prediction, and the special prediction, the one producing the smallest entropy of the prediction error (prediction residual). In Figs. 1a-1f this fourth stage can be performed by the predictor block 110 of the encoder 100, and its reverse function is performed by the predictor block 210 of the decoder 200.
The fifth stage is entropy coding of the prediction residuals, using an adaptive probability estimator of their distribution and optimally coding them using a range encoder. For a small to medium degree step, i.e., 5 degrees to 15 degrees, the prediction errors (prediction residuals) for typical directivities usually have a very small alphabet range, like {-4, …, 4}. This very small alphabet size allows using an adaptive probability estimator directly, to optimally match the arbitrary probability distribution of the prediction error (prediction residual). For a large to very large degree step, i.e., 18 to 30 degrees, the alphabet size becomes larger, and equal bins of an odd integer size centered on zero can optionally be used to match the overall shape of the probability distribution of the prediction error, while keeping the effective alphabet size small. A value is then coded in two stages: first the bin index is coded using an adaptive probability estimator, and then the position inside the bin is coded using a uniform probability distribution. The encoder can select the optimal bin size, the one providing the smallest total entropy. For example, a bin size of 3 would group values -4, -3, -2 in one bin, values -1, 0, 1 in another bin, and so on. In Figs. 1a-1c this fifth stage can be performed by the bitstream writer of the encoder 100, and its reverse function can be performed by the bitstream reader 230 of the decoder 200.

Further embodiments
It is to be mentioned here that all alternatives or aspects as discussed before and all aspects as defined by independent claims in the following claims can be used individually, i.e., without any other alternative or object than the contemplated alternative, object or independent claim. However, in other embodiments, two or more of the alternatives or the aspects or the independent claims can be combined with each other and, in other embodiments, all aspects, or alternatives and all independent claims can be combined with each other.
An inventively encoded signal can be stored on a digital storage medium or a non-transitory storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.
Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software.
The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.
Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier or a non-transitory storage medium.
In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.
A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.
The above described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the appended patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.