

Title:
SYNTAX AND SEMANTICS FOR WEIGHT UPDATE COMPRESSION OF NEURAL NETWORKS
Document Type and Number:
WIPO Patent Application WO/2022/224069
Kind Code:
A1
Abstract:
An example apparatus, method, and computer program product are provided. The apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units, and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

Inventors:
REZAZADEGAN TAVAKOLI HAMED (FI)
CRICRÌ FRANCESCO (FI)
AKSU EMRE BARIS (FI)
HANNUKSELA MISKA MATIAS (FI)
Application Number:
PCT/IB2022/053294
Publication Date:
October 27, 2022
Filing Date:
April 07, 2022
Assignee:
NOKIA TECHNOLOGIES OY (FI)
International Classes:
G06N3/04; H03M7/30
Domestic Patent References:
WO2020014590A12020-01-16
Foreign References:
EP3716158A22020-09-30
Other References:
EMRE AKSU ET AL: "[NNR] Extensions to the high-level bitstream syntax for compressed neural network representations", no. m53719, 15 April 2020 (2020-04-15), XP030287434, Retrieved from the Internet [retrieved on 20200415]
HOMAYUN AFRABANDPAY ET AL: "[NNR] Response to MPEG call for proposal on incremental weight update compression", no. m56604, 18 April 2021 (2021-04-18), XP030295096, Retrieved from the Internet [retrieved on 20210418]
WIEDEMANN SIMON ET AL: "DeepCABAC: A Universal Compression Algorithm for Deep Neural Networks", IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, IEEE, US, vol. 14, no. 4, 24 January 2020 (2020-01-24), pages 700 - 714, XP011805149, ISSN: 1932-4553, [retrieved on 20200810], DOI: 10.1109/JSTSP.2020.2969554
Claims:
CLAIMS

What is claimed is:

1. An apparatus comprising: at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

2. The apparatus of claim 1, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal a federated averaging weight update algorithm; a mechanism to signal down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define an extension to one or more data payload types; a mechanism to define an extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

3. The apparatus of claim 2, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

4. The apparatus of claim 3, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.

5. The apparatus of claim 2, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

6. The apparatus of claim 2, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

7. The apparatus of claim 6, wherein the one or more information units comprise a global random seed used for encoding and decoding computation, when the dithering flag is set.

8. The apparatus of claim 2, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information units.

9. The apparatus of claim 2, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

10. The apparatus of claim 2, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

11. The apparatus of claim 2, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

12. The apparatus of claim 2, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.

13. The apparatus of claim 2, wherein the mechanism to signal down-stream compression support comprises a downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

14. The apparatus of claim 2, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

15. The apparatus of claim 2, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

16. The apparatus of claim 2, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

17. The apparatus of claim 2, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to the compressed payload data types.

18. The apparatus of claim 2, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and an encoded bitstream of a predetermined algorithm.

19. The apparatus of claim 2, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

20. The apparatus of claim 19, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.

21. The apparatus of claim 2, wherein the mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a Golomb encoding/decoding mechanism.

22. The apparatus of claim 2, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

23. The apparatus of claim 2, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

24. The apparatus of claim 2, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

25. The apparatus of claim 2, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein, when the reorganize flag signals a reorganization, the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping indicating how an existing weight is mapped to an updated topology element; or a topology compressed indication to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.

26. A method comprising: encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

27. The method of claim 26, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal a federated averaging weight update algorithm; a mechanism to signal down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define an extension to one or more data payload types; a mechanism to define an extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

28. The method of claim 27, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

29. The method of claim 28, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.

30. The method of claim 27, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

31. The method of claim 27, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

32. The method of claim 31, wherein the one or more information units comprise a global random seed used for encoding and decoding computation, when the dithering flag is set.

33. The method of claim 27, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information units.

34. The method of claim 27, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

35. The method of claim 27, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

36. The method of claim 27, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

37. The method of claim 27, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.

38. The method of claim 27, wherein the mechanism to signal down-stream compression support comprises a downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

39. The method of claim 27, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

40. The method of claim 27, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

41. The method of claim 27, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

42. The method of claim 27, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to the compressed payload data types.

43. The method of claim 27, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and an encoded bitstream of a predetermined algorithm.

44. The method of claim 27, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

45. The method of claim 44, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.

46. The method of claim 27, wherein the mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a Golomb encoding/decoding mechanism.

47. The method of claim 27, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

48. The method of claim 27, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

49. The method of claim 27, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

50. The method of claim 27, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein, when the reorganize flag signals a reorganization, the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping indicating how an existing weight is mapped to an updated topology element; or a topology compressed indication to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.

51. A computer readable medium comprising program instructions for causing an apparatus to perform at least the following: encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

52. The computer readable medium of claim 51, wherein the computer readable medium comprises a non-transitory computer readable medium.

53. The computer readable medium of any of claims 51 or 52, wherein the computer readable medium further causes the apparatus to perform the methods as claimed in any of the claims 26 to 50.

54. An apparatus comprising: at least one processor; and at least one non-transitory memory including computer program code, wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: define a validation set performance, wherein the validation set performance comprises or specifies one or more of the following: a performance indication determined based on a validation set; or indication of a performance level achieved by a weight-update associated with the validation set performance.

55. The apparatus of claim 54, wherein the apparatus is further caused to: determine the performance level based on a validation dataset present at a client side.

56. The apparatus of any of claims 54 or 55, wherein the validation set performance provides information on how to use the weight-update received from a device.

57. The apparatus of claim 56, wherein the apparatus is caused to multiply the weight-updates by using multiplier values derived from the validation set performance values received from the device.

58. The apparatus of any of the previous claims, wherein the apparatus is caused to define a weight reference ID, wherein the weight reference ID uniquely identifies weights for a base model.

59. The apparatus of any of the previous claims, wherein the apparatus is further caused to define a source ID, wherein the source ID uniquely identifies a source of information.

60. The apparatus of claim 59, wherein the source ID comprises a flag, wherein the source ID is interpreted as a communication direction or as a string identifier for providing detailed information regarding the source.

61. The apparatus of any of the claims 59 or 60, wherein the device comprises the source.

62. The apparatus of any of the previous claims, wherein the device comprises at least one of a client or a server.

63. The apparatus of any of the previous claims, wherein the apparatus comprises the device.

64. A method comprising: defining a validation set performance, wherein the validation set performance comprises or specifies one or more of the following: a performance indication determined based on a validation set; or indication of a performance level achieved by a weight-update associated with the validation set performance.

65. The method of claim 64 further comprising determining the performance level based on a validation dataset present at a client side.

66. The method of any of claims 64 or 65, wherein the validation set performance provides information on how to use the weight-update received from a device.

67. The method of claim 66 further comprising multiplying the weight-updates by using multiplier values derived from the validation set performance values received from the device.

68. The method of any of the previous claims further comprising defining a weight reference ID, wherein the weight reference ID uniquely identifies weights for a base model.

69. The method of any of the previous claims further comprising defining a source ID, wherein the source ID uniquely identifies a source of information.

70. The method of claim 69, wherein the source ID comprises a flag, wherein the source ID is interpreted as a communication direction or as a string identifier for providing detailed information regarding the source.

71. The method of any of the claims 69 or 70, wherein the device comprises the source.

72. The method of any of the previous claims, wherein the device comprises at least one of a client or a server.

73. A computer readable medium comprising program instructions for causing an apparatus to perform at least the following: defining a validation set performance, wherein the validation set performance comprises or specifies one or more of the following: a performance indication determined based on a validation set; or indication of a performance level achieved by a weight-update associated with the validation set performance.

74. The computer readable medium of claim 73, wherein the computer readable medium comprises a non-transitory computer readable medium.

75. The computer readable medium of any of claims 73 or 74, wherein the computer readable medium further causes the apparatus to perform the methods as claimed in any of the claims 64 to 72.

Description:
SYNTAX AND SEMANTICS FOR WEIGHT UPDATE COMPRESSION OF NEURAL NETWORKS

TECHNICAL FIELD

[0001] The examples and non-limiting embodiments relate generally to multimedia transport and neural networks, and more particularly, to syntax and semantics for incremental weight update compression of neural networks.

BACKGROUND

[0002] It is known to provide standardized formats for exchange of neural networks.

SUMMARY

[0003] An example apparatus includes at least one processor; and at least one non- transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.
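
For illustration only, the following Python sketch models, in a simplified and non-normative way, a bitstream composed of information units whose headers can carry a weight update compression interpretation; the type and field names are hypothetical and do not reproduce the normative NNR syntax.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationUnitHeader:
    unit_type: int                                   # e.g. parameter set, topology, or compressed data
    incremental_weight_update_flag: bool = False     # payload carries a weight update rather than full weights

@dataclass
class InformationUnit:
    header: InformationUnitHeader
    payload: bytes = b""

@dataclass
class NNRBitstream:
    units: List[InformationUnit] = field(default_factory=list)

def is_weight_update_bitstream(bitstream: NNRBitstream) -> bool:
    # A decoder could use this flag to interpret the bitstream as weight update compression.
    return any(unit.header.incremental_weight_update_flag for unit in bitstream.units)

bitstream = NNRBitstream([InformationUnit(InformationUnitHeader(unit_type=3, incremental_weight_update_flag=True))])
print(is_weight_update_bitstream(bitstream))         # True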

[0004] The example apparatus may further include, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal a federated averaging weight update algorithm; a mechanism to signal down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define an extension to one or more data payload types; a mechanism to define an extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

[0005] The example apparatus may further include, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

[0006] The example apparatus may further include, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.
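
As a non-normative illustration of the behavior described above, the following Python sketch switches between interpreting a decoded payload as full weights or as a weight update added to a base model; the dictionary payload format and the function name are assumptions made for this example.

def apply_decoded_payload(incremental_weight_update_flag, decoded_payload, base_weights):
    # decoded_payload: hypothetical dict mapping parameter name -> value (or delta).
    if incremental_weight_update_flag:
        # Weight update compression: the payload carries deltas added to the base model weights.
        updated = dict(base_weights)
        for name, delta in decoded_payload.items():
            updated[name] = updated.get(name, 0.0) + delta
        return updated
    # Weight compression: the payload carries full weights that replace the base model.
    return dict(decoded_payload)

base = {"layer1.weight": 0.50, "layer1.bias": 0.10}
update = {"layer1.weight": -0.02}
print(apply_decoded_payload(True, update, base))     # layer1.weight becomes 0.48, layer1.bias is unchanged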

[0007] The example apparatus may further include, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

[0008] The example apparatus may further include, wherein the at least one information unit comprises at least one NNR unit type.

[0009] The example apparatus may further include, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

[0010] The example apparatus may further include, wherein the one or more information units comprise a global random seed used for encoding and decoding computation, when the dithering flag is set.

[0011] The example apparatus may further include, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information units.
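
For illustration, the following Python sketch shows how a signaled global random seed could allow an encoder and a decoder to generate identical dither sequences for subtractive-dither quantization; the uniform dither and the function names are assumptions, not the normative procedure.

import random

def dither_quantize(values, step, seed):
    rng = random.Random(seed)                        # seed would be signaled in the information unit
    return [round((v + rng.uniform(-0.5, 0.5) * step) / step) for v in values]

def dither_dequantize(indices, step, seed):
    rng = random.Random(seed)                        # same seed -> identical dither sequence
    return [q * step - rng.uniform(-0.5, 0.5) * step for q in indices]

values = [0.12, -0.33, 0.07]
q = dither_quantize(values, step=0.1, seed=1234)
print(q, dither_dequantize(q, step=0.1, seed=1234))  # reconstruction lies within one quantization step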

[0012] The example apparatus may further include, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

[0013] The example apparatus may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

[0014] The example apparatus may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

[0015] The example apparatus may further include, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.
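
As a non-normative illustration of a federated averaging weight update algorithm, the following Python sketch averages per-client weight updates, optionally weighting each client; the data layout and names are hypothetical.

def federated_average(client_updates, client_weights=None):
    # client_updates: list of dicts mapping parameter name -> delta; client_weights: optional per-client weights.
    if client_weights is None:
        client_weights = [1.0] * len(client_updates)
    total = sum(client_weights)
    averaged = {}
    for update, weight in zip(client_updates, client_weights):
        for name, delta in update.items():
            averaged[name] = averaged.get(name, 0.0) + (weight / total) * delta
    return averaged

updates = [{"fc.weight": 0.2}, {"fc.weight": -0.1}]
print(federated_average(updates))                    # approximately {'fc.weight': 0.05}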

[0016] The example apparatus may further include, wherein the mechanism to signal down-stream compression support comprises a downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

[0017] The example apparatus may further include, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

[0018] The example apparatus may further include, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

[0019] The example apparatus may further include, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

[0020] The example apparatus may further include, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to the compressed payload data types.

[0021] The example apparatus may further include, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and an encoded bitstream of a predetermined algorithm.

[0022] The example apparatus may further include, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

[0023] The example apparatus may further include, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.
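
For illustration, the following Python sketch shows one possible sign-SGD style quantization that reduces a weight update to a bitmask of changed elements plus one sign bit per changed element; the payload layout is an assumption and not the normative sign sgd quantization payload.

def sign_sgd_quantize(weight_update, threshold=0.0):
    # The bitmask marks which elements changed; signs carry one bit per changed element.
    changed_mask = [1 if abs(delta) > threshold else 0 for delta in weight_update]
    signs = [1 if delta > 0 else 0 for delta, changed in zip(weight_update, changed_mask) if changed]
    return changed_mask, signs

def sign_sgd_dequantize(changed_mask, signs, step):
    # Reconstruct +step or -step for changed elements and 0.0 elsewhere.
    out, sign_iter = [], iter(signs)
    for changed in changed_mask:
        out.append((step if next(sign_iter) else -step) if changed else 0.0)
    return out

mask, signs = sign_sgd_quantize([0.3, 0.0, -0.2])
print(mask, signs)                                   # [1, 0, 1] [1, 0]
print(sign_sgd_dequantize(mask, signs, step=0.1))    # [0.1, 0.0, -0.1]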

[0024] The example apparatus may further include, wherein the mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a Golomb encoding/decoding mechanism.
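
As a non-normative illustration of one such bitmask coding procedure, the following Python sketch run-length encodes and decodes a bitmask; a Golomb or position/length coding could be substituted without changing the overall structure.

def rle_encode(bitmask):
    runs = []
    current, count = bitmask[0], 0
    for bit in bitmask:
        if bit == current:
            count += 1
        else:
            runs.append(count)
            current, count = bit, 1
    runs.append(count)
    return bitmask[0], runs                          # first bit value plus run lengths

def rle_decode(first_bit, runs):
    out, bit = [], first_bit
    for run in runs:
        out.extend([bit] * run)
        bit ^= 1
    return out

first, runs = rle_encode([1, 1, 0, 0, 0, 1])
print(first, runs, rle_decode(first, runs))          # 1 [2, 3, 1] [1, 1, 0, 0, 0, 1]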

[0025] The example apparatus may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

[0026] The example apparatus may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

[0027] The example apparatus may further include, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

[0028] The example apparatus may further include, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein, when the reorganize flag signals a reorganization, the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping indicating how an existing weight is mapped to an updated topology element; or a topology compressed indication to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.
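
For illustration, the following Python sketch applies a topology weight update of the kind described above to existing weight tensors; the parameter names mirror the description but are hypothetical, and plain nested lists stand in for weight tensors.

def apply_topology_update(weights, element_ids, new_dims, reorganize_flags, mappings):
    for elem, dims, reorganize, mapping in zip(element_ids, new_dims, reorganize_flags, mappings):
        rows, cols = dims
        new_tensor = [[0.0] * cols for _ in range(rows)]
        if reorganize:
            # mapping: (old_row, old_col) -> (new_row, new_col) for weights carried over to the new shape
            old_tensor = weights[elem]
            for (r0, c0), (r1, c1) in mapping.items():
                new_tensor[r1][c1] = old_tensor[r0][c0]
        weights[elem] = new_tensor
    return weights

model = {"fc1": [[1.0, 2.0], [3.0, 4.0]]}
updated = apply_topology_update(model, ["fc1"], [(3, 2)], [True], [{(0, 0): (0, 0), (1, 1): (2, 1)}])
print(updated["fc1"])                                # [[1.0, 0.0], [0.0, 0.0], [0.0, 4.0]]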

[0029] An example method includes encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0030] The example method may further include, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal a federated averaging weight update algorithm; a mechanism to signal down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define an extension to one or more data payload types; a mechanism to define an extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

[0031] The example method may further include, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

[0032] The example method may further include, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.

[0033] The example method may further include, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

[0034] The example method may further include, wherein the at least one information unit includes at least one NNR unit type.

[0035] The example method may further include, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

[0036] The example method may further include, wherein the one or more information units comprise a global random seed used for encoding and decoding computation, when the dithering flag is set.

[0037] The example method may further include, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information units.

[0038] The example method may further include, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

[0039] The example method may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

[0040] The example method may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

[0041] The example method may further include, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.

[0042] The example method may further include, wherein the mechanism to signal down-stream compression support comprises a downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

[0043] The example method may further include, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

[0044] The example method may further include, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

[0045] The example method may further include, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

[0046] The example method may further include, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to the compressed payload data types.

[0047] The example method may further include, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and an encoded bitstream of a predetermined algorithm.

[0048] The example method may further include, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

[0049] The example method may further include, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.

[0050] The example method may further include, wherein the mechanism to identify encoding and decoding procedures of a bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a Golomb encoding/decoding mechanism.

[0051] The example method may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

[0052] The example method may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

[0053] The example method may further include, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

[0054] The example method may further include, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein, when the reorganize flag signals a reorganization, the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping indicating how an existing weight is mapped to an updated topology element; or a topology compressed indication to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.

[0055] An example computer readable medium includes program instructions for causing an apparatus to perform at least the following: encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0056] The example computer readable medium may further include, wherein the computer readable medium comprises a non-transitory computer readable medium.

[0057] The example computer readable medium may further include, wherein the computer readable medium further causes the apparatus to perform the methods as described in any of the previous paragraphs.

BRIEF DESCRIPTION OF THE DRAWINGS

[0058] The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:

[0059] FIG. 1 shows schematically an electronic device employing embodiments of the examples described herein.

[0060] FIG. 2 shows schematically a user equipment suitable for employing embodiments of the examples described herein.

[0061] FIG. 3 further shows schematically electronic devices employing embodiments of the examples described herein connected using wireless and wired network connections.

[0062] FIG. 4 shows schematically a block chart of an encoder on a general level.

[0063] FIG. 5 is a block diagram showing an interface between an encoder and a decoder in accordance with the examples described herein.

[0064] FIG. 6 illustrates a system configured to support streaming of media data from a source to a client device.

[0065] FIG. 7 is a block diagram of an apparatus that may be configured in accordance with an example embodiment.

[0066] FIG. 8 illustrates example structure of a neural network representation (NNR) bitstream and an NNR unit, in accordance with an embodiment.

[0067] FIG. 9 is an example apparatus configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment.

[0068] FIG. 10 is an example method for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment.

[0069] FIG. 11 is an example method 1100 for defining a validation set performance, in accordance with an embodiment.

[0070] FIG. 12 is a block diagram of one possible and non-limiting system in which the example embodiments may be practiced.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0071] The following acronyms and abbreviations that may be found in the specification and/or the drawing figures are defined as follows:

3GP 3GPP file format

3GPP 3rd Generation Partnership Project

3GPP TS 3GPP technical specification

4CC four character code

4G fourth generation of broadband cellular network technology

5G fifth generation cellular network technology

5GC 5G core network

ACC accuracy

AI artificial intelligence

AIoT AI-enabled IoT

a.k.a. also known as

AMF access and mobility management function

AVC advanced video coding

CABAC context-adaptive binary arithmetic coding

CDMA code-division multiple access

CE core experiment

CU central unit

DASH dynamic adaptive streaming over HTTP

DCT discrete cosine transform

DSP digital signal processor

DU distributed unit

eNB (or eNodeB) evolved Node B (for example, an LTE base station)

EN-DC E-UTRA-NR dual connectivity

en-gNB or En-gNB node providing NR user plane and control plane protocol terminations towards the UE, and acting as secondary node in EN-DC

E-UTRA evolved universal terrestrial radio access, for example, the LTE radio access technology

FDMA frequency division multiple access

f(n) fixed-pattern bit string using n bits written (from left to right) with the left bit first

F1 or F1-C interface between CU and DU control interface

gNB (or gNodeB) base station for 5G/NR, for example, a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC

GSM Global System for Mobile communications

H.222.0 MPEG-2 Systems is formally known as ISO/IEC 13818-1 and as ITU-T Rec. H.222.0

H.26x family of video coding standards in the domain of the ITU-T

HLS high level syntax

IBC intra block copy

ID identifier

IEC International Electrotechnical Commission

IEEE Institute of Electrical and Electronics Engineers

I/F interface

IMD integrated messaging device

IMS instant messaging service

IoT internet of things

IP internet protocol

ISO International Organization for Standardization

ISOBMFF ISO base media file format

ITU International Telecommunication Union

ITU-T ITU Telecommunication Standardization Sector

JPEG joint photographic experts group

LTE long-term evolution

LZMA Lempel-Ziv-Markov chain compression

LZMA2 simple container format that can include both uncompressed data and LZMA data

LZO Lempel-Ziv-Oberhumer compression

LZW Lempel-Ziv-Welch compression

MAC medium access control

mdat MediaDataBox

MME mobility management entity

MMS multimedia messaging service

moov MovieBox

MP4 file format for MPEG-4 Part 14 files

MPEG moving picture experts group

MPEG-2 H.222/H.262 as defined by the ITU

MPEG-4 audio and video coding standard for ISO/IEC 14496

MSB most significant bit

NAL network abstraction layer

NDU NN compressed data unit

ng or NG new generation

ng-eNB or NG-eNB new generation eNB

NN neural network

NNEF neural network exchange format

NNR neural network representation

NR new radio (5G radio)

N/W or NW network

ONNX Open Neural Network Exchange

PB protocol buffers

PC personal computer

PDA personal digital assistant

PDCP packet data convergence protocol

PHY physical layer

PID packet identifier

PLC power line communication

PNG portable network graphics

PSNR peak signal-to-noise ratio

RAM random access memory

RAN radio access network

RFC request for comments

RFID radio frequency identification

RLC radio link control

RRC radio resource control

RRH remote radio head

RU radio unit

Rx receiver

SDAP service data adaptation protocol

sgd sign stochastic gradient descent

SGW serving gateway

SMF session management function

SMS short messaging service

st(v) null-terminated string encoded as UTF-8 characters as specified in ISO/IEC 10646

SVC scalable video coding

S1 interface between eNodeBs and the EPC

TCP-IP transmission control protocol-internet protocol

TDMA time divisional multiple access

trak TrackBox

TS transport stream

TUC technology under consideration

TV television

Tx transmitter

UE user equipment

ue(v) unsigned integer Exp-Golomb-coded syntax element with the left bit first

UICC Universal Integrated Circuit Card

UMTS Universal Mobile Telecommunications System

u(n) unsigned integer using n bits

UPF user plane function

URI uniform resource identifier

URL uniform resource locator

UTF-8 8-bit Unicode Transformation Format

WLAN wireless local area network

X2 interconnecting interface between two eNodeBs in LTE network

Xn interface between two NG-RAN nodes

[0072] Some embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms ‘data,’ ‘content,’ ‘information,’ and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.

[0073] Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.

[0074] As defined herein, a ‘computer-readable storage medium,’ which refers to a non- transitory physical storage medium (e.g., volatile or non-volatile memory device), can be differentiated from a ‘computer-readable transmission medium,’ which refers to an electromagnetic signal.

[0075] A method, apparatus and computer program product are provided in accordance with an example embodiment in order to implement one or more mechanisms for introducing a weight update compression interpretation into the neural network representation (NNR) bitstream.

[0076] The following describes in detail suitable apparatus and possible implementation of one or more mechanisms for introducing a weight update compression interpretation into the neural network representation (NNR) bitstream. In this regard reference is first made to FIG. 1 and FIG. 2, where FIG. 1 shows an example block diagram of an apparatus 50. The apparatus may be an Internet of Things (IoT) apparatus configured to perform various functions, for example, gathering information by one or more sensors, receiving or transmitting information, analyzing information gathered or received by the apparatus, or the like. The apparatus may comprise a video coding system, which may incorporate a codec. FIG. 2 shows a layout of an apparatus according to an example embodiment. The elements of FIG. 1 and FIG. 2 will be explained next.

[0077] The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system, a sensor device, a tag, or a lower power device. However, it would be appreciated that embodiments of the examples described herein may be implemented within any electronic device or apparatus which may process data by neural networks.

[0078] The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 may further comprise a display 32, e.g., in the form of a liquid crystal display, light emitting diode display, organic light emitting diode display, and the like. In other embodiments of the examples described herein the display may be any suitable display technology suitable to display media or multimedia content, for example, an image or a video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the examples described herein any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.

[0079] The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the examples described herein may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the examples described herein the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera capable of recording or capturing images and/or video. The apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.

[0080] The apparatus 50 may comprise a controller 56, a processor or processor circuitry for controlling the apparatus 50. The controller 56 may be connected to a memory 58 which in embodiments of the examples described herein may store both data in the form of image, audio data, video data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and/or decoding of audio, image, and/or video data or assisting in coding and/or decoding carried out by the controller.

[0081] The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.

[0082] The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals, for example, for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and/or for receiving radio frequency signals from other apparatus(es).

[0083] The apparatus 50 may comprise a camera 42 capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video image data for processing from another device prior to transmission and/or storage. The apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding. The structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.

[0084] With respect to FIG. 3, an example of a system within which embodiments of the examples described herein can be utilized is shown. The system 10 comprises multiple communication devices which can communicate through one or more networks. The system 10 may comprise any combination of wired or wireless networks including, but not limited to a wireless cellular telephone network (such as a GSM, UMTS, CDMA, LTE, 4G, 5G network, and the like), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth® personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.

[0085] The system 10 may include both wired and wireless communication devices and/or apparatus 50 suitable for implementing embodiments of the examples described herein.

[0086] For example, the system shown in FIG. 3 shows a mobile telephone network 11 and a representation of the Internet 28. Connectivity to the Internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.

[0087] The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.

[0088] The embodiments may also be implemented in a set-top box; for example, a digital TV receiver, which may or may not have a display or wireless capabilities, in tablets or (laptop) personal computers (PC), which have hardware and/or software to process neural network data, in various operating systems, and in chipsets, processors, DSPs and/or embedded systems offering hardware/software based coding.

[0089] Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.

[0090] The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), time division multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, 3GPP Narrowband IoT and any similar wireless communication technology. A communications device involved in implementing various embodiments of the examples described herein may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.

[0091] In telecommunications and data networks, a channel may refer either to a physical channel or to a logical channel. A physical channel may refer to a physical transmission medium such as a wire, whereas a logical channel may refer to a logical connection over a multiplexed medium, capable of conveying several logical channels. A channel may be used for conveying an information signal, for example a bitstream, from one or several senders (or transmitters) to one or several receivers.

[0092] The embodiments may also be implemented in so-called internet of things (IoT) devices. The IoT may be defined, for example, as an interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. The convergence of various technologies has enabled, and may enable, many fields of embedded systems, such as wireless sensor networks, control systems, home/building automation, and the like, to be included in the Internet of Things (IoT). In order to utilize the Internet, IoT devices are provided with an IP address as a unique identifier. The IoT devices may be provided with a radio transmitter, such as a WLAN or Bluetooth transmitter, or an RFID tag. Alternatively, IoT devices may have access to an IP-based network via a wired network, such as an Ethernet-based network or a power-line connection (PLC).

[0093] An MPEG-2 transport stream (TS), specified in ISO/IEC 13818-1 or equivalently in ITU-T Recommendation H.222.0, is a format for carrying audio, video, and other media as well as program metadata or other metadata, in a multiplexed stream. A packet identifier (PID) is used to identify an elementary stream (a.k.a. packetized elementary stream) within the TS. Hence, a logical channel within an MPEG-2 TS may be considered to correspond to a specific PID value.

[0094] Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF) and file format for NAL unit structured video (ISO/IEC 14496-15), which derives from the ISOBMFF.

[0095] A video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. A video encoder and/or a video decoder may also be separate from each other, for example, they need not form a codec. Typically, an encoder discards some information in the original video sequence in order to represent the video in a more compact form (e.g., at a lower bitrate).

[0096] Typical hybrid video encoders, for example, many encoder implementations of ITU-T H.263 and H.264, encode the video information in two phases. Firstly, pixel values in a certain picture area (or ‘block’) are predicted, for example, by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, for example, the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (for example, Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients, and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).
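A minimal sketch, in Python, of the second phase described above, assuming a fixed quantization step and an orthonormal 2-D DCT built explicitly; the block size, transform, and quantizer here are illustrative placeholders rather than any particular codec's definitions:

import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; rows are the transform's basis vectors.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

def code_prediction_error(block, prediction, q_step=8.0):
    d = dct_matrix(block.shape[0])
    residual = block - prediction            # prediction error from phase one
    coeffs = d @ residual @ d.T              # 2-D transform of the residual
    return np.round(coeffs / q_step)         # coarser q_step -> fewer bits, lower fidelity

def decode_prediction_error(quantized, prediction, q_step=8.0):
    d = dct_matrix(quantized.shape[0])
    residual = d.T @ (quantized * q_step) @ d  # dequantize and invert the transform
    return prediction + residual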

[0097] In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (IBC; a.k.a. intra-block-copy prediction and current picture referencing), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction, provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.

[0098] Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures. Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, for example, either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.

[0099] One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently when they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.

[00100] FIG. 4 shows a block diagram of a general structure of a video encoder. FIG. 4 presents an encoder for two layers, but it would be appreciated that the presented encoder may be similarly extended to encode more than two layers. FIG. 4 illustrates a video encoder comprising a first encoder section 500 for a base layer and a second encoder section 502 for an enhancement layer. Each of the first encoder section 500 and the second encoder section 502 may comprise similar elements for encoding incoming pictures. The encoder sections 500, 502 may comprise a pixel predictor 302, 402, prediction error encoder 303, 403 and prediction error decoder 304, 404. FIG. 4 also shows an embodiment of the pixel predictor 302, 402 as comprising an inter-predictor 306, 406, an intra-predictor 308, 408, a mode selector 310, 410, a filter 316, 416, and a reference frame memory 318, 418. The pixel predictor 302 of the first encoder section 500 receives base layer image(s) 300 of a video stream to be encoded at both the inter-predictor 306 (which determines the difference between the image and a motion compensated reference frame) and the intra-predictor 308 (which determines a prediction for an image block based only on the already processed parts of the current frame or picture). The output of both the inter-predictor and the intra-predictor is passed to the mode selector 310. The intra-predictor 308 may have more than one intra-prediction mode. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 310. The mode selector 310 also receives a copy of the base layer image 300. Correspondingly, the pixel predictor 402 of the second encoder section 502 receives enhancement layer image(s) 400 of a video stream to be encoded at both the inter-predictor 406 (which determines the difference between the image and a motion compensated reference frame) and the intra-predictor 408 (which determines a prediction for an image block based only on the already processed parts of the current frame or picture). The output of both the inter-predictor and the intra-predictor is passed to the mode selector 410. The intra-predictor 408 may have more than one intra-prediction mode. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 410. The mode selector 410 also receives a copy of the enhancement layer picture 400.

[00101] Depending on which encoding mode is selected to encode the current block, the output of the inter-predictor 306, 406 or the output of one of the optional intra-predictor modes or the output of a surface encoder within the mode selector is passed to the output of the mode selector 310, 410. The output of the mode selector 310, 410 is passed to a first summing device 321, 421. The first summing device may subtract the output of the pixel predictor 302, 402 from the base layer image 300/enhancement layer image 400 to produce a first prediction error signal 320, 420 which is input to the prediction error encoder 303, 403.

[00102] The pixel predictor 302, 402 further receives from a preliminary reconstructor 339, 439 the combination of the prediction representation of the image block 312, 412 and the output 338, 438 of the prediction error decoder 304, 404. The preliminary reconstructed image 314, 414 may be passed to the intra-predictor 308, 408 and to a filter 316, 416. The filter 316, 416 receiving the preliminary representation may filter the preliminary representation and output a final reconstructed image 340, 440 which may be saved in a reference frame memory 318, 418. The reference frame memory 318 may be connected to the inter-predictor 306 to be used as the reference image against which a future base layer image 300 is compared in inter-prediction operations. Subject to the base layer being selected and indicated to be the source for inter-layer sample prediction and/or inter-layer motion information prediction of the enhancement layer according to some embodiments, the reference frame memory 318 may also be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer image 400 is compared in inter-prediction operations. Moreover, the reference frame memory 418 may be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer image 400 is compared in inter-prediction operations.

[00103] Filtering parameters from the filter 316 of the first encoder section 500 may be provided to the second encoder section 502 subject to the base layer being selected and indicated to be the source for predicting the filtering parameters of the enhancement layer according to some embodiments.

[00104] The prediction error encoder 303, 403 comprises a transform unit 342, 442 and a quantizer 344, 444. The transform unit 342, 442 transforms the first prediction error signal 320, 420 to a transform domain. The transform is, for example, the DCT transform. The quantizer 344, 444 quantizes the transform domain signal, for example, the DCT coefficients, to form quantized coefficients.

[00105] The prediction error decoder 304, 404 receives the output from the prediction error encoder 303, 403 and performs the opposite processes of the prediction error encoder 303, 403 to produce a decoded prediction error signal 338, 438 which, when combined with the prediction representation of the image block 312, 412 at the second summing device 339, 439, produces the preliminary reconstructed image 314, 414. The prediction error decoder may be considered to comprise a dequantizer 346, 446, which dequantizes the quantized coefficient values, for example, DCT coefficients, to reconstruct the transform signal and an inverse transformation unit 348, 448, which performs the inverse transformation to the reconstructed transform signal wherein the output of the inverse transformation unit 348, 448 contains reconstructed block(s). The prediction error decoder may also comprise a block filter which may filter the reconstructed block(s) according to further decoded information and filter parameters.

[00106] The entropy encoder 330, 430 receives the output of the prediction error encoder 303, 403 and may perform a suitable entropy encoding/variable length encoding on the signal to provide error detection and correction capability. The outputs of the entropy encoders 330, 430 may be inserted into a bitstream, for example, by a multiplexer 508.

[00107] FIG. 5 is a block diagram showing the interface between an encoder 501 implementing neural network encoding 503, and a decoder 504 implementing neural network decoding 505 in accordance with the examples described herein. The encoder 501 may embody a device, software method or hardware circuit. The encoder 501 has the goal of compressing input data 511 (for example, an input video) to compressed data 512 (for example, a bitstream) such that the bitrate is minimized, and the accuracy of an analysis or processing algorithm is maximized. To this end, the encoder 501 uses an encoder or compression algorithm, for example to perform neural network encoding 503.

[00108] The general analysis or processing algorithm may be part of the decoder 504. The decoder 504 uses a decoder or decompression algorithm, for example to perform the neural network decoding 505 to decode the compressed data 512 (for example, compressed video) which was encoded by the encoder 501. The decoder 504 produces decompressed data 513 (for example, reconstructed data).

[00109] The encoder 501 and decoder 504 may be entities implementing an abstraction, may be separate entities or the same entities, or may be part of the same physical device.

[00110] The analysis/processing algorithm may be any algorithm, traditional or learned from data. In the case of an algorithm which is learned from data, it is assumed that this algorithm can be modified or updated, for example, by using optimization via gradient descent. One example of the learned algorithm is a neural network.

[00111] The method and apparatus of an example embodiment may be utilized in a wide variety of systems, including systems that rely upon the compression and decompression of media data and possibly also the associated metadata. In one embodiment, however, the method and apparatus are configured to compress the media data and associated metadata streamed from a source via a content delivery network to a client device, at which point the compressed media data and associated metadata is decompressed or otherwise processed. In this regard, FIG. 6 depicts an example of such a system 600 that includes a source 602 of media data and associated metadata. The source may be, in one embodiment, a server. However, the source may be embodied in other manners if so desired. The source is configured to stream boxes containing the media data and associated metadata to the client device 604. The client device may be embodied by a media player, a multimedia system, a video system, a smart phone, a mobile telephone or other user equipment, a personal computer, a tablet computer or any other computing device configured to receive and decompress the media data and process associated metadata. In the illustrated embodiment, boxes of media data and boxes of metadata are streamed via a network 606, such as any of a wide variety of types of wireless networks and/or wireline networks. The client device is configured to receive structured information containing media, metadata and any other relevant representation of information containing the media and the metadata and to decompress the media data and process the associated metadata (e.g. for proper playback timing of decompressed media data).

[00112] An apparatus 700 is provided in accordance with an example embodiment as shown in FIG. 7. In one embodiment, the apparatus of FIG. 7 may be embodied by a source 602, such as a file writer which, in turn, may be embodied by a server, that is configured to stream a compressed representation of the media data and associated metadata. In an alternative embodiment, the apparatus may be embodied by a client device 604, such as a file reader which may be embodied, for example, by any of the various computing devices described above. In either of these embodiments and as shown in Fig. 7, the apparatus of an example embodiment includes, is associated with or is in communication with a processing circuitry 702, one or more memory devices 704, a communication interface 706, and optionally a user interface.

[00113] The processing circuitry 702 may be in communication with the memory device 704 via a bus for passing information among components of the apparatus 700. The memory device may be non-transitory and may include, for example, one or more volatile and/or non volatile memories. In other words, for example, the memory device may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processing circuitry). The memory device may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure. For example, the memory device may be configured to buffer input data for processing by the processing circuitry. Additionally or alternatively, the memory device may be configured to store instructions for execution by the processing circuitry.

[00114] The apparatus 700 may, in some embodiments, be embodied in various computing devices as described above. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. The apparatus may therefore, in some cases, be configured to implement an embodiment of the present disclosure on a single chip or as a single ‘system on a chip.’ As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.

[00115] The processing circuitry 702 may be embodied in a number of different ways. For example, the processing circuitry may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special-purpose computer chip, or the like. As such, in some embodiments, the processing circuitry may include one or more processing cores configured to perform independently. A multi-core processing circuitry may enable multiprocessing within a single physical package. Additionally or alternatively, the processing circuitry may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.

[00116] In an example embodiment, the processing circuitry 702 may be configured to execute instructions stored in the memory device 704 or otherwise accessible to the processing circuitry. Alternatively or additionally, the processing circuitry may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processing circuitry may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Thus, for example, when the processing circuitry is embodied as an ASIC, FPGA or the like, the processing circuitry may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processing circuitry is embodied as an executor of instructions, the instructions may specifically configure the processing circuitry to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processing circuitry may be a processor of a specific device (e.g., an image or video processing system) configured to employ an embodiment of the present invention by further configuration of the processing circuitry by instructions for performing the algorithms and/or operations described herein. The processing circuitry may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processing circuitry.

[00117] The communication interface 706 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data, including video bitstreams. In this regard, the communication interface may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface may alternatively or also support wired communication. As such, for example, the communication interface may include a communication modem and/or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms.

[00118] In some embodiments, the apparatus 700 may optionally include a user interface that may, in turn, be in communication with the processing circuitry 702 to provide output to a user, such as by outputting an encoded video bitstream and, in some embodiments, to receive an indication of a user input. As such, the user interface may include a display and, in some embodiments, may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. Alternatively or additionally, the processing circuitry may comprise user interface circuitry configured to control at least some functions of one or more user interface elements such as a display and, in some embodiments, a speaker, ringer, microphone and/or the like. The processing circuitry and/or user interface circuitry comprising the processing circuitry may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processing circuitry (e.g., memory device, and/or the like).

[00119] Various devices and systems described in FIGs. 1 to 7 enable mechanisms for implementing an incremental weight update compression interpretation into a neural network representation bitstream. A neural network representation (NNR) describes compression of neural networks for efficient transport. Some example high-level syntax (HLS) relevant to the technologies of weight update compression is described by various embodiments of the present invention.

[00120] FIG. 8 illustrates an example structure of a neural network representation (NNR) bitstream 802 and an NNR unit 804a, in accordance with an embodiment. An NNR bitstream may conform to the specification for compression of neural networks for multimedia content description and analysis. NNR specifies a high-level bitstream syntax (HLS) for signaling compressed neural network data in a channel as a sequence of NNR units, as illustrated in FIG. 8. As depicted in FIG. 8, according to this structure, an NNR bitstream 802 includes multiple elemental units termed NNR Units (e.g. NNR units 804a, 804b, 804c, ... 804n). An NNR Unit (e.g., the NNR unit 804a) represents a basic high-level syntax structure and includes three syntax elements: an NNR Unit Size 806, an NNR unit header 808, and an NNR unit payload 810.

[00121] Each NNR unit may have a type that defines the functionality of the NNR Unit and allows correct interpretation and decoding procedures to be invoked. In an embodiment, NNR units may contain different types of data. The type of data that is contained in the payload of an NNR Unit defines the NNR Unit’s type. This type is specified in the NNR unit header. The following table specifies the NNR unit header types and their identifiers.

[00122] An NNR unit is a data structure for carrying neural network data and related metadata which is compressed or represented using this specification. NNR units carry compressed or uncompressed information about neural network metadata, topology information, complete or partial layer data, filters, kernels, biases, quantization weights, tensors, or the like. An NNR unit may include the following data elements:

NNR unit size: This data element signals the total byte size of the NNR Unit, including the NNR unit size.

NNR unit header: This data element contains information about the NNR unit type and related metadata.

NNR unit payload: This data element contains compressed or uncompressed data related to the neural network.
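A minimal parsing sketch for this structure, assuming purely for illustration that the NNR unit size is carried as a 4-byte big-endian integer and that the header begins with a one-byte unit type; the actual field widths and header contents are defined by the NNR specification and are not reproduced here:

import struct

def read_nnr_unit(bitstream, offset=0):
    # The NNR unit size counts the whole unit, including the size field itself.
    (unit_size,) = struct.unpack_from(">I", bitstream, offset)   # assumed 4-byte size field
    unit = bitstream[offset:offset + unit_size]
    unit_type = unit[4]                 # assumed 1-byte type at the start of the unit header
    remainder = unit[5:]                # remaining header fields followed by the unit payload
    return unit_type, remainder, offset + unit_size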

[00123] An NNR bitstream is composed of a sequence of NNR Units and/or aggregate NNR units. The first NNR unit in an NNR bitstream shall be an NNR start unit (e.g. NNR unit of type NNR_STR).

[00124] Neural network topology information can be carried as NNR units of type NNR_TPL. Compressed NN information can be carried as NNR units of type NNR_NDU. Parameter sets can be carried as NNR units of type NNR_MPS and NNR_LPS. An NNR bitstream is formed by serializing these units.

[00125] Image and video codecs may use one or more neural networks at decoder side, either within the decoding loop or as a post-processing step, for both human-targeted and machine targeted compression.

[00126] Some of the example implementations proposed by various embodiments are described below:

[00127] NNR model parameter set unit header syntax:

[00128] NNR model parameter set unit payload syntax:

[00129] Quantization method identifiers for the case of NN compression:

[00130] Some examples of data definitions and data types related to the various embodiments are described in the following paragraphs:

[00131] ue(k): unsigned integer k-th order, e.g. Exp-Golomb-coded syntax element. The parsing process for this descriptor is according to the following pseudo-code with x as a result:

[00132] ie(k): signed integer k-th order, e.g. Exp-Golomb-coded syntax element. The parsing process for this descriptor is according to the following pseudo-code with x as a result:
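The specification's parsing pseudo-code is not reproduced in this excerpt; the Python sketch below shows one common formulation of k-th order Exp-Golomb parsing (an order-0 prefix for the quotient followed by k raw bits), with a zigzag mapping assumed for the signed variant. read_bit is assumed to be a callable returning the next bit of the bitstream:

def ue0(read_bit):
    # Order-0 Exp-Golomb: leading zeros, a terminating 1, then as many suffix bits.
    leading_zeros = 0
    while read_bit() == 0:
        leading_zeros += 1
    suffix = 0
    for _ in range(leading_zeros):
        suffix = (suffix << 1) | read_bit()
    return (1 << leading_zeros) - 1 + suffix

def ue_k(read_bit, k):
    # k-th order: order-0 code for the quotient, then k raw bits for the remainder.
    quotient = ue0(read_bit)
    remainder = 0
    for _ in range(k):
        remainder = (remainder << 1) | read_bit()
    return (quotient << k) | remainder

def ie_k(read_bit, k):
    # Signed variant: assumed zigzag mapping 0, 1, -1, 2, -2, ... of the unsigned code.
    n = ue_k(read_bit, k)
    return (n + 1) // 2 if n % 2 else -(n // 2)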

[00133] A payload identifier may suggest the decoding method. The following table provides NNR compressed data payload types:

[00134] Topology information about potential changes caused by a pruning algorithm is provided in nnr_topology_unit_payload():

[00135] nnr_pruning_topology_container() is specified as follows:

[00136] bit_mask() is specified as follows:

[00137] Various embodiments propose mechanisms for introducing weight update compression interpretation into the NNR bitstream. Some example proposals include mechanisms for:

Signalling incremental weight update compression mode of operation;

Potential introduction of a weight update unit type among NNR unit types;

Signalling mechanisms required for dithering algorithms;

Signalling global random seed;

Signalling if a model is an inference friendly quantized model;

Signalling incremental weight update quantization algorithms;

Signalling federated averaging weight update algorithm;

Signalling down-stream compression support;

Signalling asynchronous incremental weight update mode;

Source_id and operation_id to identify state of communicated information;

Global codebook approaches for weight update quantization;

Extension to some of the data payload types and payload;

Syntax and semantics of some quantization algorithms;

Encoding and decoding procedures of bitmask applicable to quantization algorithm outputs; and

Syntax and semantics associated with topology change.

[00138] incremental_weight_update_flag: the incremental weight update flag is a flag that signals to a decoder that the bitstream corresponds to weight update compression and not to weight compression. The incremental_weight_update_flag indicates to the decoder to invoke the correct decoding mechanism upon receiving the data and to decode the correct payload types.

[00139] For example, when the incremental_weight_update_flag is set to value 1, it means that the NNR_QNT or NNR_NDU consists of data specific to weight update compression and decompression algorithms. The same applies to the interpretation of other data units.

[00140] Incremental_weight_update_flag may be introduced into different locations in the existing NNR v1 syntax and semantics. One suggested location may be nnr_model_parameter_set_header(), for example:

[00141] In an embodiment, nnr_model_parameter_set_header() may be stored in the NNR payload data or its header.

[00142] NNR Weight Update Unit (NNR_WUU): a data unit of an NNR weight update compression data unit type, identified as NNR_WUU (NNR weight update unit), may be an alternative to adapting the existing data units from the NNR v1 syntax. This data unit may contain information relevant to weight update strategies.

[00143] dithering_flag: to support dithering techniques in quantization, encoding, and decoding pipelines, a flag, e.g., dithering_flag, is introduced. For example, when dithering_flag is set to value 1, a random seed is present that may be used for all the computations. During the decoding process, the client may use the random seed to generate a random sequence which will be used during the reconstruction of the quantized values.

[00144] random_seed: a global random seed may be required for some algorithms. For example, in dithering-dependent algorithms, a global random seed may be used. Some embodiments propose the random seed to be part of the information to be signalled.
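A minimal sketch of how a signalled random_seed could drive dithered quantization and reconstruction, assuming additive uniform dither regenerated identically at the encoder and the decoder from the same seed; this is an illustration of the idea, not a normative procedure:

import numpy as np

def dither_quantize(weight_update, step, seed):
    rng = np.random.default_rng(seed)                        # seed signalled in the bitstream
    dither = rng.uniform(-0.5, 0.5, weight_update.shape) * step
    return np.round((weight_update + dither) / step).astype(np.int32)

def dither_dequantize(quantized, step, seed):
    rng = np.random.default_rng(seed)                        # decoder regenerates the same sequence
    dither = rng.uniform(-0.5, 0.5, quantized.shape) * step
    return quantized * step - dither                         # remove the dither added at the encoder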

[00145] Inference_friendly_flag: in NN compression, a model may be inference friendly, e.g., its weights and/or activations may be quantized. In weight update compression, such methods may require specific algorithmic treatment. Accordingly, some embodiments propose signalling the presence of such models in the bitstream.

[00146] quantized_weight_update_flag: indicates whether the weight updates are quantized or, instead, no quantization was involved. Alternatively, the quantization_algorithm_id may be used to indicate that no quantization algorithm was applied to the weight updates by defining an id for such a case.

[00147] quantization_algorithm_id: an algorithm identifier that is signalled for the weight update quantization. The decoder may use this information for performing a suitable dequantization operation. Example algorithms may include:

[00148] An alternative example to quantization_algorithm_id may be that when the incremental_weight_update_flag indicates a weight update compression mode, the interpretation of mps_quantization_method_flags may be according to the quantization techniques for weight update compression. In this example, the quantization method identifiers may be interpreted or complemented with the identifiers relevant to the incremental weight update compression, e.g., the mapping of the quantization method identifier to the actual quantization algorithm is performed by using a different look-up table, such as the table above.

[00149] fed_alg_id: in case of a federated algorithm, an agreed federated learning algorithm id may be signalled. Examples of the id may include FedAVG, FedProx, and the like. Another example usage may be for indicating a specific step, such as enabling a specific loss function during the training process.

[00150] For example, the fed_alg_id may take one of the values in the following table:

[00151] elapsed_time: a data field that communicates the time passed since the last communication between two parties; the data field may be used for communication from a server to a client or from the client to the server. The elapsed_time may be used in conjunction with a flag to determine the direction of the communication or, in another embodiment, two elapsed_time data fields may be used, one for each communication direction. In another embodiment, the elapsed_time may indicate the number of rounds of communication between the server and the client, instead of the duration that passed.

[00152] server_round_ID: specifies a unique identifier for the communication round from the server to one or more clients. The value of the identifier may be derived from the value that server_round_ID had in the previous communication round from the server to one or more clients, for example, it can be incremented by 1.

[00153] client_round_ID specifies a unique identifier for the communication round from a client to a server. The identifier may be, for example, the same value that the server had previously signalled to the client, or a value which may be derived from the value that the server had previously signalled to the client (for example, an incremented value).

[00154] model_reference_ID is an ID that indicates what model may be used as a base model. The model_reference_ID may indicate a topology of the base model, or both the topology and an initialization of at least some of the weights of the base model. The training session may be performed by the client, by training the base model. Weight-updates may be derived from the weights of the base model before the training performed by the client and the weights of the base model after the training performed by the client. The model reference id may point to a URI or include a name identifier predefined and globally distributed, for example, to all participants.

[00155] weight_reference_ID specifies a unique identifier of the weights for a base model.

[00156] validation_set_performance: In a communication from a server to a client, the validation_set_performance may signal to the client a performance indication, determined based on a validation set. In a communication from the client to the server, the validation_set_performance may include an indication of what performance level a weight-update associated with this validation_set_performance may achieve, where the performance level may be determined based on a validation dataset present at the client’s side. This may be informative for the server on how to use the received weight-update from that client. For example, the server may decide to multiply the received weight-updates from clients by using multiplier values derived from the validation_set_performance values received from clients. This information may be available on one side of the communication or at both communication ends.

[00157] Copy_client_wu may be used in the bitstream sent by a client to a server, for indicating to use the latest weight-update received from this client as the new weight-update. In other words, after receiving this information, the server may copy the previous weight-update received from this client and re-use it as the current weight-update from this client. The client may not need to send the actual weight-update data which may be a replica of the previous weight-update.

[00158] Copy_server_wu may be used in the bitstream sent by a server to a client, for indicating to use the latest weight-update received from the server as the new weight-update from the server. This weight-update from the server may be a weight-update, which was obtained by aggregating one or more weight-updates received from one or more clients. In some other embodiment, this syntax element may be used for indicating to use the latest weights (instead of weight-update) received from the server as the new weights from the server. The server may not need to send the actual weight-update which may be a replica of the previous weight update.

[00159] dec_update may specify an update to a decoder neural network, where the decoder neural network may be a neural network that performs one of the operations for decoding a weight-update.

[00160] prob_update may specify an update to a probability model, where the probability model may be a neural network that estimates a probability to be used by a lossless decoder (such as an arithmetic decoder) for losslessly decoding a weight-update.

[00161] cache_enabled_flag may specify whether a caching mechanism is available and may be enabled to store weight updates on the server or on the client.

[00162] cache_depth may specify the number of cached sequences of weight updates that are stored. It may be used to signal to what depth of stored data an encoding or decoding process may refer. The cache_depth may be gated to save space in the bitstream, e.g., using cache_enabled_flag.

[00163] downstream_flag: This flag indicates whether downstream compression is used, where downstream refers to the communication direction from server to client(s). The server may or may not perform downstream compression depending on the configuration. This information may also be signaled at the session initialization. If downstream_flag is set to 1, the receiver of the bitstream may need to perform a decompression operation on the received bitstream.

[00164] async_flag: depending on the mode of operation, the clients may work in an asynchronous mode, that is, after they upload their information to the server, they continue their training procedure and apply a specific treatment to the downstream information that they receive. Similarly, the server may require specific steps when receiving the information from clients in order to treat it. In such a case, the async_flag may be communicated to indicate that such operation is allowed if the clients have the capacity. This may also be done at the session initialization.

[00165] unique_operation_id: the unique operation id allows communication of specific information, e.g., the last time that the server and client communicated and, if necessary, some small synchronization information. Such information may be provided as a specific unique identifier consisting of pieces of information specifically designed for each part of the communication, e.g., a specific client identifier, a server identifier, the elapsed time since the last communication, etc. The information is not limited to the examples provided.

[00166] source_id: the source id is similar or substantially similar to the unique_operation_id; it indicates the identity of the source of the information. The source id may indicate the server or the client, depending on the value. The source_id may be defined as a flag to be interpreted as the communication direction, or as a string identifier for providing more detailed information.

[00167] An example use case may be that the server may use this syntax element to correctly subtract from the global (aggregated) weight update a certain client’s weight update. In an example, assume that a federated learning session involves two clients and a server. The server initially sends the initial model to the two clients. Each client uses its own data for training the model for a number of iterations. Each client may compute a weight-update as the difference between the weights of the model after the training iterations and the weights of the latest model received from the server. In another embodiment, the weight-update may be output by an auxiliary neural network, where the inputs to the auxiliary neural network are the weights of the model after the training iterations and the weights of the latest model received from the server. Each client communicates the weight-update or a compressed version of the weight-update, by also signaling a unique identifier of the client within the source_id syntax element. The server may compute an aggregated weight-update, for example, by averaging all or some of the weight-updates received from the clients. The aggregated weight-update may be communicated to the clients. For one or more of the clients, the server may decide to communicate a custom version of the aggregated weight-update, where the weight-update from a certain client with ID X is subtracted from the aggregated weight-update, and the resulting custom aggregated weight-update is communicated to the respective client with ID X. Thus, in this example, source_id would contain the client ID X. The information in source_id may therefore be used to communicate the correct custom aggregated weight-update to the clients. In another embodiment, the server may use the aggregated weight-update for updating the model, and subtract the weight-update of a certain client from the weights of the updated model, and the resulting custom weights of the updated model may be communicated to that client.
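A sketch of the server-side use of source_id described above, assuming plain averaging of weight-updates held as numpy arrays in a dictionary keyed by client identifier; the function and variable names are illustrative, not normative syntax:

import numpy as np

def aggregate(updates_by_client):
    # FedAvg-style aggregation: average the weight-updates of all clients.
    updates = list(updates_by_client.values())
    return sum(updates) / len(updates)

def custom_aggregate_excluding(client_id, updates_by_client):
    # Use source_id (here client_id) to exclude that client's own contribution.
    others = [u for cid, u in updates_by_client.items() if cid != client_id]
    return sum(others) / len(others)

# Example with two clients, as in the scenario above.
updates = {"client_A": np.array([0.2, -0.1]), "client_B": np.array([0.4, 0.3])}
global_wu = aggregate(updates)                               # aggregated weight-update
wu_for_A = custom_aggregate_excluding("client_A", updates)   # custom version sent to client_A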

[00168] Global_codebook: this is different from codebook-based quantization for NNR compression, where the codebook is calculated and transferred with the NN. One global codebook may exist, and it is shared once with all the devices (e.g. clients and/or server) that are collaborating (sending or receiving a weight update). Such codebook information may be shared once with all the participants in the computation process. For example, in an implementation, a global_codebook() may be shared, distributed, or hosted in a remotely accessible network location.

[00169] In another embodiment, such a codebook may be further compressed by some quantization algorithm since it represents weight update approximations.

[00170] The following example describes implementations of some of the proposed embodiments:

[00171] global_codebook(): provides a shared codebook that may be defined as follows:

[00172] Number_of_elements: provides number of elements in the codebook

[00173] Codebook_value: provides a value corresponding to the codebook element

[00174] In another embodiment, the global codebook may be defined based on a compressed codebook, for example:

[00175] step_value: the quantization step for the codebook.

[00176] quantized_codebook_value: is the uniformly quantized value of a floating-point codebook_value, obtained as floor(codebook_value / step_value).

[00177] For a compressed global codebook, codebook_value[i] = step_value * quantized_codebook_value[i] is calculated after decoding the global codebook.
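A short sketch of this reconstruction; the uniform dequantization simply multiplies each decoded index by the signalled step, so the recovered codebook_value is accurate to within one step_value:

def reconstruct_codebook(step_value, quantized_codebook_values):
    # codebook_value[i] = step_value * quantized_codebook_value[i]
    return [step_value * q for q in quantized_codebook_values]

# Example with an assumed step_value of 0.01:
codebook = reconstruct_codebook(0.01, [12, -3, 0, 7])   # -> [0.12, -0.03, 0.0, 0.07]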

[00178] wu_pred_coeffs: this syntax element may be a list of coefficients to be used for predicting a weight-update from one or more previously decoded weight-updates. This syntax element may be used for example by a server, for predicting a weight-update of a client, given one or more previously decoded weight-updates from that client and one or more previously decoded weight-updates from one or more other clients.

[00179] wu_pred_wuids: this syntax element may be a list of IDs which identify uniquely one or more previously decoded weight-updates to be used for predicting the weight-update of a client. In an alternative implementation, this syntax element may be a list of tuples, where each tuple includes a first element which is an identifier of a client and a second element which is an identifier of the weight-update of the client identified by the first element.

[00180] wu_pred_mode_id: this syntax element may indicate what algorithm or mode is to be used for predicting a weight-update from one or more previously decoded weight-updates. This syntax element may be used, for example, by a server for predicting a weight-update of a client, given one or more previously decoded weight-updates from that client and one or more previously decoded weight-updates from one or more other clients. For example, one algorithm ID may indicate to use a linear combination of previously decoded weight-updates, where the coefficients for the linear combination may be indicated by wu_pred_coeffs and where the previously decoded weight-updates to be used for the prediction may be indicated by wu_pred_wuids.
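A sketch of the linear-combination prediction mode mentioned above, assuming wu_pred_coeffs and wu_pred_wuids have already been decoded and that previously decoded weight-updates are available in a dictionary keyed by their IDs:

import numpy as np

def predict_weight_update(wu_pred_coeffs, wu_pred_wuids, decoded_weight_updates):
    # predicted = sum_i wu_pred_coeffs[i] * decoded_weight_updates[wu_pred_wuids[i]]
    predicted = np.zeros_like(decoded_weight_updates[wu_pred_wuids[0]], dtype=np.float64)
    for coeff, wu_id in zip(wu_pred_coeffs, wu_pred_wuids):
        predicted += coeff * decoded_weight_updates[wu_id]
    return predicted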

[00181] Embodiment related to distributed data processing environments or architectures:

[00182] As an alternative to a high-level syntax-based communication, the model parameter set information may be shared via some common storage in which a unique identifier may be used to determine the correct parameters and payloads. Such identifier may include a specific hash id, a time-stamped id that may be used by a server and/or a client to determine the correct payload for orderly processing of information.

[00183] Extension to NNR compressed data payload types

[00184] In an embodiment, it is proposed to add an incremental weight update type to the NNR compressed data payload types, as described in the following table. The incremental weight update type may invoke the necessary encoding and decoding procedures for a specific algorithm.

[00185] NNR compressed data payload types

[00186] Further, it is proposed to add data structures to the compressed data unit payload. The payload in combination with the quantization algorithm may result in the proper encoding and may invoke the proper decoding mechanism for weight updates. One example implementation is described in the following table:

[00187] incremental_weight_update_payload(): provides the correct data formats and invokes the necessary decoding procedures for the incremental weight update payload.

[00188] An Example implementation of the incremental weight update payload:

[00189] Incremental_weight_update_payload(): is an abstraction that may include the semantics and encoded bitstream of a specific algorithm, or may include a pointer to a decoding mechanism that needs to be invoked.

[00190] As an example, a compressed payload may be implemented, as described in the following table:

[00191] In yet another embodiment, on the decoder side, incremental_weight_update_payload() may trigger a specific decoding mechanism where quantization_algorithm_id and NNR_PT_INCWU determine the decoding procedure according to the encoding procedure.

[00192] Syntax and semantics of quantization algorithms

[00193] Different algorithms may result in different payloads that necessitate different encoding/decoding procedures. The output of the quantization may be directly output as an efficient bitmask representation or encoded using some proper encoding mechanism; examples of such encoding mechanisms include run-length encoding (RLE), position encoding combined with RLE, relative significant bit position encoding, or a combination of encoding mechanisms, e.g., Golomb-based encoding of a relative RLE encoding. An example syntax for various quantization algorithms may be defined as follows:

[00194] Sign SGD quantization: using sign SGD produces a bitmask indicating changes in the weight update compression; the payload may be the following:

[00195] sign_sgd_quant_payload(): defines the payload for the signSGD quantization. Multiple implementations are possible, e.g., a plain bitmask; in this example, a bitmask_size may indicate the size of the bitmask, and the bit representation of the mask is transferred. The following may be an example implementation:

[00196] Bit_mask_size: indicates the size of the bitmask. The size of the bitmask descriptor may be gated by some flag to allow variable length bitmask sizes.

[00197] Bit_mask_values: represents an array of bit values in the bitmask.
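A sketch of the signSGD idea, assuming the bitmask stores one bit per weight-update element (1 for a positive update, 0 otherwise) and the decoder applies a fixed or separately signalled step size; this illustrates the concept rather than the normative payload:

import numpy as np

def sign_sgd_encode(weight_update):
    bit_mask_values = (weight_update > 0).astype(np.uint8)   # 1 = positive direction
    return len(bit_mask_values), bit_mask_values             # bit_mask_size, bit_mask_values

def sign_sgd_decode(bit_mask_values, step):
    # Reconstruct +step or -step per element from the sign bitmask.
    return np.where(np.asarray(bit_mask_values) == 1, step, -step)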

[00198] scaled_binary_quant_payload(): represents the semantics for scaled binary quantization of weight updates. In this method each weight update may be represented by a nonzero mean of values in the strongest direction (positive or negative). Accordingly, a mean value and a bitmask indicating the non-zero values may be transferred.
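A sketch of scaled binary quantization as described above, assuming the scale is the mean of the elements in the dominant direction and the bitmask marks their positions; illustrative only:

import numpy as np

def scaled_binary_encode(weight_update):
    positive_dominant = weight_update.sum() >= 0
    mask = weight_update > 0 if positive_dominant else weight_update < 0
    mean_value = weight_update[mask].mean() if mask.any() else 0.0   # signed scale
    return mean_value, mask.astype(np.uint8)

def scaled_binary_decode(mean_value, mask):
    # Non-zero positions take the transmitted mean value; all others are zero.
    return mean_value * np.asarray(mask, dtype=np.float32)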

[00199] The bitmask may be further encoded using some suitable compression mechanism such as RLE, Golomb, Golomb-Rice, position encoding, or a combination of the techniques.

[00200] single_scale_ternary_quant_payload (): A single scale ternary quantization algorithm produces one scale value that reflects the amount of weight update and a mask that indicates the direction of change, which may be positive, negative or no change. The semantics for single scale ternary quantization of weight updates may be a bitmask and a mean value. In an example approach, both positive and negative directions, zero locations, and one mean of non-zeros for both directions may be encoded. The example is described in the table below, where two bits are used to indicate direction.

[00201] double_scale_ternary_quant_payload(): A double scale ternary quantization is an algorithm that produces scale values in both positive and negative directions. In other words, two mean values are communicated. For such a method, the payload may be similar or substantially similar to single_scale_ternary_quant_payload(), but two mean values are communicated. The following may be an example implementation:

[00202] Alternatively, instead of a 2-bit bitmask_value, the syntax may include the following:

for( i = 0; i < num_weight_updates; i++ ) {
    bitmask_nonzero_flag[i]	u(1)
    if( bitmask_nonzero_flag[i] )
        bitmask_sign_flag[i]	u(1)
}

[00203] The semantics may be specified as follows:
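The normative table is not reproduced here; the Python sketch below shows one illustrative reconstruction for the alternative syntax above, assuming bitmask_sign_flag equal to 1 selects the positive direction and that sign flags are present only for non-zero positions (for single scale ternary quantization, scale_neg would simply be -scale_pos):

import numpy as np

def ternary_decode(nonzero_flags, sign_flags, scale_pos, scale_neg):
    # nonzero_flags[i] == 0 -> no change; otherwise the next sign flag picks the direction.
    out = np.zeros(len(nonzero_flags), dtype=np.float32)
    j = 0
    for i, nz in enumerate(nonzero_flags):
        if nz:
            out[i] = scale_pos if sign_flags[j] else scale_neg
            j += 1
    return out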

[00204] global_codebook_quant_payload(): the global codebook quantization mode allows signalling an index corresponding to the values of a partition. In this approach, a list of indexes is communicated. The possible design may include the following items:

number_of_indices: the total number of indices

list_of_indexes: the indexes to the codebook elements of the quantization codebook

[00205] In an embodiment, such a global codebook may operate on a chunk of data rather than on each weight update element. An example design for a channel-wise partition with a maximum of 2048 channels and a codebook of size 256 may be as follows:

[00206] The global codebook may be further compressed using an entropy coding approach to gain further compression.

[00207] In yet another embodiment, as the number of partitions in each neural network may be different, the descriptor size may be gated to dynamically adjust the size of the codebook payload. The same may apply to the descriptor size of the list of indexes.

[00208] In another embodiment, for ternary family of quantization techniques, two bitmasks may be encoded instead of a two-bit bitmask.

[00209] In another embodiment, for the family of scaled quantization techniques, the scales may be further compressed using some other quantization technique, e.g., a uniform quantization with the scale step agreed only once. This further allows reducing the number of bits for representing the scales. Other quantization techniques are possible, e.g., when multiple scales exist for one tensor or in an aggregated mode where all the scales of all the tensors are put together. In another embodiment, only an update to the scale(s) is signalled, such as the difference between the previously-signaled scale(s) and the current scale(s).

[00210] Encoding / Decoding of bitmasks

[00211] In an embodiment, a portion of the output of the quantization algorithms for weight update compression, for example an essential portion, may be signalled as a bitmask. To further compress the bitmasks, encoding may be performed on them. In the case of weight-update compression, the bitmasks may represent binary or ternary representations depending on the quantization algorithm. Such bitmasks may be encoded in several ways to obtain further compressibility. A proper encoding and decoding mechanism may be invoked at the encoder and decoder to interpret the bitmask. Some possibilities may include:

[00212] Run-length encoding: in some example cases, the bitmasks may be highly sparse; in such examples, run-length encoding variants may be applied to further compress the bitmasks. For example, the following table depicts a run_len encoded payload for a bitmask:

[00213] In yet another embodiment, an average length of the runs may be estimated, and this may be used to determine the number of bits for run_size using log2(average_run_length), where log2 is the base-2 logarithm. In this embodiment, a length of the descriptor may be signalled, or a bit width of the run_size and run_length descriptors may be adjusted by using a gating mechanism.

[00214] On the decoder side, the run-length encoded data may be parsed and decoded according to the encoding convention to populate a decompressed bitmask.

[00215] Alternatively, a count of consecutive zeros between each pair of bits equal to 1 may be coded, by using the following example syntax:

rle_encoded_data()	Descriptor
    size_run_flag	u(1)
    count_runs	u(7 + size_run_flag * 8)
    for( i = 0; i < count_runs; i++ ) {
        run_length[i]	u(7 + size_run_flag * 8)
    }

[00216] run_length: represents the number of times the value of 0 is repeated before the next value of 1.
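A sketch of the decoder-side reconstruction for this alternative syntax, where each run_length gives the number of zeros preceding the next 1; the total bitmask size is assumed to be known from elsewhere in the payload (e.g. a bitmask size field):

def rle_decode(run_lengths, total_size):
    bitmask = []
    for run in run_lengths:
        bitmask.extend([0] * run)   # run zeros ...
        bitmask.append(1)           # ... followed by a single 1
    bitmask.extend([0] * (total_size - len(bitmask)))   # trailing zeros, if any
    return bitmask

# Example: run_lengths [3, 0, 2] and total_size 8 -> [0, 0, 0, 1, 1, 0, 0, 1]
assert rle_decode([3, 0, 2], 8) == [0, 0, 0, 1, 1, 0, 0, 1]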

[00217] Position/length-encoding: The bitmasks may be further compressed by signalling the length between 0s or 1s. In such an example, a bitmask may be converted to a list of integers indicating the locations of 1s or 0s, depending on which number is more populated in the bitmask. This may be similar to run-length encoding, but since there are only two run_values, a chosen convention may be signalled once.

[00218] run_convention: may signal whether the length-encoding is signalling the number of zeros between ones or the number of ones between zeros.

[00219] The length-encoded stream may be further compressed using either entropy coding, e.g., CABAC-based approaches, or some other mechanism, e.g., Golomb encoding.

[00220] Golomb encoded payload

[00221] A bitmask may be encoded using Golomb encoding. The following table provides an example of the semantics of the payload:

[00222] The length of the descriptors is provided as an example and longer or shorter length may be used.

[00223] encoded_stream_size: indicates the total number of bits representing a bitmask after being encoded using Golomb encoding.

[00224] golomb_encoded_bit: indicates the bit value of the encoded bitmask.

[00225] Encoding of Golomb encoded data: the operation of obtaining a Golomb encoded data stream may need agreement on a convention. For example, during encoding, by adopting an exp-Golomb encoding, the process may be defined as processing each byte of the bitmask as an integer and encoding it using the ue(k) definition of the NNR specification text, as an unsigned integer k-th order exp-Golomb code, to generate the golomb_encoded stream.

[00226] Decoding of Golomb encoded data: is used when the decoding procedure is invoked to reconstruct the original bitmask. For example, each byte of the stream is first decoded using the ue(k) decoding procedure, and the bytes are put together in order to reconstruct the original bitmask. For a one-byte definition, k = 8; other k values may be possible. In yet another embodiment, other Golomb-based encoding procedures may be applied, e.g., RLE-Golomb, Golomb-Rice, and the like.
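For illustration, the following Python sketch implements one common formulation of a k-th order exp-Golomb code applied byte-wise to a bitmask; the exact ue(k) convention of the NNR specification text would be followed in practice, so this formulation is an assumed approximation rather than a normative definition.

    def exp_golomb_encode(value, k=8):
        # One common k-th order exp-Golomb formulation: the high part
        # (value >> k) is coded with an order-0 exp-Golomb prefix and the
        # k low bits are appended verbatim.
        high = (value >> k) + 1
        prefix = high.bit_length() - 1
        bits = '0' * prefix + format(high, 'b')
        bits += format(value & ((1 << k) - 1), '0{}b'.format(k))
        return bits

    def exp_golomb_decode(bits, pos=0, k=8):
        # Decode one code word starting at bit position pos;
        # returns (value, new_pos).
        prefix = 0
        while bits[pos + prefix] == '0':
            prefix += 1
        high = int(bits[pos + prefix: pos + 2 * prefix + 1], 2) - 1
        pos += 2 * prefix + 1
        low = int(bits[pos: pos + k], 2) if k else 0
        return (high << k) | low, pos + k

    # Each byte of a bitmask can be treated as an integer, coded with k = 8,
    # and the decoded bytes concatenated to reconstruct the original bitmask.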

[00227] The golomb encoded bitstream may be complemented with some extra bits, e.g., one bit to indicate the sign of the mean value, when extra information is required.

[00228] The Golomb encoding, e.g., the exponential variant, may apply to position-encoded bitmasks or other types of payloads obtained from a quantization scheme.

[00229] Topology element referencing

[00230] While referencing topology elements, unique identifiers may be used. These unique identifiers may be indexes that map to a list of topology elements. In order to signal such elements, a new topology payload identifier may be used. As an example, NNR_TPL_REFLIST may be used as a name of such an identifier that maps to a topology storage format value in the NNR topology payload unit or header. It should be noted that in the examples described below, descriptor types are given as examples, and any fixed length or variable length data type may be utilized.

[00231] nnr_topology_unit_payload may be extended as follows:

[00232] In another embodiment, topology_data may be used together with the topology_elements_ids_list(0), rather than being mutually exclusive.

[00233] topology_elements_ids_list (flag) may store the topology elements or topology element indexes. The flag value may set the mode of operation. For example, when the flag is 0, unique topology element identifiers may be listed. When the flag is 1, unique indexes of the topology elements which are stored in the payload with the type NNR_TPL_REFLIST may be listed. Each index may indicate the order or presence of the topology element in the indicated topology payload.

[00234] topology_elem_id_index_list may specify a list of unique indexes related to the topology elements listed in topology information with payload type NNR_TPL_REFLIST. The first element in the topology may have the index value of 0.
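A small, non-normative Python sketch of interpreting topology_elements_ids_list(flag) under the two modes described above is given below; the data structures and example element names are illustrative assumptions.

    def resolve_topology_elements(flag, listed_values, reflist=None):
        # flag == 0: the list carries unique topology element identifiers.
        # flag == 1: the list carries 0-based indexes into the element list
        # stored in a topology payload of type NNR_TPL_REFLIST.
        if flag == 0:
            return list(listed_values)
        return [reflist[index] for index in listed_values]

    # Example: the first element in the topology has index value 0.
    reflist = ["conv1.weight", "conv1.bias", "fc.weight"]
    assert resolve_topology_elements(1, [0, 2], reflist) == ["conv1.weight", "fc.weight"]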

[00235] Selection of the mode of topology element referencing may be signaled in the NNR model parameter set, with a flag. Such a flag may be named as mps_topology_indexed_reference_flag and the following syntax elements may be included in the NNR model parameter set:

[00236] mps_topology_indexed_reference_flag may specify whether topology elements are referenced by unique index. When set to 1, topology elements may be represented by their indexes in the topology data defined by the topology payload of type NNR_TPL_REFLIST. This flag may be set to 0 when topology information is obtained via the topology_data syntax element of the NNR topology unit.

[00237] In order to store topology element identifiers, NNR compressed data unit header syntax may be extended as follows:

[00238] topology_elem_id_index may specify a unique index value of a topology element which is signaled in topology information of payload type NNR_TPL_REFLIST. The first index may be 0 (e.g. 0-indexed).

[00239] element_id_index may specify a unique index that is used to reference a topology element.

[00240] Nnr_pruning_topology_container() may be extended to support index based topology element referencing as follows:

[00241] element_id_index may specify a unique index that is used to reference a topology element.

[00242] Any topology element referencing can be done either by a unique identifier or by index referencing.

[00243] Signalling topology changes

[00244] Topology_element_id: is a unique identifier that may define an element of the topology. The naming of the topology_element_id may include an execution order to determine the relation of one topology_element_id to other topology_element_ids.

[00245] Execution order: each element in the topology may include an order of execution that allows execution and inference of the NN. The execution order may be gated to allow a pre-determined sequence of executions, e.g., a plain feed-forward execution.

[00246] Execution_list: may contain a list of topology_element_id to be executed as a sequence after each other.

[00247] The existing nnr_prune_topology_container() may be used to signal the changes in topology caused by a pruning algorithm for NNR compression. In this example, topology changes due to a change in a task, or occurring during weight update compression, may need to be signaled.

[00248] In one embodiment, once the incremental_weight_update_flag is set to a value indicating the weight update mode of operation, the same nnr_prune_topology_container() approach may be used to signal the changes in the topology.

[00249] prune_structure: may signal information about the type of structure that may be pruned or neglected during information encoding. The prune structure may refer to a layer, a channel in a convolution layer, a row, a column, or a specific block pattern in a matrix. This information may be gated when there is only one type of structure to ignore, which may often be agreed by using only one encoding/decoding convention.

[00250] ignore_structure: may signal whether a specific structure is pruned or dropped, e.g., a layer. For example, an ignore_structure value of 1 means that a layer is not encoded in the bitstream, or that a specific block pattern is not encoded in the bitstream.

[00251] Encoding information with regard to prune_structure and ignore_structure: at the beginning of the encoding, some information about the prune_structure is signalled when the specific structure meets a specific condition, e.g., all the weight values or weight update values of a layer are zero. Then the ignore_structure may be sent at the beginning of each pattern to indicate whether the specific structure is ignored or included.

[00252] Decoding and reconstruction: after decoding, the reconstruction uses the prune_structure and ignore_structure to reconstruct the original data.

[00253] In an alternative embodiment, a specific mechanism that requires a new topology container is proposed.
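The following non-normative Python sketch illustrates the prune_structure/ignore_structure gating described above for the simple case where a structure (e.g., a layer) is skipped when its weight-update values are all zero; the function names and the list-based representation are assumptions of this sketch.

    def encode_with_ignore_flags(structure_updates):
        # One ignore_structure flag per structure; a structure whose
        # weight-update values are all zero is not encoded in the payload.
        ignore_flags, payload = [], []
        for update in structure_updates:
            if all(value == 0 for value in update):
                ignore_flags.append(1)
            else:
                ignore_flags.append(0)
                payload.append(update)
        return ignore_flags, payload

    def decode_with_ignore_flags(ignore_flags, payload, sizes):
        # Reconstruction: ignored structures are restored as all-zero updates.
        decoded, encoded = [], iter(payload)
        for flag, size in zip(ignore_flags, sizes):
            decoded.append([0] * size if flag else next(encoded))
        return decoded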

[00254] NNR_TPL_WUPD: NNR topology weight update may be defined as a topology storage format to indicate a topology update associated with a weight update.

[00255] Necessary payload and decoding procedures may be invoked when the NNR_TPL_WUPD payload is present in the nnr_topology_unit_payload. The payload corresponding to the NNR_TPL_WUPD may include:

- num_element_ids: represents the number of elements for which a topology modification is signaled.

- element_ids: represents an array of identifiers, where each identifier corresponds to a specific element that may be modified in the topology as a consequence of a topology modification.

- weight_tensor_dimension: is a list of lists, where each internal list is a list of new dimensions of the weight vector corresponding to the respective element id in element_ids.

- reorganize_flag: is a flag to indicate whether the existing weight vector may be reorganized according to the new dimensions, or whether a corresponding weight vector may be provided via some NNR data payload. When the reorganize flag signals a reorganization, the payload may contain a mapping to indicate how the new weight tensor is obtained from the existing weight tensor.

- weight_mapping: is a mapping that indicates how an existing weight is mapped to a new topology element as a consequence of dimension changes of the element. Such a mapping may be a bitmask with a specific processing order, for example row-major matrix processing, to indicate which weights are kept at which locations in the new weight tensor. The bitmask may be further compressed with some of the previously mentioned bitmask encoding schemes.

- topology_compressed: is used to indicate that the information associated with the topology update may be compressed, or follows a specific encoding and decoding procedure to be invoked to decode the topology information.
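As a non-normative illustration, the following Python sketch shows one possible interpretation of applying weight_mapping during an NNR_TPL_WUPD update, where the bitmask selects, in row-major order, which existing weights are carried into the new tensor and the remaining positions start at zero; this interpretation and the helper name are assumptions of the sketch, not a specified behaviour.

    import numpy as np

    def reorganize_weights(old_tensor, new_dims, weight_mapping_bitmask):
        # Keep the old weights selected by the row-major bitmask and place
        # them, in order, into a new tensor with dimensions new_dims.
        kept = old_tensor.ravel(order="C")[np.asarray(weight_mapping_bitmask) == 1]
        new_tensor = np.zeros(int(np.prod(new_dims)), dtype=old_tensor.dtype)
        new_tensor[:kept.size] = kept
        return new_tensor.reshape(new_dims)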

[00256] FIG. 9 is an example apparatus 900, which may be implemented in hardware, configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein. Some examples of the apparatus 900 include, but are not limited to, apparatus 50, client device 604, and apparatus 700. The apparatus 900 comprises a processor 902, at least one non-transitory memory 904 including computer program code 905, wherein the at least one memory 904 and the computer program code 905 are configured to, with the at least one processor 902, cause the apparatus 900 to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set 906, based on the examples described herein.

[00257] The apparatus 900 optionally includes a display 908 that may be used to display content during rendering. The apparatus 900 optionally includes one or more network (NW) interfaces (I/F(s)) 910. The NW I/F(s) 910 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique. The NW I/F(s) 910 may comprise one or more transmitters and one or more receivers. The NW I/F(s) 910 may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitry(ies) and one or more antennas.

[00258] The apparatus 900 may be a remote, virtual or cloud apparatus. The apparatus 900 may be either a coder or a decoder, or both a coder and a decoder. The at least one memory 904 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory, and removable memory. The at least one memory 904 may comprise a database for storing data. The apparatus 900 need not comprise each of the features mentioned, or may comprise other features as well. The apparatus 900 may correspond to or be another embodiment of the apparatus 50 shown in FIG. 1 and FIG. 2, or any of the apparatuses shown in FIG. 3. The apparatus 900 may correspond to or be another embodiment of the apparatuses shown in FIG. 12, including UE 80, RAN node 170, or network element(s) 190.

[00259] FIG. 10 is an example method 1000 for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment. As shown in block 906 of FIG. 9, the apparatus 900 includes means, such as the processing circuitry 902 or the like, for implementing mechanisms for introducing a weight update compression interpretation into the NNR bitstream. At 1002, the method 1000 includes encoding or decoding a high-level bitstream syntax for at least one neural network. At 1004, the method 1000 includes, wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network. At 1006, the method 1000 includes, wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units. At 1008, the method 1000 includes, wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[00260] In an embodiment, the one or more mechanisms may include at least one of a mechanism to signal an incremental weight update compression mode of operation, a mechanism to introduce a weight update unit type among the at least one information unit, a mechanism to signal mechanisms required for dithering algorithms, a mechanism to signal a global random seed, a mechanism to signal whether a model comprises an inference friendly quantized model, a mechanism to signal incremental weight update quantization algorithms, a mechanism to signal federated averaging weight update algorithm, a mechanism to signal supporting down-stream compression support, a mechanism to signal an asynchronous incremental weight update mode, a mechanism to identify a source of information, a mechanism to identify an operation, a mechanism to define global codebook approaches for a weight update quantization, a mechanism to define extension to one or more data payload types, a mechanism to define extension to a payload, a mechanism to define a syntax and semantics of one or more quantization algorithms, a mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs, or a mechanism to identify a syntax and semantics relevant to a topology change.

[00261] FIG. 11 is an example method 1100 for defining a validation set performance, in accordance with an embodiment. As shown in block 906 of FIG. 9, the apparatus 900 includes means, such as the processing circuitry 902 or the like, for defining a validation set performance. At 1102, the method 1100 includes defining a validation set performance, wherein the validation set performance comprises or specifies one or more of the following. At 1104, the method 1100 includes, wherein the validation set performance includes a performance indication determined based on a validation set. Additionally or alternatively, at 1106, the method 1100 includes, wherein the validation set performance includes an indication of a performance level achieved by a weight-update associated with the validation set performance.

[00262] In an embodiment, the validation set performance provides information on how to use the weight-update received from a device. In an example, the weight-updates are multiplied by multiplier values derived from the validation set performance values received from the device.
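By way of illustration, the following Python sketch shows one possible server-side aggregation in which each received weight-update is scaled by a multiplier derived from its signalled validation set performance; the simple normalization used here is an assumption of this sketch, not a specified behaviour.

    def aggregate_weight_updates(updates, validation_performances):
        # Derive a multiplier for each received weight-update from its
        # signalled validation set performance (here simply normalized) and
        # form the aggregated update as the weighted sum.
        total = sum(validation_performances)
        multipliers = [p / total for p in validation_performances]
        length = len(updates[0])
        return [sum(m * u[i] for m, u in zip(multipliers, updates))
                for i in range(length)]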

[00263] In an embodiment, the method 1100 may also include defining a weight reference ID, where the weight reference ID uniquely identifies weights for a base model.

[00264] In an embodiment, the method 1100 may also include defining a source ID, where the source ID uniquely identifies a source of information.

[00265] Turning to FIG. 12, this figure shows a block diagram of one possible and non-limiting example in which the examples may be practiced. A user equipment (UE) 110, radio access network (RAN) node 170, and network element(s) 190 are illustrated. In the example of FIG. 12, the user equipment (UE) 110 is in wireless communication with a wireless network 100. A UE is a wireless device that can access the wireless network 100. The UE 110 includes one or more processors 120, one or more memories 125, and one or more transceivers 130 interconnected through one or more buses 127. Each of the one or more transceivers 130 includes a receiver, Rx, 132 and a transmitter, Tx, 133. The one or more buses 127 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, and the like. The one or more transceivers 130 are connected to one or more antennas 128. The one or more memories 125 include computer program code 123. The UE 110 includes a module 140, comprising one of or both parts 140-1 and/or 140-2, which may be implemented in a number of ways. The module 140 may be implemented in hardware as module 140-1, such as being implemented as part of the one or more processors 120. The module 140-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the module 140 may be implemented as module 140-2, which is implemented as computer program code 123 and is executed by the one or more processors 120. For instance, the one or more memories 125 and the computer program code 123 may be configured to, with the one or more processors 120, cause the user equipment 110 to perform one or more of the operations as described herein. The UE 110 communicates with RAN node 170 via a wireless link 111.

[00266] The RAN node 170 in this example is a base station that provides access by wireless devices such as the UE 110 to the wireless network 100. The RAN node 170 may be, for example, a base station for 5G, also called New Radio (NR). In 5G, the RAN node 170 may be a NG-RAN node, which is defined as either a gNB or an ng-eNB. A gNB is a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to a 5GC (such as, for example, the network element(s) 190). The ng-eNB is a node providing E-UTRA user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC. The NG-RAN node may include multiple gNBs, which may also include a central unit (CU) (gNB-CU) 196 and distributed unit(s) (DUs) (gNB-DUs), of which DU 195 is shown. Note that the DU may include or be coupled to and control a radio unit (RU). The gNB-CU is a logical node hosting radio resource control (RRC), SDAP and PDCP protocols of the gNB or RRC and PDCP protocols of the en-gNB that controls the operation of one or more gNB-DUs. The gNB-CU terminates the F1 interface connected with the gNB-DU. The F1 interface is illustrated as reference 198, although reference 198 also illustrates a link between remote elements of the RAN node 170 and centralized elements of the RAN node 170, such as between the gNB-CU 196 and the gNB-DU 195. The gNB-DU is a logical node hosting RLC, MAC and PHY layers of the gNB or en-gNB, and its operation is partly controlled by gNB-CU. One gNB-CU supports one or multiple cells. One cell is supported by only one gNB-DU. The gNB-DU terminates the F1 interface 198 connected with the gNB-CU. Note that the DU 195 is considered to include the transceiver 160, for example, as part of a RU, but some examples of this may have the transceiver 160 as part of a separate RU, for example, under control of and connected to the DU 195. The RAN node 170 may also be an eNB (evolved NodeB) base station, for LTE (long term evolution), or any other suitable base station or node.

[00267] The RAN node 170 includes one or more processors 152, one or more memories 155, one or more network interfaces (N/W I/F(s)) 161, and one or more transceivers 160 interconnected through one or more buses 157. Each of the one or more transceivers 160 includes a receiver, Rx, 162 and a transmitter, Tx, 163. The one or more transceivers 160 are connected to one or more antennas 158. The one or more memories 155 include computer program code 153. The CU 196 may include the processor(s) 152, memories 155, and network interfaces 161. Note that the DU 195 may also contain its own memory/memories and processor(s), and/or other hardware, but these are not shown.

[00268] The RAN node 170 includes a module 150, comprising one of or both parts 150-1 and/or 150-2, which may be implemented in a number of ways. The module 150 may be implemented in hardware as module 150-1, such as being implemented as part of the one or more processors 152. The module 150-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the module 150 may be implemented as module 150-2, which is implemented as computer program code 153 and is executed by the one or more processors 152. For instance, the one or more memories 155 and the computer program code 153 are configured to, with the one or more processors 152, cause the RAN node 170 to perform one or more of the operations as described herein. Note that the functionality of the module 150 may be distributed, such as being distributed between the DU 195 and the CU 196, or be implemented solely in the DU 195.

[00269] The one or more network interfaces 161 communicate over a network such as via the links 176 and 131. Two or more gNBs 170 may communicate using, for example, link 176. The link 176 may be wired or wireless or both and may implement, for example, an Xn interface for 5G, an X2 interface for LTE, or other suitable interface for other standards.

[00270] The one or more buses 157 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, wireless channels, and the like. For example, the one or more transceivers 160 may be implemented as a remote radio head (RRH) 195 for LTE or a distributed unit (DU) 195 for gNB implementation for 5G, with the other elements of the RAN node 170 possibly being physically in a different location from the RRH/DU, and the one or more buses 157 may be implemented in part as, for example, fiber optic cable or other suitable network connection to connect the other elements (for example, a central unit (CU), gNB-CU) of the RAN node 170 to the RRH/DU 195. Reference 198 also indicates those suitable network link(s).

[00271] It is noted that description herein indicates that ‘cells’ perform functions, but it should be clear that equipment which forms the cell may perform the functions. The cell makes up part of a base station. That is, there can be multiple cells per base station. For example, there may be three cells for a single carrier frequency and associated bandwidth, each cell covering one-third of a 360 degree area so that the single base station’s coverage area covers an approximate oval or circle. Furthermore, each cell can correspond to a single carrier and a base station may use multiple carriers. So if there are three 120 degree cells per carrier and two carriers, then the base station has a total of 6 cells.

[00272] The wireless network 100 may include a network element or elements 190 that may include core network functionality, and which provides connectivity via a link or links 181 with a further network, such as a telephone network and/or a data communications network (for example, the Internet). Such core network functionality for 5G may include access and mobility management function(s) (AMF(s)) and/or user plane functions (UPF(s)) and/or session management function(s) (SMF(s)). Such core network functionality for LTE may include MME (Mobility Management Entity)/SGW (Serving Gateway) functionality. These are merely example functions that may be supported by the network element(s) 190, and note that both 5G and LTE functions might be supported. The RAN node 170 is coupled via a link 131 to the network element 190. The link 131 may be implemented as, for example, an NG interface for 5G, or an S1 interface for LTE, or other suitable interface for other standards. The network element 190 includes one or more processors 175, one or more memories 171, and one or more network interfaces (N/W I/F(s)) 180, interconnected through one or more buses 185. The one or more memories 171 include computer program code 173. The one or more memories 171 and the computer program code 173 are configured to, with the one or more processors 175, cause the network element 190 to perform one or more operations.

[00273] The wireless network 100 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities that result from the network virtualization are still implemented, at some level, using hardware such as processors 152 or 175 and memories 155 and 171, and also such virtualized entities create technical effects.

[00274] The computer readable memories 125, 155, and 171 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The computer readable memories 125, 155, and 171 may be means for performing storage functions. The processors 120, 152, and 175 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multi-core processor architecture, as non-limiting examples. The processors 120, 152, and 175 may be means for performing functions, such as controlling the UE 110, RAN node 170, network element(s) 190, and other functions as described herein.

[00275] In general, the various embodiments of the user equipment 110 can include, but are not limited to, cellular telephones such as smart phones, tablets, personal digital assistants (PDAs) having wireless communication capabilities, portable computers having wireless communication capabilities, image capture devices such as digital cameras having wireless communication capabilities, gaming devices having wireless communication capabilities, music storage and playback appliances having wireless communication capabilities, Internet appliances permitting wireless Internet access and browsing, tablets with wireless communication capabilities, as well as portable units or terminals that incorporate combinations of such functions.

[00276] One or more of modules 140-1, 140-2, 150-1, and 150-2 may be configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein. Computer program code 173 may also be configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein.

[00277] As described above, FIGs. 10 and 11 include flowcharts of an apparatus (e.g. 50, 602, 604, 700, or 900), method, and computer program product according to certain example embodiments. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory (e.g. 58, 125, 704, or 904) of an apparatus employing an embodiment of the present invention and executed by processing circuitry (e.g. 56, 120, 702 or 902) of the apparatus. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart blocks. These computer program instructions may also be stored in a computer-readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks.

[00278] A computer program product is therefore defined in those instances in which the computer program instructions, such as computer-readable program code portions, are stored by at least one non-transitory computer-readable storage medium with the computer program instructions, such as the computer-readable program code portions, being configured, upon execution, to perform the functions described above, such as in conjunction with the flowchart(s) of FIGs. 10 and 11. In other embodiments, the computer program instructions, such as the computer-readable program code portions, need not be stored or otherwise embodied by a non-transitory computer-readable storage medium, but may, instead, be embodied by a transitory medium with the computer program instructions, such as the computer-readable program code portions, still being configured, upon execution, to perform the functions described above.

[00279] Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.

[00280] In some embodiments, certain ones of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, additions, or amplifications to the operations above may be performed in any order and in any combination.

[00281] In the above, some embodiments have been described in relation to a particular type of a parameter set (namely adaptation parameter set). It needs to be understood, however, that embodiments may be realized with any type of parameter set or other syntax structure in the bitstream.

[00282] In the above, some example embodiments have been described with the help of syntax of the bitstream. It needs to be understood, however, that the corresponding structure and/or computer program may reside at the encoder for generating the bitstream and/or at the decoder for decoding the bitstream.

[00283] In the above, where example embodiments have been described with reference to an encoder, it needs to be understood that the resulting bitstream and the decoder have corresponding elements in them. Likewise, where example embodiments have been described with reference to a decoder, it needs to be understood that the encoder has structure and/or computer program for generating the bitstream to be decoded by the decoder.

[00284] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

[00285] It should be understood that the foregoing description is only illustrative. Various alternatives and modifications may be devised by those skilled in the art. For example, features recited in the various dependent claims may be combined with each other in any suitable combination(s). In addition, features from different embodiments described above may be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.

SYNTAX AND SEMANTICS FOR WEIGHT UPDATE COMPRESSION OF

NEURAL NETWORKS

TECHNICAL FIELD

[0001] The examples and non-limiting embodiments relate generally to multimedia transport and neural networks, and more particularly, to syntax and semantics for incremental weight update compression of neural networks.

BACKGROUND

[0002] It is known to provide standardized formats for exchange of neural networks.

SUMMARY

[0003] An example apparatus includes at least one processor; and at least one non- transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: encode or decode a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; and wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0004] The example apparatus may further include, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal federated averaging weight update algorithm; a mechanism to signal supporting down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define extension to one or more data payload types; a mechanism to define extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

[0005] The example apparatus may further include, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

[0006] The example apparatus may further include, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.

[0007] The example apparatus may further include, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

[0008] The example apparatus may further include, wherein the at least one information unit comprises at least one NNR unit type.

[0009] The example apparatus may further include, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

[0010] The example apparatus may further include, wherein the one or more information unit comprises a global random seed used for encoding and decoding computation, when the dithering flag is set.

[0011] The example apparatus may further include, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information unit.

[0012] The example apparatus may further include, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

[0013] The example apparatus may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

[0014] The example apparatus may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

[0015] The example apparatus may further include, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.

[0016] The example apparatus may further include, wherein the mechanism to signal supporting down-stream compression support comprises downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

[0017] The example apparatus may further include, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

[0018] The example apparatus may further include, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

[0019] The example apparatus may further include, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

[0020] The example apparatus may further include, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to a compressed payload data types.

[0021] The example apparatus may further include, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and encoded bitstream of predetermined algorithm.

[0022] The example apparatus may further include, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

[0023] The example apparatus may further include, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.

[0024] The example apparatus may further include, wherein the mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a golomb encoding/decoding mechanism.

[0025] The example apparatus may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

[0026] The example apparatus may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

[0027] The example apparatus may further include, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

[0028] The example apparatus may further include, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein when the reorganize flag signals a reorganization the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping that indicates how an existing weight is mapped to an updated topology element; or topology compressed is used to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.

[0029] An example method includes encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0030] The example method may further include, wherein the one or more mechanisms comprise at least one of: a mechanism to signal an incremental weight update compression mode of operation; a mechanism to introduce a weight update unit type among the at least one information unit; a mechanism to signal mechanisms required for dithering algorithms; a mechanism to signal a global random seed; a mechanism to signal whether a model comprises an inference friendly quantized model; a mechanism to signal incremental weight update quantization algorithms; a mechanism to signal federated averaging weight update algorithm; a mechanism to signal supporting down-stream compression support; a mechanism to signal an asynchronous incremental weight update mode; a mechanism to identify a source of information; a mechanism to identify an operation; a mechanism to define global codebook approaches for a weight update quantization; a mechanism to define extension to one or more data payload types; a mechanism to define extension to a payload; a mechanism to define a syntax and semantics of one or more quantization algorithms; a mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs; or a mechanism to identify a syntax and semantics relevant to a topology change.

[0031] The example method may further include, wherein the mechanism to signal the incremental weight update compression mode of operation comprises an incremental weight update flag to signal or indicate to a decoder that the NNR bitstream is associated with or corresponds to a weight update compression and not a weight compression.

[0032] The example method may further include, wherein the incremental weight update flag further signals or indicates to the decoder to invoke an associated decoding mechanism upon receiving data and to decode associated payload types.

[0033] The example method may further include, wherein the mechanism to introduce the weight update unit type among the at least one information unit comprises a weight update compression data unit type comprising information associated with weight update strategies.

[0034] The example method may further include, wherein the at least one information unit includes at least one NNR unit type.

[0035] The example method may further include, wherein the mechanism to signal dithering algorithms comprises a dithering flag to support dithering techniques in quantization and encoding pipelines.

[0036] The example method may further include, wherein the one or more information unit comprises a global random seed used for encoding and decoding computation, when the dithering flag is set.

[0037] The example method may further include, wherein the mechanism to signal a global random seed comprises a random seed flag, comprising a global random seed, to be a part of the one or more information unit.

[0038] The example method may further include, wherein the mechanism to signal whether a model comprises an inference friendly quantized model comprises an inference friendly flag.

[0039] The example method may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantized weight update flag to indicate whether the weight updates are quantized or not.

[0040] The example method may further include, wherein the mechanism to signal incremental weight update quantization algorithms comprises a quantization algorithm identity to indicate that no quantization algorithm was applied to the weight updates.

[0041] The example method may further include, wherein the mechanism to signal the federated averaging weight update algorithm comprises signaling a predetermined federated algorithm identity.

[0042] The example method may further include, wherein the mechanism to signal supporting down-stream compression support comprises downstream flag to indicate whether a downstream compression is used, and wherein the downstream refers to the communication direction from a server to one or more client devices.

[0043] The example method may further include, wherein the mechanism to signal an asynchronous incremental weight update mode comprises an asynchronous flag to indicate whether a client device is permitted to perform an asynchronous operation, based on the capabilities of the client device.

[0044] The example method may further include, wherein the mechanism to identify the source of information comprises a source identity, wherein the source comprises at least one of a client device or a server.

[0045] The example method may further include, wherein the mechanism to identify an operation comprises an operation identity used for communication of specific information.

[0046] The example method may further include, wherein the mechanism to define the extension to the one or more data payload types comprises adding an incremental weight update type to a compressed payload data types.

[0047] The example method may further include, wherein the mechanism to define the extension to the payload comprises defining an incremental weight update payload comprising semantics and encoded bitstream of predetermined algorithm.

[0048] The example method may further include, wherein the mechanism to define the syntax and semantics of one or more quantization algorithms comprises using a sign stochastic gradient descent (sgd) quantization to generate a bitmask indicating changes in the weight update compression.

[0049] The example method may further include, wherein a payload for the sign sgd quantization comprises a sign sgd quantization payload.

[0050] The example method may further include, wherein the mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs comprises a run-length encoding or decoding mechanism, a position or length encoding or decoding mechanism, or a golomb encoding/decoding mechanism.

[0051] The example method may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises using a topology container to signal changes in a topology, when an incremental weight update flag is set.

[0052] The example method may further include, wherein the mechanism to identify a syntax and semantics associated with a topology change comprises a topology weight update container for storing a topology format to indicate a topology update associated with a weight update.

[0053] The example method may further include, wherein a required payload and decoding procedures are invoked when the topology weight update container is present in a topology unit payload.

[0054] The example method may further include, wherein a required payload comprises one or more of: a number element identity comprising a number of elements for which a topology modification is signaled; an element identity comprising an array of identifiers, wherein each identifier is associated with an element that is modified due to the topology update; a weight tensor dimension comprising a list comprising one or more lists, wherein each list of the one or more lists comprises updated dimensions of a weight vector associated with the element identity; a reorganize flag to indicate whether an existing weight vector is reorganized according to the updated dimensions or an associated weight vector, wherein when the reorganize flag signals a reorganization the payload contains a mapping to indicate how an updated weight tensor is obtained from an existing weight tensor; a weight mapping that indicates how an existing weight is mapped to an updated topology element; or topology compressed is used to indicate whether information associated with the topology update is capable of being compressed or follows a specific encoding and decoding procedure to be invoked in order to decode the topology information.

[0055] An example computer readable medium includes program instructions for causing an apparatus to perform at least the following: encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0055] An example computer readable medium includes program instructions for causing an apparatus to perform at least the following: encoding or decoding a high-level bitstream syntax for at least one neural network; wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network; wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units; and wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[0056] The example computer readable medium may further include, wherein the computer readable medium comprises a non-transitory computer readable medium.

[0057] The example computer readable medium may further include, wherein the computer readable medium further causes the apparatus to perform the methods as described in any of the previous paragraphs.

BRIEF DESCRIPTION OF THE DRAWINGS

[0058] The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:

[0059] FIG. 1 shows schematically an electronic device employing embodiments of the examples described herein.

[0060] FIG. 2 shows schematically a user equipment suitable for employing embodiments of the examples described herein.

[0061] FIG. 3 further shows schematically electronic devices employing embodiments of the examples described herein connected using wireless and wired network connections.

[0062] FIG. 4 shows schematically a block chart of an encoder on a general level.

[0063] FIG. 5 is a block diagram showing an interface between an encoder and a decoder in accordance with the examples described herein.

[0064] FIG. 6 illustrates a system configured to support streaming of media data from a source to a client device.

[0065] FIG. 7 is a block diagram of an apparatus that may be configured in accordance with an example embodiment.

[0066] FIG. 8 illustrates example structure of a neural network representation (NNR) bitstream and an NNR unit, in accordance with an embodiment.

[0067] FIG. 9 is an example apparatus configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment.

[0068] FIG. 10 is an example method for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment.

[0069] FIG. 11 is an example method 1100 for defining a validation set performance, in accordance with an embodiment.

[0070] FIG. 12 is a block diagram of one possible and non-limiting system in which the example embodiments may be practiced.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

[0071] The following acronyms and abbreviations that may be found in the specification and/or the drawing figures are defined as follows:

3GP    3GPP file format
3GPP    3rd Generation Partnership Project
3GPP TS    3GPP technical specification
4CC    four character code
4G    fourth generation of broadband cellular network technology
5G    fifth generation cellular network technology
5GC    5G core network
AI    artificial intelligence
AIoT    AI-enabled IoT
a.k.a.    also known as
AMF    access and mobility management function
AVC    advanced video coding
CABAC    context-adaptive binary arithmetic coding
CDMA    code-division multiple access
CE    core experiment
CU    central unit
DASH    dynamic adaptive streaming over HTTP
DCT    discrete cosine transform
DSP    digital signal processor
DU    distributed unit
eNB (or eNodeB)    evolved Node B (for example, an LTE base station)
EN-DC    E-UTRA-NR dual connectivity
en-gNB or En-gNB    node providing NR user plane and control plane protocol terminations towards the UE, and acting as secondary node in EN-DC
E-UTRA    evolved universal terrestrial radio access, for example, the LTE radio access technology
FDMA    frequency division multiple access
f(n)    fixed-pattern bit string using n bits written (from left to right) with the left bit first
F1    interface between CU and DU
F1-C    F1 control interface
gNB (or gNodeB)    base station for 5G/NR, for example, a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC
GSM    Global System for Mobile communications
H.222.0    MPEG-2 Systems; formally known as ISO/IEC 13818-1 and as ITU-T Rec. H.222.0
H.26x    family of video coding standards in the domain of the ITU-T
HLS    high level syntax
IBC    intra block copy
ID    identifier
IEC    International Electrotechnical Commission
IEEE    Institute of Electrical and Electronics Engineers
I/F    interface
IMD    integrated messaging device
IMS    instant messaging service
IoT    internet of things
IP    internet protocol
ISO    International Organization for Standardization
ISOBMFF    ISO base media file format
ITU    International Telecommunication Union
ITU-T    ITU Telecommunication Standardization Sector
JPEG    joint photographic experts group
LTE    long term evolution
LZMA    Lempel-Ziv-Markov chain compression
LZMA2    simple container format that can include both uncompressed and LZMA data
LZO    Lempel-Ziv-Oberhumer compression
LZW    Lempel-Ziv-Welch compression
MAC    medium access control
mdat    MediaDataBox
MME    mobility management entity
MMS    multimedia messaging service
moov    MovieBox
MP4    file format for MPEG-4 Part 14 files
MPEG    moving picture experts group
MPEG-2    H.222.0/H.262 as defined by the ITU
MPEG-4    audio and video coding standard for ISO/IEC 14496
MSB    most significant bit
NAL    network abstraction layer
NDU    NNR compressed data unit
ng or NG    new generation
ng-eNB or NG-eNB    new generation eNB
NN    neural network
NNEF    neural network exchange format
NNR    neural network representation
NR    new radio (5G radio)
N/W or NW    network
ONNX    Open Neural Network eXchange
protobuf    protocol buffers
PC    personal computer
PDA    personal digital assistant
PDCP    packet data convergence protocol
PHY    physical layer
PID    packet identifier
PLC    power line communication
PNG    portable network graphics
PSNR    peak signal-to-noise ratio
RAM    random access memory
RAN    radio access network
RFC    request for comments
RFID    radio frequency identification
RLC    radio link control
RRC    radio resource control
RRH    remote radio head
RU    radio unit
Rx    receiver
SDAP    service data adaptation protocol
SGD    stochastic gradient descent
SGW    serving gateway
SMF    session management function
SMS    short messaging service
st(v)    null-terminated string encoded as UTF-8 characters as specified in ISO/IEC 10646
SVC    scalable video coding
S1    interface between eNodeBs and the EPC
TCP-IP    transmission control protocol-internet protocol
TDMA    time-divisional multiple access
trak    TrackBox
TS    transport stream
TUC    technology under consideration
TV    television
Tx    transmitter
UE    user equipment
ue(v)    unsigned integer Exp-Golomb-coded syntax element with the left bit first
UICC    Universal Integrated Circuit Card
UMTS    Universal Mobile Telecommunications System
u(n)    unsigned integer using n bits
UPF    user plane function
URI    uniform resource identifier
URL    uniform resource locator
UTF-8    8-bit Unicode Transformation Format
WLAN    wireless local area network
X2    interconnecting interface between two eNodeBs in LTE network
Xn    interface between two NG-RAN nodes

[0072] Some embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. Indeed, various embodiments of the invention may be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will satisfy applicable legal requirements. Like reference numerals refer to like elements throughout. As used herein, the terms ‘data,’ ‘content,’ ‘information,’ and similar terms may be used interchangeably to refer to data capable of being transmitted, received and/or stored in accordance with embodiments of the present invention. Thus, use of any such terms should not be taken to limit the spirit and scope of embodiments of the present invention.

[0073] Additionally, as used herein, the term ‘circuitry’ refers to (a) hardware-only circuit implementations (e.g., implementations in analog circuitry and/or digital circuitry); (b) combinations of circuits and computer program product(s) comprising software and/or firmware instructions stored on one or more computer readable memories that work together to cause an apparatus to perform one or more functions described herein; and (c) circuits, such as, for example, a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation even if the software or firmware is not physically present. This definition of ‘circuitry’ applies to all uses of this term herein, including in any claims. As a further example, as used herein, the term ‘circuitry’ also includes an implementation comprising one or more processors and/or portion(s) thereof and accompanying software and/or firmware. As another example, the term ‘circuitry’ as used herein also includes, for example, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, other network device, and/or other computing device.

[0074] As defined herein, a ‘computer-readable storage medium,’ which refers to a non-transitory physical storage medium (e.g., volatile or non-volatile memory device), can be differentiated from a ‘computer-readable transmission medium,’ which refers to an electromagnetic signal.

[0075] A method, apparatus and computer program product are provided in accordance with an example embodiment in order to implement one or more mechanisms for introducing a weight update compression interpretation into the neural network representation (NNR) bitstream.

[0076] The following describes in detail suitable apparatus and possible implementation of one or more mechanisms for introducing a weight update compression interpretation into the neural network representation (NNR) bitstream. In this regard reference is first made to FIG. 1 and FIG. 2, where FIG. 1 shows an example block diagram of an apparatus 50. The apparatus may be an Internet of Things (IoT) apparatus configured to perform various functions, for example, gathering information by one or more sensors, receiving or transmitting information, analyzing information gathered or received by the apparatus, or the like. The apparatus may comprise a video coding system, which may incorporate a codec. FIG. 2 shows a layout of an apparatus according to an example embodiment. The elements of FIG. 1 and FIG. 2 will be explained next.

[0077] The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system, a sensor device, a tag, or a lower power device. However, it would be appreciated that embodiments of the examples described herein may be implemented within any electronic device or apparatus which may process data by neural networks.

[0078] The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 may further comprise a display 32, e.g., in the form of a liquid crystal display, light emitting diode display, organic light emitting diode display, and the like. In other embodiments of the examples described herein the display may be any suitable display technology suitable to display media or multimedia content, for example, an image or a video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the examples described herein any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.

[0079] The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the examples described herein may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the examples described herein the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera capable of recording or capturing images and/or video. The apparatus 50 may further comprise an infrared port for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/firewire wired connection.

[0080] The apparatus 50 may comprise a controller 56, a processor or processor circuitry for controlling the apparatus 50. The controller 56 may be connected to a memory 58 which in embodiments of the examples described herein may store both data in the form of image, audio data, video data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and/or decoding of audio, image, and/or video data or assisting in coding and/or decoding carried out by the controller.

[0081] The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC and UICC reader for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.

[0082] The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals, for example, for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and/or for receiving radio frequency signals from other apparatus(es).

[0083] The apparatus 50 may comprise a camera 42 capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video image data for processing from another device prior to transmission and/or storage. The apparatus 50 may also receive either wirelessly or by a wired connection the image for coding/decoding. The structural elements of apparatus 50 described above represent examples of means for performing a corresponding function.

[0084] With respect to FIG. 3, an example of a system within which embodiments of the examples described herein can be utilized is shown. The system 10 comprises multiple communication devices which can communicate through one or more networks. The system 10 may comprise any combination of wired or wireless networks including, but not limited to a wireless cellular telephone network (such as a GSM, UMTS, CDMA, LTE, 4G, 5G network, and the like), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth® personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.

[0085] The system 10 may include both wired and wireless communication devices and/or apparatus 50 suitable for implementing embodiments of the examples described herein.

[0086] For example, the system shown in FIG. 3 shows a mobile telephone network 11 and a representation of the Internet 28. Connectivity to the Internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.

[0087] The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.

[0088] The embodiments may also be implemented in a set-top box; for example, a digital TV receiver, which may/may not have a display or wireless capabilities, in tablets or (laptop) personal computers (PC), which have hardware and/or software to process neural network data, in various operating systems, and in chipsets, processors, DSPs and/or embedded systems offering hardware/software based coding.

[0089] Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.

[0090] The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time divisional multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, 3GPP Narrowband IoT and any similar wireless communication technology. A communications device involved in implementing various embodiments of the examples described herein may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.

[0091] In telecommunications and data networks, a channel may refer either to a physical channel or to a logical channel. A physical channel may refer to a physical transmission medium such as a wire, whereas a logical channel may refer to a logical connection over a multiplexed medium, capable of conveying several logical channels. A channel may be used for conveying an information signal, for example a bitstream, from one or several senders (or transmitters) to one or several receivers.

[0092] The embodiments may also be implemented in so-called internet of things (IoT) devices. The IoT may be defined, for example, as an interconnection of uniquely identifiable embedded computing devices within the existing Internet infrastructure. The convergence of various technologies has enabled, and may enable, many fields of embedded systems, such as wireless sensor networks, control systems, home/building automation, and the like, to be included in the Internet of Things (IoT). In order to utilize the Internet, IoT devices are provided with an IP address as a unique identifier. The IoT devices may be provided with a radio transmitter, such as a WLAN or Bluetooth transmitter, or an RFID tag. Alternatively, IoT devices may have access to an IP-based network via a wired network, such as an Ethernet-based network or a power-line connection (PLC).

[0093] An MPEG-2 transport stream (TS), specified in ISO/IEC 13818-1 or equivalently in ITU-T Recommendation H.222.0, is a format for carrying audio, video, and other media as well as program metadata or other metadata, in a multiplexed stream. A packet identifier (PID) is used to identify an elementary stream (a.k.a. packetized elementary stream) within the TS. Hence, a logical channel within an MPEG-2 TS may be considered to correspond to a specific PID value.

[0094] Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF) and file format for NAL unit structured video (ISO/IEC 14496-15), which derives from the ISOBMFF.

[0095] A video codec consists of an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can decompress the compressed video representation back into a viewable form. A video encoder and/or a video decoder may also be separate from each other, for example, need not form a codec. Typically, an encoder discards some information in the original video sequence in order to represent the video in a more compact form (e.g., at lower bitrate).

[0096] Typical hybrid video encoders, for example, many encoder implementations of ITU-T H.263 and H.264, encode the video information in two phases. Firstly, pixel values in a certain picture area (or ‘block’) are predicted, for example, by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, for example, the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (for example, Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).

[0097] In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (IBC; a.k.a. intra-block-copy prediction and current picture referencing), prediction is applied similarly to temporal prediction, but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.

[0098] Inter prediction, which may also be referred to as temporal prediction, motion compensation, or motion-compensated prediction, reduces temporal redundancy. In inter prediction the sources of prediction are previously decoded pictures. Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, for example, either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.

[0099] One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently when they are predicted first from spatially or temporally neighboring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.

[00100] FIG. 4 shows a block diagram of a general structure of a video encoder. FIG. 4 presents an encoder for two layers, but it would be appreciated that presented encoder may be similarly extended to encode more than two layers. FIG. 4 illustrates a video encoder comprising a first encoder section 500 for a base layer and a second encoder section 502 for an enhancement layer. Each of the first encoder section 500 and the second encoder section 502 may comprise similar elements for encoding incoming pictures. The encoder sections 500, 502 may comprise a pixel predictor 302, 402, prediction error encoder 303, 403 and prediction error decoder 304, 404. FIG. 4 also shows an embodiment of the pixel predictor 302, 402 as comprising an inter-predictor 306, 406, an intra predictor 308, 408, a mode selector 310, 410, a filter 316, 416, and a reference frame memory 318, 418. The pixel predictor 302 of the first encoder section 500 receives base layer image(s) 300 of a video stream to be encoded at both the inter-predictor 306 (which determines the difference between the image and a motion compensated reference frame ) and the intra-predictor 308 (which determines a prediction for an image block based only on the already processed parts of current frame or picture). The output of both the inter-predictor and the intra-predictor are passed to the mode selector 310. The intra-predictor 308 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 310. The mode selector 310 also receives a copy of the base layer image 300. Correspondingly, the pixel predictor 402 of the second encoder section 502 receives enhancement layer image(s) 400 of a video stream to be encoded at both the inter-predictor 406 (which determines the difference between the image and a motion compensated reference frame) and the intra-predictor 408 (which determines a prediction for an image block based only on the already processed parts of current frame or picture). The output of both the inter-predictor and the intra-predictor are passed to the mode selector 410. The intra-predictor 408 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 410. The mode selector 410 also receives a copy of the enhancement layer picture 400.

[00101] Depending on which encoding mode is selected to encode the current block, the output of the inter-predictor 306, 406 or the output of one of the optional intra-predictor modes or the output of a surface encoder within the mode selector is passed to the output of the mode selector 310, 410. The output of the mode selector 310, 410 is passed to a first summing device 321, 421. The first summing device may subtract the output of the pixel predictor 302, 402 from the base layer image 300/enhancement layer image 400 to produce a first prediction error signal 320, 420 which is input to the prediction error encoder 303, 403.

[00102] The pixel predictor 302, 402 further receives from a preliminary reconstructor 339, 439 the combination of the prediction representation of the image block 312, 412 and the output 338, 438 of the prediction error decoder 304, 404. The preliminary reconstructed image 314, 414 may be passed to the intra-predictor 308, 408 and to a filter 316, 416. The filter 316, 416 receiving the preliminary representation may filter the preliminary representation and output a final reconstructed image 340, 440 which may be saved in a reference frame memory 318, 418. The reference frame memory 318 may be connected to the inter-predictor 306 to be used as the reference image against which a future base layer image 300 is compared in inter-prediction operations. Subject to the base layer being selected and indicated to be the source for inter-layer sample prediction and/or inter-layer motion information prediction of the enhancement layer according to some embodiments, the reference frame memory 318 may also be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer image 400 is compared in inter-prediction operations. Moreover, the reference frame memory 418 may be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer image 400 is compared in inter-prediction operations.

[00103] Filtering parameters from the filter 316 of the first encoder section 500 may be provided to the second encoder section 502 subject to the base layer being selected and indicated to be source for predicting the filtering parameters of the enhancement layer according to some embodiments.

[00104] The prediction error encoder 303, 403 comprises a transform unit 342, 442 and a quantizer 344, 444. The transform unit 342, 442 transforms the first prediction error signal 320, 420 to a transform domain. The transform is, for example, the DCT transform. The quantizer 344, 444 quantizes the transform domain signal, for example, the DCT coefficients, to form quantized coefficients.

[00105] The prediction error decoder 304, 404 receives the output from the prediction error encoder 303, 403 and performs the opposite processes of the prediction error encoder 303, 403 to produce a decoded prediction error signal 338, 438 which, when combined with the prediction representation of the image block 312, 412 at the second summing device 339, 439, produces the preliminary reconstructed image 314, 414. The prediction error decoder may be considered to comprise a dequantizer 346, 446, which dequantizes the quantized coefficient values, for example, DCT coefficients, to reconstruct the transform signal and an inverse transformation unit 348, 448, which performs the inverse transformation to the reconstructed transform signal wherein the output of the inverse transformation unit 348, 448 contains reconstructed block(s). The prediction error decoder may also comprise a block filter which may filter the reconstructed block(s) according to further decoded information and filter parameters.

[00106] The entropy encoder 330, 430 receives the output of the prediction error encoder 303, 403 and may perform a suitable entropy encoding/variable length encoding on the signal to provide error detection and correction capability. The outputs of the entropy encoders 330, 430 may be inserted into a bitstream, for example, by a multiplexer 508.

[00107] FIG. 5 is a block diagram showing the interface between an encoder 501 implementing neural network encoding 503, and a decoder 504 implementing neural network decoding 505 in accordance with the examples described herein. The encoder 501 may embody a device, software method or hardware circuit. The encoder 501 has the goal of compressing input data 511 (for example, an input video) to compressed data 512 (for example, a bitstream) such that the bitrate is minimized, and the accuracy of an analysis or processing algorithm is maximized. To this end, the encoder 501 uses an encoder or compression algorithm, for example to perform neural network encoding 503.

[00108] The general analysis or processing algorithm may be part of the decoder 504. The decoder 504 uses a decoder or decompression algorithm, for example to perform the neural network decoding 505 to decode the compressed data 512 (for example, compressed video) which was encoded by the encoder 501. The decoder 504 produces decompressed data 513 (for example, reconstructed data).

[00109] The encoder 501 and decoder 504 may be entities implementing an abstraction, may be separate entities or the same entities, or may be part of the same physical device.

[00110] The analysis/processing algorithm may be any algorithm, traditional or learned from data. In the case of an algorithm which is learned from data, it is assumed that this algorithm can be modified or updated, for example, by using optimization via gradient descent. One example of the learned algorithm is a neural network.

[00111] The method and apparatus of an example embodiment may be utilized in a wide variety of systems, including systems that rely upon the compression and decompression of media data and possibly also the associated metadata. In one embodiment, however, the method and apparatus are configured to compress the media data and associated metadata streamed from a source via a content delivery network to a client device, at which point the compressed media data and associated metadata is decompressed or otherwise processed. In this regard, FIG. 6 depicts an example of such a system 600 that includes a source 602 of media data and associated metadata. The source may be, in one embodiment, a server. However, the source may be embodied in other manners if so desired. The source is configured to stream boxes containing the media data and associated metadata to the client device 604. The client device may be embodied by a media player, a multimedia system, a video system, a smart phone, a mobile telephone or other user equipment, a personal computer, a tablet computer or any other computing device configured to receive and decompress the media data and process associated metadata. In the illustrated embodiment, boxes of media data and boxes of metadata are streamed via a network 606, such as any of a wide variety of types of wireless networks and/or wireline networks. The client device is configured to receive structured information containing media, metadata and any other relevant representation of information containing the media and the metadata and to decompress the media data and process the associated metadata (e.g. for proper playback timing of decompressed media data). [00112] An apparatus 700 is provided in accordance with an example embodiment as shown in FIG. 7. In one embodiment, the apparatus of FIG. 7 may be embodied by a source 602, such as a file writer which, in turn, may be embodied by a server, that is configured to stream a compressed representation of the media data and associated metadata. In an alternative embodiment, the apparatus may be embodied by a client device 604, such as a file reader which may be embodied, for example, by any of the various computing devices described above. In either of these embodiments and as shown in Fig. 7, the apparatus of an example embodiment includes, is associated with or is in communication with a processing circuitry 702, one or more memory devices 704, a communication interface 706, and optionally a user interface.

[00113] The processing circuitry 702 may be in communication with the memory device

704 via a bus for passing information among components of the apparatus 700. The memory device may be non-transitory and may include, for example, one or more volatile and/or non-volatile memories. In other words, for example, the memory device may be an electronic storage device (e.g., a computer readable storage medium) comprising gates configured to store data (e.g., bits) that may be retrievable by a machine (e.g., a computing device like the processing circuitry). The memory device may be configured to store information, data, content, applications, instructions, or the like for enabling the apparatus to carry out various functions in accordance with an example embodiment of the present disclosure. For example, the memory device may be configured to buffer input data for processing by the processing circuitry. Additionally or alternatively, the memory device may be configured to store instructions for execution by the processing circuitry.

[00114] The apparatus 700 may, in some embodiments, be embodied in various computing devices as described above. However, in some embodiments, the apparatus may be embodied as a chip or chip set. In other words, the apparatus may comprise one or more physical packages (e.g., chips) including materials, components and/or wires on a structural assembly (e.g., a baseboard). The structural assembly may provide physical strength, conservation of size, and/or limitation of electrical interaction for component circuitry included thereon. The apparatus may therefore, in some cases, be configured to implement an embodiment of the present disclosure on a single chip or as a single ‘system on a chip.’ As such, in some cases, a chip or chipset may constitute means for performing one or more operations for providing the functionalities described herein.

[00115] The processing circuitry 702 may be embodied in a number of different ways. For example, the processing circuitry may be embodied as one or more of various hardware processing means such as a coprocessor, a microprocessor, a controller, a digital signal processor (DSP), a processing element with or without an accompanying DSP, or various other circuitry including integrated circuits such as, for example, an ASIC (application specific integrated circuit), an FPGA (field programmable gate array), a microcontroller unit (MCU), a hardware accelerator, a special- purpose computer chip, or the like. As such, in some embodiments, the processing circuitry may include one or more processing cores configured to perform independently. A multi-core processing circuitry may enable multiprocessing within a single physical package. Additionally or alternatively, the processing circuitry may include one or more processors configured in tandem via the bus to enable independent execution of instructions, pipelining and/or multithreading.

[00116] In an example embodiment, the processing circuitry 702 may be configured to execute instructions stored in the memory device 704 or otherwise accessible to the processing circuitry. Alternatively or additionally, the processing circuitry may be configured to execute hard coded functionality. As such, whether configured by hardware or software methods, or by a combination thereof, the processing circuitry may represent an entity (e.g., physically embodied in circuitry) capable of performing operations according to an embodiment of the present disclosure while configured accordingly. Thus, for example, when the processing circuitry is embodied as an ASIC, FPGA or the like, the processing circuitry may be specifically configured hardware for conducting the operations described herein. Alternatively, as another example, when the processing circuitry is embodied as an executor of instructions, the instructions may specifically configure the processing circuitry to perform the algorithms and/or operations described herein when the instructions are executed. However, in some cases, the processing circuitry may be a processor of a specific device (e.g., an image or video processing system) configured to employ an embodiment of the present invention by further configuration of the processing circuitry by instructions for performing the algorithms and/or operations described herein. The processing circuitry may include, among other things, a clock, an arithmetic logic unit (ALU) and logic gates configured to support operation of the processing circuitry.

[00117] The communication interface 706 may be any means such as a device or circuitry embodied in either hardware or a combination of hardware and software that is configured to receive and/or transmit data, including video bitstreams. In this regard, the communication interface may include, for example, an antenna (or multiple antennas) and supporting hardware and/or software for enabling communications with a wireless communication network. Additionally or alternatively, the communication interface may include the circuitry for interacting with the antenna(s) to cause transmission of signals via the antenna(s) or to handle receipt of signals received via the antenna(s). In some environments, the communication interface may alternatively or also support wired communication. As such, for example, the communication interface may include a communication modem and/or other hardware/software for supporting communication via cable, digital subscriber line (DSL), universal serial bus (USB) or other mechanisms. [00118] In some embodiments, the apparatus 700 may optionally include a user interface that may, in turn, be in communication with the processing circuitry 702 to provide output to a user, such as by outputting an encoded video bitstream and, in some embodiments, to receive an indication of a user input. As such, the user interface may include a display and, in some embodiments, may also include a keyboard, a mouse, a joystick, a touch screen, touch areas, soft keys, a microphone, a speaker, or other input/output mechanisms. Alternatively or additionally, the processing circuitry may comprise user interface circuitry configured to control at least some functions of one or more user interface elements such as a display and, in some embodiments, a speaker, ringer, microphone and/or the like. The processing circuitry and/or user interface circuitry comprising the processing circuitry may be configured to control one or more functions of one or more user interface elements through computer program instructions (e.g., software and/or firmware) stored on a memory accessible to the processing circuitry (e.g., memory device, and/or the like).

[00119] Various devices and systems described in the FIGs. 1 to 7 enable mechanisms for implementing incremental weight update compression interpretation into a neural network representation bitstream. A neural network representation (NNR) describes compression of neural networks for efficient transport. Some example high-level syntax (HLS) relevant to the technologies of weight update compression is described by various embodiments of the present invention.

[00120] FIG. 8 illustrates an example structure of a neural network representation (NNR) bitstream 802 and an NNR unit 804a, in accordance with an embodiment. An NNR bitstream may conform to the specification for compression of neural networks for multimedia content description and analysis. NNR specifies a high-level bitstream syntax (HLS) for signaling compressed neural network data in a channel as a sequence of NNR units, as illustrated in FIG. 8. As depicted in FIG. 8, according to this structure, an NNR bitstream 802 includes multiple elemental units termed NNR units (e.g. NNR units 804a, 804b, 804c, ... 804n). An NNR unit (e.g., the NNR unit 804a) represents a basic high-level syntax structure and includes three syntax elements: an NNR unit size 806, an NNR unit header 808, and an NNR unit payload 810.

[00121] Each NNR unit may have a type that defines the functionality of the NNR Unit and allows correct interpretation and decoding procedures to be invoked. In an embodiment, NNR units may contain different types of data. The type of data that is contained in the payload of an NNR Unit defines the NNR Unit’s type. This type is specified in the NNR unit header. The following table specifies the NNR unit header types and their identifiers.

[00122] An NNR unit is a data structure for carrying neural network data and related metadata which is compressed or represented using this specification. NNR units carry compressed or uncompressed information about neural network metadata, topology information, complete or partial layer data, filters, kernels, biases, quantization weights, tensors, or the like. An NNR unit may include the following data elements:

NNR unit size: This data element signals the total byte size of the NNR Unit, including the NNR unit size.

NNR unit header: This data element contains information about the NNR unit type and related metadata.

NNR unit payload: This data element contains compressed or uncompressed data related to the neural network.

[00123] An NNR bitstream is composed of a sequence of NNR units and/or aggregate NNR units. The first NNR unit in an NNR bitstream shall be an NNR start unit (e.g. NNR unit of type NNR_STR).

[00124] Neural network topology information can be carried as NNR units of type NNR_TPL. Compressed NN information can be carried as NNR units of type NNR_NDU. Parameter sets can be carried as NNR units of type NNR_MPS and NNR_LPS. An NNR bitstream is formed by serializing these units.
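For illustration only, the unit and bitstream structure described above may be sketched as follows in Python; the 4-byte size field, the single-byte type header, and the numeric type identifiers used here are illustrative assumptions rather than definitions from the specification.

import struct

# Hypothetical numeric identifiers for a few NNR unit types, for illustration only.
NNR_STR, NNR_MPS, NNR_TPL, NNR_NDU = 0, 1, 2, 3

def write_nnr_unit(unit_type: int, payload: bytes) -> bytes:
    # Illustrative layout: 4-byte nnr_unit_size | 1-byte unit-type header | payload.
    # nnr_unit_size counts the total byte size of the unit, including the size field itself.
    header = bytes([unit_type])
    size = 4 + len(header) + len(payload)
    return struct.pack(">I", size) + header + payload

def read_nnr_units(bitstream: bytes):
    # Walk the bitstream unit by unit, using nnr_unit_size to find unit boundaries.
    pos = 0
    while pos < len(bitstream):
        size = struct.unpack_from(">I", bitstream, pos)[0]
        unit_type = bitstream[pos + 4]
        payload = bitstream[pos + 5:pos + size]
        yield unit_type, payload
        pos += size

# An NNR bitstream starts with an NNR start unit and serializes further units after it.
bitstream = (write_nnr_unit(NNR_STR, b"") +
             write_nnr_unit(NNR_MPS, b"\x01") +
             write_nnr_unit(NNR_NDU, b"compressed weight data"))
for unit_type, payload in read_nnr_units(bitstream):
    print(unit_type, len(payload))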

[00125] Image and video codecs may use one or more neural networks at decoder side, either within the decoding loop or as a post-processing step, for both human-targeted and machine targeted compression.

[00126] Some of the example implementations proposed by various embodiments are described below:

[00127] NNR model parameter set unit header syntax:

[00128] NNR model parameter set unit payload syntax:

[00129] Quantization method identifiers for the case of NN compression:

[00130] Some examples of data definitions and data types related to the various embodiments are described in the following paragraphs:

[00131] ue(k): unsigned integer k-th order, e.g. Exp-Golomb-coded syntax element. The parsing process for this descriptor is according to the following pseudo-code with x as a result:

    x = 0
    bit = 1
    while( bit ) {
        bit = 1 - u( 1 )
        x += bit << k
        k += 1
    }
    k -= 1
    if ( k > 0 )
        x += u( k )

[00132] ie(k): signed integer k-th order, e.g. Exp-Golomb-coded syntax element. The parsing process for this descriptor is according to the following pseudo-code with x as a result:

    val = ue( k )
    if( ( val & 1 ) != 0 )
        x = ( ( val + 1 ) >> 1 )
    else
        x = -( val >> 1 )
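A direct transliteration of the above parsing pseudo-code into Python is sketched below; the BitReader class is only an illustrative stand-in for the u(n) descriptor and is not part of the specification.

class BitReader:
    # Minimal MSB-first bit reader standing in for the u(n) descriptor.
    def __init__(self, bits: str):
        self.bits = bits   # e.g. "00101"
        self.pos = 0

    def u(self, n: int) -> int:
        value = int(self.bits[self.pos:self.pos + n] or "0", 2)
        self.pos += n
        return value

def ue(reader: BitReader, k: int) -> int:
    # Mirrors the ue(k) parsing pseudo-code above.
    x = 0
    bit = 1
    while bit:
        bit = 1 - reader.u(1)
        x += bit << k
        k += 1
    k -= 1
    if k > 0:
        x += reader.u(k)
    return x

def ie(reader: BitReader, k: int) -> int:
    # Mirrors the ie(k) parsing pseudo-code above.
    val = ue(reader, k)
    return ((val + 1) >> 1) if (val & 1) != 0 else -(val >> 1)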

[00133] A payload identifier may suggest the decoding method. The following table provides NNR compressed data payload types:

[00134] Topology information about potential changes caused by a pruning algorithm is provided in nnr_topology_unit_payload():

[00135] nnr_pruning_topology_container() is specified as follows:

[00136] bit_mask() is specified as follows:

[00137] Various embodiments propose mechanisms for introducing weight update compression interpretation into the NNR bitstream. Some example proposals include mechanisms for:

Signalling incremental weight update compression mode of operation;

Potential introduction of a weight update unit type among NNR unit types;

Signalling mechanisms required for dithering algorithms;

Signalling global random seed;

Signalling if a model is an inference friendly quantized model;

Signalling incremental weight update quantization algorithms;

Signalling federated averaging weight update algorithm;

Signalling supporting down-stream compression support;

Signalling asynchronous incremental weight update mode;

Source_id and operation_id to identify state of communicated information;

Global codebook approaches for weight update quantization;

Extension to some of the data payload types and payload;

Syntax and semantics of some quantization algorithms;

Encoding and decoding procedures of bitmask applicable to quantization algorithm outputs; and

Syntax and semantics associated with topology change.

[00138] incremental_weight_update_flag: the incremental weight update flag is a flag that signals to a decoder that the bitstream corresponds to a weight update compression and not a weight compression. The incremental_weight_update_flag indicates that the decoder is to invoke a correct decoding mechanism upon receiving the data and to decode the correct payload types.

[00139] For example, when the incremental_weight_update_flag is set to value 1, it means that the NNR_QNT or NNR_NDU units consist of data specific to weight update compression and decompression algorithms. The same applies to the interpretation of other data units.

[00140] The incremental_weight_update_flag may be introduced into different locations in the existing NNR v1 syntax and semantics. One suggested location may be nnr_model_parameter_set_header(), for example:

[00141] In an embodiment, nnr_model_parameter_set_header() may be stored in the NNR payload data or its header.

[00142] NNR Weight Update Unit (NNR_WUU): a data unit of the NNR weight update compression data unit type may be an alternative to adapting the existing data units from the NNR v1 syntax, identified as NNR_WUU (NNR weight update unit). This data unit may contain information relevant to weight update strategies.

[00143] dithering_flag: to support dithering techniques in quantization, encoding and decoding pipelines, a flag, e.g., dithering_flag is introduced. For example, when dithering_flag is set to value 1, a random seed is present that may be used for all the computations. During the decoding process the client may use the random seed to generate a random sequence which will be used during the reconstruction of the quantized values.

[00144] random_seed: a global random seed may be required for some algorithms. For example, in dithering dependent algorithms, a global random seed may be used. Some embodiments propose the random seed to be part of the information to be signalled.
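As an illustrative sketch of how a shared random_seed might be used when dithering_flag is set, the encoder below adds seeded dither before uniform quantization and the decoder regenerates the identical dither from the same seed during reconstruction; the step size and helper names are assumptions and not signalled syntax elements.

import numpy as np

def dither_quantize(weight_update, step, random_seed):
    # Encoder side: add dither drawn from a generator seeded with random_seed.
    rng = np.random.default_rng(random_seed)
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(weight_update))
    return np.round((np.asarray(weight_update) + dither) / step).astype(np.int64)

def dither_reconstruct(indices, step, random_seed):
    # Decoder side: regenerate the same dither sequence from random_seed and remove it.
    rng = np.random.default_rng(random_seed)
    dither = rng.uniform(-step / 2, step / 2, size=np.shape(indices))
    return np.asarray(indices) * step - dither

wu = [0.031, -0.012, 0.005]
q = dither_quantize(wu, step=0.01, random_seed=42)
print(dither_reconstruct(q, step=0.01, random_seed=42))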

[00145] Inference_friendly_flag: in NN compression, a model may be inference friendly, e.g., its weights and/or activations may be quantized. In weight update compression, such methods may require specific algorithmic treatment. Accordingly, some embodiments propose signalling the presence of such models in the bitstream.

[00146] quantized_weight_update_flag: indicates whether the weight updates are quantized or, instead, no quantization was involved. Alternatively, the quantization_algorithm_id may be used to indicate that no quantization algorithm was applied to the weight updates by defining an id for such a case.

[00147] quantization_algorithm_id: an algorithm identifier that is signalled for the weight update quantization. The decoder may use this information for performing a suitable dequantization operation. Example algorithms may include:

[00148] An alternative example to quantization_algorithm_id may be that, when the incremental_weight_update_flag indicates a weight update compression mode, the interpretation of mps_quantization_method_flags may be according to the quantization techniques for weight update compression. In this example, the quantization method identifiers may be interpreted or complemented with the identifiers relevant to the incremental weight update compression, e.g., the mapping of a quantization method identifier to the actual quantization algorithm is performed by using a different look-up table, such as the table above.

[00149] fed_alg_id: in case of a federated algorithm, an agreed federated learning algorithm id may be signalled. Examples of the id may include FedAVG, FedProx, and the like. Another example usage may be for indicating a specific step, such as enabling a specific loss function during the training process.

[00150] For example, the fed_alg_id may take one of the values in the following table:

[00151] elapsed_time: a data field that communicates the time passed since the last communication between two parties. The data field may be used for communication from a server to a client or from the client to the server. The elapsed_time may be used in conjunction with a flag to determine the direction of the communication or, in another embodiment, two elapsed_time data fields may be used, one for each communication direction. In another embodiment, the elapsed_time may indicate the number of rounds of communication between the server and the client, instead of the duration that passed.

[00152] server_round_ID: specifies a unique identifier for the communication round from the server to one or more clients. The value of the identifier may be derived from the value that server_round_ID had in the previous communication round from the server to one or more clients, for example, it can be incremented by 1.

[00153] client_round_ID specifies a unique identifier for the communication round from a client to a server. The identifier may be, for example, the same value that the server had previously signalled to the client, or a value which may be derived from the value that the server had previously signalled to the client (for example, an incremented value).

[00154] model_reference_ID is an ID that indicates what model may be used as a base model. The model_reference_ID may indicate a topology of the base model, or both the topology and an initialization of at least some of the weights of the base model. The training session may be performed by the client, by training the base model. Weight-updates may be derived from the weights of the base model before the training performed by the client and the weights of the base model after the training performed by the client. The model reference id may point to a URI or include a name identifier predefined and globally distributed, for example, to all participants.

[00155] weight_reference_ID specifies a unique identifier of the weights for a base model.
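A minimal sketch of this relationship: the client derives its weight-update as the difference between its trained weights and the base-model weights referred to by model_reference_ID / weight_reference_ID, and the receiver applies the update to the same base model (illustrative code, not a normative procedure).

def compute_weight_update(base_weights, trained_weights):
    # One update value per named parameter of the base model.
    return {name: trained_weights[name] - base_weights[name] for name in base_weights}

def apply_weight_update(base_weights, weight_update):
    # Receiver side: reconstruct the trained weights from the referenced base model.
    return {name: base_weights[name] + weight_update[name] for name in base_weights}

base = {"layer1.weight": 0.50}
trained = {"layer1.weight": 0.47}
wu = compute_weight_update(base, trained)      # approximately {"layer1.weight": -0.03}
print(apply_weight_update(base, wu))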

[00156] validation set performance: In a communication from a server to a client, the validation set performance may signal to the client a performance indication, determined based on a validation set. In a communication from the client to the server, the validation set performance may include an indication of what performance level a weight-update associated to this validation_set_performance may achieve, where the performance level may be determined based on a validation dataset present at client’s side. This may be informative for the server on how to use the received weight-update from that client. For example, the server may decide to multiply the received weight-updates from clients by using multiplier values derived from the validation_set_performance values received from clients. This information may be available on one side of the communications or both communication ends.
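For example, multipliers derived from the validation_set_performance values reported by the clients might be used to weight the corresponding weight-updates in the aggregation, as sketched below under the assumption that the reported values are non-negative scores; the function and variable names are illustrative.

def performance_weighted_aggregate(weight_updates, validation_set_performance):
    # weight_updates: one dict of per-parameter updates per client.
    # validation_set_performance: one non-negative score per client.
    total = sum(validation_set_performance)
    multipliers = [score / total for score in validation_set_performance]
    parameter_names = weight_updates[0].keys()
    return {name: sum(m * wu[name] for m, wu in zip(multipliers, weight_updates))
            for name in parameter_names}

updates = [{"layer1": 0.2}, {"layer1": -0.1}]
print(performance_weighted_aggregate(updates, validation_set_performance=[0.9, 0.6]))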

[00157] Copy_client_wu may be used in the bitstream sent by a client to a server, for indicating to use the latest weight-update received from this client as the new weight-update. In other words, after receiving this information, the server may copy the previous weight-update received from this client and re-use it as the current weight-update from this client. The client may not need to send the actual weight-update data which may be a replica of the previous weight-update.

[00158] Copy_server_wu may be used in the bitstream sent by a server to a client, for indicating to use the latest weight-update received from the server as the new weight-update from the server. This weight-update from the server may be a weight-update, which was obtained by aggregating one or more weight-updates received from one or more clients. In some other embodiment, this syntax element may be used for indicating to use the latest weights (instead of weight-update) received from the server as the new weights from the server. The server may not need to send the actual weight-update which may be a replica of the previous weight update.

[00159] dec_update may specify an update to a decoder neural network, where the decoder neural network may be a neural network that performs one of the operations for decoding a weight-update.

[00160] prob_update may specify an update to a probability model, where the probability model may be a neural network that estimates a probability to be used by a lossless decoder (such as an arithmetic decoder) for losslessly decoding a weight-update.

[00161] cache_enabled_flag may specify whether a caching mechanism is available and may be enabled to store weight updates on the server or on the client.

[00162] cache_depth may specify the number of cached sequences of weight updates that are stored. It may be used to signal to what depth of stored data an encoding or decoding process may refer. The cache depth may be gated to save space in the bitstream, e.g., using cache_enabled_flag.
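A minimal sketch of such a cache, gated by cache_enabled_flag and bounded by cache_depth, which can also serve the copy_client_wu / copy_server_wu behaviour described above by returning the most recently stored weight-update instead of newly transmitted data; all class and method names are illustrative.

from collections import deque

class WeightUpdateCache:
    def __init__(self, cache_enabled_flag: bool, cache_depth: int):
        self.enabled = cache_enabled_flag
        self.depth = cache_depth
        self.cache = {}   # source identifier -> deque of recent weight-updates

    def store(self, source_id, weight_update):
        if self.enabled:
            self.cache.setdefault(source_id, deque(maxlen=self.depth)).append(weight_update)

    def resolve(self, source_id, weight_update=None, copy_flag=False):
        # copy_flag models copy_client_wu / copy_server_wu: reuse the latest cached
        # weight-update from this source instead of expecting new payload data.
        if copy_flag:
            return self.cache[source_id][-1]
        self.store(source_id, weight_update)
        return weight_update

cache = WeightUpdateCache(cache_enabled_flag=True, cache_depth=4)
cache.resolve("client_A", {"layer1": 0.2})
print(cache.resolve("client_A", copy_flag=True))   # reuses the previous weight-update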

[00163] downstream_flag: This flag indicates whether downstream compression is used, where downstream refers to the communication direction from server to client(s). The server may or may not perform downstream compression depending on the configuration. This information may also be signaled at the session initialization. If downstream_flag is set to 1, the receiver of the bitstream may need to perform a decompression operation on the received bitstream.

[00164] async_flag: depending on the mode of operation, the clients may work in an asynchronous mode, that is, after they upload their information to the server, they continue their training procedure and apply a specific treatment to the downstream information that they get. Similarly, the server may require specific steps upon receiving the information from clients in order to treat it. In such a case, the async_flag may be communicated to indicate that such operation is allowed if the clients have the capacity. This may also be done at the session initialization.

[00165] unique_operation_id: unique operation id allows communication of specific information, e.g., last time that the server and client met, and if necessary, some small synchronization information. Such information may be provided as a specific unique identifier consisting of some pieces of information specifically designed for each part of the communication, e.g., a specific client identifier, server identifier, elapsed time since last communication, etc. The information is not limited to the examples provided.

[00166] source_id: the source id is similar or substantially similar to the unique_operation_id; it indicates the identity of the source of the information. The source id may indicate the server or the client, depending on the value. The source_id may be defined as a flag to be interpreted as the communication direction, or as a string identifier for providing more detailed information.

[00167] An example use case may be that the server may use this syntax element to correctly subtract from the global (aggregated weight update) a certain client’s weight update. In an example, assume that a federated learning session involves two clients and a server. The server initially sends the initial model to the two clients. Each client uses its own data for training the model for a number of iterations. Each client may compute a weight-update as the difference between the weights of the model after the training iterations and the weights of the latest model received from the server. In another embodiment, the weight-update may be output by an auxiliary neural network, where the inputs to the auxiliary neural network are the weights of the model after the training iterations and the weights of the latest model received from the server. Each client communicates the weight-update or a compressed version of the weight-update, by also signaling a unique identifier of the client within the source_id syntax element. The server may compute an aggregated weight-update, for example, by averaging all or some of the weight-updates received from the clients. The aggregated weight-update may be communicated to the clients. For one or more of the clients, the server may decide to communicate a custom version of the aggregated weight-update, where the weight-update from a certain client with ID X is subtracted from the aggregated weight-update, and the resulting custom aggregated weight-update is communicated to the respective client with ID X. Thus, in this example, source_id would contain the client ID X. The information in source_id may therefore be used to communicate the correct custom aggregated weight-update to the clients. In another embodiment, the server may use the aggregated weight-update for updating the model, and subtract the weight-update of a certain client from the weights of the updated model, and the resulting custom weights of the updated model may be communicated to that client.
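The use case above may be sketched as follows: the server averages the weight-updates received from the clients and, for the client identified by source_id, subtracts that client's own weight-update from the aggregate before sending the custom aggregate back. The code is illustrative only; the aggregation rule and names are assumptions.

def aggregate_weight_updates(client_updates):
    # client_updates: dict mapping a client's source_id -> dict of per-parameter updates.
    n = len(client_updates)
    parameter_names = next(iter(client_updates.values())).keys()
    return {name: sum(wu[name] for wu in client_updates.values()) / n
            for name in parameter_names}

def custom_aggregate_for(source_id, client_updates, aggregated):
    # Subtract the weight-update of the client identified by source_id from the
    # aggregated weight-update, as described above for the client with ID X.
    own = client_updates[source_id]
    return {name: aggregated[name] - own[name] for name in aggregated}

client_updates = {"client_A": {"layer1": 0.2}, "client_B": {"layer1": -0.1}}
aggregated = aggregate_weight_updates(client_updates)
print(custom_aggregate_for("client_A", client_updates, aggregated))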

[00168] Global_codebook: this is different from codebook-based quantization for NNR compression, where the codebook is calculated and transferred with the NN. One global codebook may exist, and it is shared once with all the devices (e.g. clients and/or server) who are collaborating (sending or receiving a weight update). Such codebook information may be shared once with all the participants in the computation process. For example, in an implementation a global_codebook() may be shared, distributed, or hosted in a remotely accessible network location.

[00169] In another embodiment, such a codebook may be further compressed by some quantization algorithm since it represents weight update approximations.

[00170] The following example describes implementations of some of the proposed embodiments:

[00171] global_codebook(): provides a shared codebook that may be defined as follows:

[00172] Number_of_elements: provides the number of elements in the codebook.

[00173] Codebook_value: provides a value corresponding to the codebook element.

[00174] In another embodiment, the global codebook may be defined based on a compressed codebook, for example:

[00175] step_value: the quantization step for the codebook.

[00176] quantized_codebook_value: is the uniformly quantized value of a floating-point codebook_value, obtained by floor(codebook_value / step_value).

[00177] For a compressed global codebook, codebook_value[i] = step_value * quantized_codebook_value[i] is calculated after decoding the global codebook.

[00178] wu_pred_coeffs: this syntax element may be a list of coefficients to be used for predicting a weight-update from one or more previously decoded weight-updates. This syntax element may be used for example by a server, for predicting a weight-update of a client, given one or more previously decoded weight-updates from that client and one or more previously decoded weight-updates from one or more other clients.

[00179] wu_pred_wuids: this syntax element may be a list of IDs which identify uniquely one or more previously decoded weight-updates to be used for predicting the weight-update of a client. In an alternative implementation, this syntax element may be a list of tuples, where each tuple includes a first element which is an identifier of a client and a second element which is an identifier of the weight-update of the client identified by the first element.

[00180] wu_pred_mode_id: this syntax element may indicate what algorithm or mode is to be used for predicting a weight-update from one or more previously decoded weight-updates. This syntax element may be used for example by a server, for predicting a weight-update of a client, given one or more previously decoded weight-updates from that client and one or more previously decoded weight-updates from one or more other clients. For example, one algorithm ID may indicate to use a linear combination of previously decoded weight-updates, where the coefficients for the linear combination may be indicated by wu_pred_coeffs and where the previously decoded weight-updates to be used for the prediction may be indicated by wu_pred_wuids.
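For example, when wu_pred_mode_id indicates a linear combination, the prediction may be formed as sketched below; the data structures are illustrative, while the syntax element names follow the description above.

def predict_weight_update(wu_pred_coeffs, wu_pred_wuids, decoded_weight_updates):
    # decoded_weight_updates: dict mapping a weight-update ID -> dict of per-parameter values.
    # The prediction is a linear combination of the referenced, previously decoded weight-updates.
    references = [decoded_weight_updates[wuid] for wuid in wu_pred_wuids]
    parameter_names = references[0].keys()
    return {name: sum(c * ref[name] for c, ref in zip(wu_pred_coeffs, references))
            for name in parameter_names}

decoded = {"wu_7": {"layer1": 0.2}, "wu_8": {"layer1": -0.1}}
print(predict_weight_update([0.5, 0.5], ["wu_7", "wu_8"], decoded))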

[00181] Embodiment related to distributed data processing environments or architectures:

[00182] As an alternative to a high-level syntax-based communication, the model parameter set information may be shared via some common storage in which a unique identifier may be used to determine the correct parameters and payloads. Such an identifier may include a specific hash id or a time-stamped id that may be used by a server and/or a client to determine the correct payload for orderly processing of information.

[00183] Extension to NNR compressed data payload types

[00184] In an embodiment, it is proposed to add an incremental weight update type to the NNR compressed data payload types, as described in the following table. The incremental weight update type may invoke the necessary encoding and decoding procedures for a specific algorithm.

[00185] NNR compressed data payload types

[00186] Further, it is proposed to add data structures to the compressed data unit payload.

The payload in combination with the quantization algorithm may result in the proper encoding and may invoke the proper decoding mechanism for weight updates. One example implementation is described in the following table:

[00187] incremental_weight_update_payload(): provides the correct data formats and invokes the necessary decoding procedures for the incremental weight update payload.

[00188] An example implementation of the incremental weight update payload:

[00189] Incremental_weight_update_payload(): is an abstraction that may include the semantics and encoded bitstream of a specific algorithm, or may include a pointer to a decoding mechanism that needs to be invoked.

[00190] As an example, a compressed payload may be implemented, as described in the following table:

[00191] In yet another embodiment, on the decoder side, incremental_weight_update_payload() may trigger a specific decoding mechanism where quantization_algorithm_id and NNR_PT_INCWU determine the decoding procedure according to the encoding procedure.
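
By way of a non-limiting illustration only, the following Python sketch shows such a decoder-side dispatch, in which the payload type together with quantization_algorithm_id selects the decoding procedure; the numeric value of NNR_PT_INCWU, the algorithm identifier values, and the decoder stubs are hypothetical assumptions.

NNR_PT_INCWU = 0x10        # hypothetical numeric value for the incremental weight update payload type

def decode_sign_sgd(payload): ...        # placeholder decoder stubs for illustration
def decode_scaled_binary(payload): ...
def decode_ternary(payload): ...

DECODERS = {
    0: decode_sign_sgd,        # hypothetical id for signSGD quantization
    1: decode_scaled_binary,   # hypothetical id for scaled binary quantization
    2: decode_ternary,         # hypothetical id for ternary quantization
}

def decode_compressed_data_unit(payload_type, quantization_algorithm_id, payload):
    # The payload type and the quantization algorithm id jointly determine the decoding procedure.
    if payload_type != NNR_PT_INCWU:
        raise ValueError("not an incremental weight update payload")
    return DECODERS[quantization_algorithm_id](payload)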

[00192] Syntax and semantics of quantization algorithms

[00193] Different algorithms may result in different payloads that necessitate different encoding/decoding procedures. The output of the quantization may be directly output as an efficient bitmask representation or encoded using some proper encoding mechanism; examples of such encoding mechanisms include run-length encoding (RLE), a position encoding combined with RLE, a relative significant bit position encoding, or a combination of encoding mechanisms, e.g., Golomb-based encoding of a relative RLE encoding. An example syntax for various quantization algorithms may be defined as follows:

[00194] Sign SGD quantization: using sign SGD produces a bitmask indicating the changes in the compressed weight updates; the payload may be the following:

[00195] sign_sgd_quant_payload(): defines the payload for the signSGD quantization. Multiple implementations are possible, e.g., a plain bitmask; in this example a bitmask_size may indicate the size of the bitmask, and the bit representation of the mask is transferred. The following may be an example implementation:

[00196] Bit_mask_size: indicates the size of the bitmask. The size of the bitmask descriptor may be gated by some flag to allow variable-length bitmask sizes.

[00197] Bit_mask_values: represents an array of bit values in the bitmask.
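
By way of a non-limiting illustration only, the following Python sketch shows a plain-bitmask signSGD payload under the assumption that one bit per weight-update element carries its sign and that the decoder applies a fixed step size; the function names, the bit convention (1 for non-negative) and the step size parameter are assumptions.

def encode_sign_sgd(weight_update):
    """Return (bit_mask_size, bit_mask_values): 1 for non-negative elements, 0 for negative."""
    bits = [1 if w >= 0 else 0 for w in weight_update]
    return len(bits), bits

def decode_sign_sgd(bit_mask_size, bit_mask_values, step_size=0.01):
    """Reconstruct a sign-based weight update of magnitude 'step_size'."""
    assert bit_mask_size == len(bit_mask_values)
    return [step_size if b == 1 else -step_size for b in bit_mask_values]

size, mask = encode_sign_sgd([0.02, -0.5, 0.0, -0.1])   # -> 4, [1, 0, 1, 0]
recon = decode_sign_sgd(size, mask, step_size=0.01)     # -> [0.01, -0.01, 0.01, -0.01]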

[00198] scaled_binary_quant_payload(): represents the semantics for scaled binary quantization of weight updates. In this method each weight update may be represented by a nonzero mean of values in the strongest direction (positive or negative). Accordingly, a mean value and a bitmask indicating the non-zero values may be transferred.
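
By way of a non-limiting illustration only, the following Python sketch shows one possible reading of the scaled binary quantization described above, in which only the strongest direction is retained and a single mean value plus a bitmask of the non-zero positions is transferred; the function names, the rule for selecting the strongest direction, and the reconstruction rule are assumptions.

def encode_scaled_binary(weight_update):
    pos = [w for w in weight_update if w > 0]
    neg = [w for w in weight_update if w < 0]
    use_positive = sum(pos) >= -sum(neg)               # pick the "strongest" direction
    kept = pos if use_positive else neg
    mean_value = sum(kept) / len(kept) if kept else 0.0
    bitmask = [1 if (w != 0 and (w > 0) == use_positive) else 0 for w in weight_update]
    return mean_value, bitmask

def decode_scaled_binary(mean_value, bitmask):
    # Non-zero positions are reconstructed with the transmitted (signed) mean value.
    return [mean_value * b for b in bitmask]

mean, mask = encode_scaled_binary([0.4, -0.1, 0.3, -0.05])   # -> 0.35, [1, 0, 1, 0]
recon = decode_scaled_binary(mean, mask)                     # -> [0.35, 0.0, 0.35, 0.0]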

[00199] The bitmask may be further encoded using some suitable compression mechanism such as RLE, golomb, golomb rice, position encoding or a combination of the techniques.

[00200] single_scale_ternary_quant_payload(): A single scale ternary quantization algorithm produces one scale value that reflects the amount of weight update and a mask that indicates the direction of change, which may be positive, negative or no change. The semantics for single scale ternary quantization of weight updates may be a bitmask and a mean value. In an example approach, both positive and negative directions, zero locations, and one mean of non-zeros for both directions may be encoded. The example is described in the table below, where two bits are used to indicate direction.

[00201] double_scale_ternary_quant_payload(): A double scale ternary quantization is an algorithm that produces scale values in both the positive and negative directions. In other words, two mean_values are communicated. For such a method the payload may be similar or substantially similar to single_scale_ternary_quant_payload(), but two mean values are communicated. The following may be an example implementation:

[00202] Alternatively, instead of a 2-bit bitmask_value, the syntax may include the following:

for( i = 0; i < num_weight_updates; i++ ) {
    bitmask_nonzero_flag[ i ]
    if( bitmask_nonzero_flag[ i ] )
        bitmask_sign_flag[ i ]
}

[00203] The semantics may be specified as follows: bitmask_nonzero_flag[ i ] indicates whether the i-th weight update element is non-zero, and bitmask_sign_flag[ i ], when present, indicates the sign (direction of change) of the i-th weight update element.
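
By way of a non-limiting illustration only, the following Python sketch follows the alternative syntax above, coding a bitmask_nonzero_flag per element and a bitmask_sign_flag only for non-zero elements, with one mean value as the single scale; the thresholding rule and the function names are assumptions.

def encode_single_scale_ternary(weight_update, threshold=1e-3):
    nonzero_flags, sign_flags, magnitudes = [], [], []
    for w in weight_update:
        if abs(w) > threshold:
            nonzero_flags.append(1)
            sign_flags.append(1 if w > 0 else 0)
            magnitudes.append(abs(w))
        else:
            nonzero_flags.append(0)
    mean_value = sum(magnitudes) / len(magnitudes) if magnitudes else 0.0
    return mean_value, nonzero_flags, sign_flags

def decode_single_scale_ternary(mean_value, nonzero_flags, sign_flags):
    out, signs = [], iter(sign_flags)
    for nz in nonzero_flags:
        out.append((mean_value if next(signs) == 1 else -mean_value) if nz else 0.0)
    return out

mean, nz, sg = encode_single_scale_ternary([0.5, -0.0005, -0.3])   # -> 0.4, [1, 0, 1], [1, 0]
recon = decode_single_scale_ternary(mean, nz, sg)                  # -> [0.4, 0.0, -0.4]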

[00204] global_codebook_quant_payload(): the global codebook quantization mode allows signalling an index corresponding to the values of a partition. In this approach a list of indexes is communicated. The possible design may include the following items:

number_of_indices: the total number of indices.

list_of_indexes: the indexes to the codebook elements of the quantization codebook.
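
By way of a non-limiting illustration only, the following Python sketch shows a global-codebook quantization in which each value is mapped to the index of the nearest codebook element, together with the reconstruction of a compressed codebook from step_value and quantized_codebook_value as in paragraph [00177]; the nearest-neighbour mapping rule and the function names are assumptions.

def reconstruct_codebook(step_value, quantized_codebook_values):
    """Rebuild codebook_value[i] = step_value * quantized_codebook_value[i]."""
    return [step_value * q for q in quantized_codebook_values]

def encode_with_codebook(values, codebook):
    """Return list_of_indexes: index of the nearest codebook element per value."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - v)) for v in values]

def decode_with_codebook(list_of_indexes, codebook):
    return [codebook[i] for i in list_of_indexes]

codebook = reconstruct_codebook(0.05, [-2, -1, 0, 1, 2])       # -> [-0.1, -0.05, 0.0, 0.05, 0.1]
indexes = encode_with_codebook([0.04, -0.09, 0.0], codebook)   # -> [3, 0, 2]
recon = decode_with_codebook(indexes, codebook)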

[00205] In an embodiment, such a global codebook may operate on a chunk of data rather than each weight update element. An example design for a channel-wise partition with a maximum of 2048 channels and a codebook of size 256 may be as follows:

[00206] The global codebook may be further compressed using an entropy coding approach to gain further compression.

[00207] In yet another embodiment, as the number of partitions in each neural network may be different, the descriptor size may be gated to dynamically adjust the size of the codebook payload. The same may apply to the descriptor size of the list of indexes.

[00208] In another embodiment, for ternary family of quantization techniques, two bitmasks may be encoded instead of a two-bit bitmask.

[00209] In another embodiment, for the family of scaled quantization techniques, the scales may be further compressed using some other quantization technique, e.g., a uniform quantization with the scale step agreed only once. This further allows reducing the number of bits for representing the scales. Other quantization techniques are possible, e.g., when multiple scales exist for one tensor or in an aggregated mode where all the scales of all the tensors are put together. In another embodiment, only an update to the scale(s) is signalled, such as the difference between the previously-signaled scale(s) and the current scale(s).

[00210] Encoding / Decoding of bitmasks

[00211] In an embodiment, a portion of the quantization algorithms for weight update compression, for example an essential portion, may be signalled as a bitmask. To further compress the bitmasks, encoding may be performed on the bitmasks. In the case of weight-update compression, the bitmasks may represent binary or ternary representations, depending on the quantization algorithm. Such bitmasks may be encoded in several ways to further obtain compressibility. A proper encoding and decoding mechanism may be invoked at the encoder and decoder to interpret the bitmask. Some possibilities may include:

[00212] Run-length encoding: in some example cases the bitmasks may be highly sparse; in such examples, run-length encoding variants may be applied to further compress the bitmasks. For example, the following table depicts a run_len encoded payload for a bitmask:

[00213] In yet another embodiment, an average length of the runs may be estimated, and this may be used to determine the number of bits for run_size using log2(average_run_length), where log2 is the base-2 logarithm. In this embodiment, a length of the descriptor may be signalled, or a bit width of the run_size and run_length descriptors may be adjusted by using a gating mechanism.

[00214] On the decoder side, the run-length encoded data may be parsed and decoded according to the encoding convention to populate a decompressed bitmask.

[00215] Alternatively, a count of consecutive zeros between each pair of bits equal to 1 may be coded, by using the following example syntax:

[00216] run_length: represents the number of times the value of 0 is repeated before the next value of 1.
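
By way of a non-limiting illustration only, the following Python sketch applies the convention above, coding the number of consecutive zeros before each value of 1; transmitting the total bitmask size to recover trailing zeros is an assumption, as are the function names.

def rle_encode_bitmask(bitmask):
    runs, zeros = [], 0
    for bit in bitmask:
        if bit == 0:
            zeros += 1
        else:
            runs.append(zeros)       # run_length: zeros preceding this 1
            zeros = 0
    return runs, len(bitmask)        # total size is needed to restore trailing zeros

def rle_decode_bitmask(runs, bitmask_size):
    bitmask = []
    for run_length in runs:
        bitmask.extend([0] * run_length + [1])
    bitmask.extend([0] * (bitmask_size - len(bitmask)))   # trailing zeros
    return bitmask

runs, size = rle_encode_bitmask([0, 0, 1, 0, 0, 0, 1, 0])    # -> [2, 3], 8
assert rle_decode_bitmask(runs, size) == [0, 0, 1, 0, 0, 0, 1, 0]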

[00217] Position/length-encoding: The bitmasks may be further compressed by signalling the length between 0s or 1s. In such an example, a bitmask may be converted to a list of integers indicating the locations of 1s or 0s, depending on which value is more populated in the bitmask. This may be similar to run-length encoding, but since there are only two run_values, a chosen convention may be signalled once.

[00218] run_convention: may signal whether the length-encoding is signalling the number of zeros between ones or the number of ones between zeros.

[00219] The length encoded stream may be further compressed either using entropy coding, e.g., CABAC-based approaches or some other mechanism, e.g., golomb encoding.

[00220] GOLOMB encoded payload

[00221] A bitmask may be encoded using Golomb encoding. The following table provides an example of the semantics of the payload:

[00222] The length of the descriptors is provided as an example, and longer or shorter lengths may be used.

[00223] encoded_stream_size: indicates the total number of bits representing a bitmask after being encoded using Golomb encoding.

[00224] golomb_encoded_bit: indicates the bit value of the encoded bitmask.

[00225] Encoding of Golomb encoded data: The operation of obtaining a Golomb encoded data stream may need agreement on a convention. For example, during encoding, by adopting an exp-Golomb encoding, the process may be defined as processing each byte of the bitmask as an integer and encoding it using the ue(k) definition of the NNR specification text, as an unsigned integer k-th order exp-Golomb code, to generate the golomb_encoded stream.

[00226] Decoding of Golomb encoded data: is used when the decoding procedure is invoked to reconstruct the original bitmask. For example, each byte of the stream is first decoded using the ue(k) decoding procedure and the bytes are put together in order to reconstruct the original bitmask. For a one-byte definition k = 8; other k values may be possible. In yet another embodiment, other Golomb-based encoding procedures may be applied, e.g., RLE-Golomb, Golomb-Rice, and the like.
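
By way of a non-limiting illustration only, the following Python sketch uses one common construction of an unsigned k-th order exp-Golomb code (order-0 exp-Golomb of value >> k followed by the k least significant bits), applied byte-wise as described above; the exact ue(k) definition in the NNR specification may differ, so this construction and the function names are assumptions.

def ue_k_encode(value, k):
    """Encode a non-negative integer: order-0 exp-Golomb of (value >> k), then the k LSBs."""
    q = (value >> k) + 1
    prefix = "0" * (q.bit_length() - 1) + format(q, "b")            # order-0 exp-Golomb part
    suffix = format(value & ((1 << k) - 1), "0{}b".format(k)) if k > 0 else ""
    return prefix + suffix

def ue_k_decode(bits, pos, k):
    """Decode one value starting at bit position 'pos'; return (value, new_pos)."""
    zeros = 0
    while bits[pos + zeros] == "0":
        zeros += 1
    q = int(bits[pos + zeros:pos + 2 * zeros + 1], 2) - 1           # order-0 exp-Golomb part
    pos += 2 * zeros + 1
    suffix = int(bits[pos:pos + k], 2) if k > 0 else 0
    return (q << k) | suffix, pos + k

# Encoding each byte of a bitmask with k = 8, as in the example above:
stream = "".join(ue_k_encode(b, 8) for b in bytes([0b10100000, 0b00000001]))
value, pos = ue_k_decode(stream, 0, 8)                              # -> 160, 9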

[00227] The golomb encoded bitstream may be complemented with some extra bits, e.g., one bit to indicate the sign of the mean value, when extra information is required.

[00228] The Golomb encoding, e.g., the exponential variant, may apply to position encoded bitmasks or other types of payloads obtained from a quantization scheme.

[00229] Topology element referencing

[00230] While referencing topology elements, unique identifiers may be used. These unique identifiers may be indexes that map to a list of topology elements. In order to signal such elements, a new topology payload identifier may be used. As an example, NNR_TPL_REFLIST may be used as a name of such an identifier that maps to a topology storage format value in the NNR topology payload unit or header. It should be noted that in the examples described below, descriptor types are given as examples, and any fixed length or variable length data type may be utilized.

[00231] nnr_topology_unit_payload may be extended as follows:

[00232] In another embodiment, topology_data may be used together with the topology_elements_ids_list(0), rather than being mutually exclusive.

[00233] topology_elements_ids_list (flag) may store the topology elements or topology element indexes. Flag value may set the mode of operation. For example, if the flag is 0, unique topology element identifiers may be listed. When the flag is 1, unique indexes of the topology elements which are stored in the payload with the type NNR_TPL_REFLIST may be listed. Each index may indicate the order or presence of the topology element in the indicated topology payload.

[00234] topology_elem_id_index_list may specify a list of unique indexes related to the topology elements listed in topology information with payload type NNR_TPL_REFLIST. The first element in the topology may have the index value of 0.

[00235] Selection of the mode of topology element referencing may be signaled in the NNR model parameter set, with a flag. Such a flag may be named mps_topology_indexed_reference_flag, and the following syntax elements may be included in the NNR model parameter set:

[00236] mps_topology_indexed_reference_flag may specify whether topology elements are referenced by unique index. When set to 1, topology elements may be represented by their indexes in the topology data defined by the topology payload of type NNR_TPL_REFLIST. This flag may be set to 0 when topology information is obtained via the topology_data syntax element of the NNR topology unit.

[00237] In order to store topology element identifiers, the NNR compressed data unit header syntax may be extended as follows:

[00238] topology_elem_id_index may specify a unique index value of a topology element which is signaled in topology information of payload type NNR_TPL_REFLIST. The first index may be 0 (e.g. 0-indexed).

[00239] element_id_index may specify a unique index that is used to reference a topology element.

[00240] Nnr_pruning_topology_container() may be extended to support index based topology element referencing as follows:

[00241] element_id_index may specify a unique index that is used to reference a topology element.

[00242] Any topology element referencing can be done using either a unique ID or an index reference.

[00243] Signalling topology changes

[00244] Topology_element_id: is a unique identifier that may define an element of the topology. The naming of the topology_element_id may include an execution order to determine the relation of one topology_element_id to other topology_element_ids.

[00245] Execution order: each topology element may include an order of execution that allows the execution and inference of the NN. The execution order may be gated to allow a pre-determined sequence of executions, e.g., a plain feed-forward execution.

[00246] Execution_list: may contain a list of topology_element_id to be executed as a sequence after each other.

[00247] The existing nnr_prune_topology_container() may be used to signal the changes in topology caused by a pruning algorithm for NNR compression. In this example, topology changes due to a change in a task or during weight update compression may be required to be signaled.

[00248] In one embodiment, once the incremental_weight_update_flag is set to a value indicating weight update mode of operation the same nnr_prune_topology_container() approach may be used to signal the changes in the topology.

[00249] prune_structure: may signal the information about the type of a structure that may be pruned or neglected during information encoding; the prune structure may refer to a layer, a channel in a convolution layer, a row, a column, or a specific block pattern in a matrix. This information may be gated when there is only one type of structure to ignore, which often may be agreed by using only one encoding/decoding convention.

[00250] ignore_structure: may signal whether a specific structure is pruned or dropped, e.g., a layer. For example, an ignore_structure value of 1 means that a layer is not encoded in the bitstream or that a specific block pattern is not encoded in the bitstream.

[00251] Encoding information with regard to prune_structure and ignore_structure: at the beginning of the encoding, some piece of information about the prune_structure is signalled when the specific structure meets a specific condition, e.g., all the weight values or weight update values of a layer are zero. Then the ignore_structure may be sent at the beginning of each pattern to indicate whether the specific structure is ignored or included.

[00252] Decoding and reconstruction: after decoding, the reconstruction uses the prune_structure and ignore_structure to reconstruct the original data.
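
By way of a non-limiting illustration only, the following Python sketch shows a per-layer use of ignore_structure, where a layer whose weight-update values are all zero is skipped by the encoder and reconstructed as zeros by the decoder; the per-layer granularity and the function names are assumptions.

def build_ignore_flags(weight_updates_per_layer):
    """Return ignore_structure flags: 1 if a layer's weight update is all zero."""
    return [1 if all(w == 0 for w in layer) else 0 for layer in weight_updates_per_layer]

def encode_layers(weight_updates_per_layer, ignore_flags):
    """Encode only the layers whose ignore_structure flag is 0."""
    return [layer for layer, ignore in zip(weight_updates_per_layer, ignore_flags) if ignore == 0]

def reconstruct_layers(encoded_layers, ignore_flags, layer_sizes):
    out, remaining = [], iter(encoded_layers)
    for ignore, size in zip(ignore_flags, layer_sizes):
        out.append([0.0] * size if ignore else next(remaining))
    return out

layers = [[0.0, 0.0], [0.1, -0.2], [0.0, 0.0, 0.0]]
flags = build_ignore_flags(layers)                                     # -> [1, 0, 1]
recon = reconstruct_layers(encode_layers(layers, flags), flags, [2, 2, 3])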

[00253] In an alternative embodiment, a specific mechanism that requires a new topology container is proposed. [00254] NNR_TPL_WUPD: NNR topology weight update may be defined as a topology storage format to indicate a topology update associated with a weight update.

[00255] Necessary payload and decoding procedures may be invoked when the NNR_TPL_WUPD payload is present in the nnr_topology_unit_payload. The payload corresponding to the NNR_TPL_WUPD may include:

num_element_ids: represents a number of elements for which a topology modification is signaled.

element_ids: represents an array of identifiers, where each identifier corresponds to a specific element that may be modified in the topology in consequence of a topology modification.

weight_tensor_dimension: is a list of lists, where each internal list is a list of new dimensions of the weight vector corresponding to the respective element id in element_ids.

reorganize_flag: is a flag to indicate whether the existing weight vector may be reorganized according to the new dimensions or a corresponding weight vector may be provided via some NNR data payload. The payload may contain a mapping to indicate how a new weight tensor is obtained from the existing weight tensor, when the reorganize flag signals a reorganization.

weight_mapping: is a mapping that indicates how an existing weight is mapped to a new topology element in consequence of dimension changes of the element. Such a mapping may be a bitmask with a specific processing order to indicate which weights are kept at which locations in the new weight tensor, for example by using row-major matrix processing. The bitmask may be further compressed with some of the previously mentioned bitmask encoding schemes.

topology_compressed: is used to indicate that the information associated with the topology update may be compressed or follows a specific encoding and decoding procedure to be invoked to decode the topology information.
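
By way of a non-limiting illustration only, the following Python sketch shows one possible reading of weight_mapping, in which the bitmask is defined over the existing weight tensor in row-major order and the kept weights fill the resized tensor signalled by weight_tensor_dimension; this interpretation of the bitmask, the zero fill for any new positions, and the function names are assumptions.

def apply_weight_mapping(old_weights_flat, weight_mapping, new_dims):
    """Keep the old weights flagged with 1 (row-major order) and fill the resized tensor."""
    new_size = 1
    for d in new_dims:
        new_size *= d
    kept = [w for w, keep in zip(old_weights_flat, weight_mapping) if keep == 1]
    # Pad with zeros if the new tensor has more positions than kept weights, then truncate.
    return (kept + [0.0] * new_size)[:new_size]

# Resizing a 2x3 tensor to 2x2 while keeping the first two weights of each row:
old = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]          # row-major 2x3
mask = [1, 1, 0, 1, 1, 0]                     # weight_mapping over the existing tensor
new = apply_weight_mapping(old, mask, [2, 2])  # -> [1.0, 2.0, 4.0, 5.0]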

[00256] FIG. 9 is an example apparatus 900, which may be implemented in hardware, configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein. Some examples of the apparatus 900 include, but are not limited to, apparatus 50, client device 604, and apparatus 700. The apparatus 900 comprises a processor 902, at least one non-transitory memory 904 including computer program code 905, wherein the at least one memory 904 and the computer program code 905 are configured to, with the at least one processor 902, cause the apparatus 900 to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set 906, based on the examples described herein.

[00257] The apparatus 900 optionally includes a display 908 that may be used to display content during rendering. The apparatus 900 optionally includes one or more network (NW) interfaces (I/F(s)) 910. The NW I/F(s) 910 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique. The NW I/F(s) 910 may comprise one or more transmitters and one or more receivers. The NW I/F(s) 910 may comprise standard well-known components such as an amplifier, filter, frequency-converter, (de)modulator, and encoder/decoder circuitry(ies) and one or more antennas.

[00258] The apparatus 900 may be a remote, virtual or cloud apparatus. The apparatus 900 may be either a coder or a decoder, or both a coder and a decoder. The at least one memory 904 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory, and removable memory. The at least one memory 904 may comprise a database for storing data. The apparatus 900 need not comprise each of the features mentioned, or may comprise other features as well. The apparatus 900 may correspond to or be another embodiment of the apparatus 50 shown in FIG. 1 and FIG. 2, or any of the apparatuses shown in FIG. 3. The apparatus 900 may correspond to or be another embodiment of the apparatuses shown in FIG. 12, including UE 80, RAN node 170, or network element(s) 190.

[00259] FIG. 10 is an example method 1000 for introducing a weight update compression interpretation into the NNR bitstream, in accordance with an embodiment. As shown in block 906 of FIG. 9, the apparatus 900 includes means, such as the processing circuitry 902 or the like, for implementing mechanisms for introducing a weight update compression interpretation into the NNR bitstream. At 1002, the method 1000 includes encoding or decoding a high-level bitstream syntax for at least one neural network. At 1004, the method 1000 includes, wherein the high-level bitstream syntax comprises at least one information unit, wherein the at least one information unit comprises syntax definitions for the at least one neural network or a portion of the at least one neural network. At 1006, the method 1000 includes, wherein a neural network representation (NNR) bitstream comprises one or more of the at least one information units. At 1008, the method 1000 includes, wherein the syntax definitions provide one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream.

[00260] In an embodiment, the one or more mechanisms may include at least one of a mechanism to signal an incremental weight update compression mode of operation, a mechanism to introduce a weight update unit type among the at least one information unit, a mechanism to signal mechanisms required for dithering algorithms, a mechanism to signal a global random seed, a mechanism to signal whether a model comprises an inference friendly quantized model, a mechanism to signal incremental weight update quantization algorithms, a mechanism to signal federated averaging weight update algorithm, a mechanism to signal supporting down-stream compression support, a mechanism to signal an asynchronous incremental weight update mode, a mechanism to identify a source of information, a mechanism to identify an operation, a mechanism to define global codebook approaches for a weight update quantization, a mechanism to define extension to one or more data payload types, a mechanism to define extension to a payload, a mechanism to define a syntax and semantics of one or more quantization algorithms, a mechanism to identify encoding and decoding procedures of bitmask applicable to quantization algorithm outputs, or a mechanism to identify a syntax and semantics relevant to a topology change.

[00261] FIG. 11 is an example method 1100 for defining a validation set performance, in accordance with an embodiment. As shown in block 906 of FIG. 9, the apparatus 900 includes means, such as the processing circuitry 902 or the like, for defining a validation set performance. At 1102, the method 1100 includes defining a validation set performance, wherein the validation set performance comprises or specifies one or more of the following. At 1104, the method 1100 includes, wherein the validation set performance includes a performance indication determined based on a validation set. Additionally or alternatively, at 1106, the method 1100 includes, wherein the validation set performance includes an indication of a performance level achieved by a weight-update associated with the validation set performance.

[00262] In an embodiment, the validation set performance provides information on how to use the weight-update received from a device. In an example, the weight-updates are multiplied by multiplier values derived from the validation set performance values received from the device.

[00263] In an embodiment, the method 1100 may also include defining a weight reference ID, where the weight reference ID uniquely identifies weights for a base model.

[00264] In an embodiment, the method 1100 may also include defining a source ID, where the source ID uniquely identifies a source of information.

[00265] Turning to FIG. 12, this figure shows a block diagram of one possible and non limiting example in which the examples may be practiced. A user equipment (UE) 110, radio access network (RAN) node 170, and network element(s) 190 are illustrated. In the example of FIG. 1, the user equipment (UE) 110 is in wireless communication with a wireless network 100. A UE is a wireless device that can access the wireless network 100. The UE 110 includes one or more processors 120, one or more memories 125, and one or more transceivers 130 interconnected through one or more buses 127. Each of the one or more transceivers 130 includes a receiver, Rx, 132 and a transmitter, Tx, 133. The one or more buses 127 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, and the like. The one or more transceivers 130 are connected to one or more antennas 128. The one or more memories 125 include computer program code 123. The UE 110 includes a module 140, comprising one of or both parts 140-1 and/or 140-2, which may be implemented in a number of ways. The module 140 may be implemented in hardware as module 140-1, such as being implemented as part of the one or more processors 120. The module 140-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the module 140 may be implemented as module 140-2, which is implemented as computer program code 123 and is executed by the one or more processors 120. For instance, the one or more memories 125 and the computer program code 123 may be configured to, with the one or more processors 120, cause the user equipment 110 to perform one or more of the operations as described herein. The UE 110 communicates with RAN node 170 via a wireless link 111.

[00266] The RAN node 170 in this example is a base station that provides access by wireless devices such as the UE 110 to the wireless network 100. The RAN node 170 may be, for example, a base station for 5G, also called New Radio (NR). In 5G, the RAN node 170 may be a NG- RAN node, which is defined as either a gNB or an ng-eNB. A gNB is a node providing NR user plane and control plane protocol terminations towards the UE, and connected via the NG interface to a 5GC (such as, for example, the network element(s) 190). The ng-eNB is a node providing E-UTRA user plane and control plane protocol terminations towards the UE, and connected via the NG interface to the 5GC. The NG-RAN node may include multiple gNBs, which may also include a central unit (CU) (gNB-CU) 196 and distributed unit(s) (DUs) (gNB-DUs), of which DU 195 is shown. Note that the DU may include or be coupled to and control a radio unit (RU). The gNB-CU is a logical node hosting radio resource control (RRC), SDAP and PDCP protocols of the gNB or RRC and PDCP protocols of the en-gNB that controls the operation of one or more gNB-DUs. The gNB-CU terminates the FI interface connected with the gNB-DU. The FI interface is illustrated as reference 198, although reference 198 also illustrates a link between remote elements of the RAN node 170 and centralized elements of the RAN node 170, such as between the gNB-CU 196 and the gNB-DU 195. The gNB- DU is a logical node hosting RLC, MAC and PHY layers of the gNB or en-gNB, and its operation is partly controlled by gNB-CU. One gNB-CU supports one or multiple cells. One cell is supported by only one gNB-DU. The gNB-DU terminates the FI interface 198 connected with the gNB-CU. Note that the DU 195 is considered to include the transceiver 160, for example, as part of a RU, but some examples of this may have the transceiver 160 as part of a separate RU, for example, under control of and connected to the DU 195. The RAN node 170 may also be an eNB (evolved NodeB) base station, for LTE (long term evolution), or any other suitable base station or node. [00267] The RAN node 170 includes one or more processors 152, one or more memories

155, one or more network interfaces (N/W I/F(s)) 161, and one or more transceivers 160 interconnected through one or more buses 157. Each of the one or more transceivers 160 includes a receiver, Rx, 162 and a transmitter, Tx, 163. The one or more transceivers 160 are connected to one or more antennas 158. The one or more memories 155 include computer program code 153. The CU 196 may include the processor(s) 152, memories 155, and network interfaces 161. Note that the DU 195 may also contain its own memory/memories and processor(s), and/or other hardware, but these are not shown.

[00268] The RAN node 170 includes a module 150, comprising one of or both parts 150-

1 and/or 150-2, which may be implemented in a number of ways. The module 150 may be implemented in hardware as module 150-1, such as being implemented as part of the one or more processors 152. The module 150-1 may be implemented also as an integrated circuit or through other hardware such as a programmable gate array. In another example, the module 150 may be implemented as module 150-2, which is implemented as computer program code 153 and is executed by the one or more processors 152. For instance, the one or more memories 155 and the computer program code 153 are configured to, with the one or more processors 152, cause the RAN node 170 to perform one or more of the operations as described herein. Note that the functionality of the module 150 may be distributed, such as being distributed between the DU 195 and the CU 196, or be implemented solely in the DU 195.

[00269] The one or more network interfaces 161 communicate over a network such as via the links 176 and 131. Two or more gNBs 170 may communicate using, for example, link 176. The link 176 may be wired or wireless or both and may implement, for example, an Xn interface for 5G, an X2 interface for LTE, or other suitable interface for other standards.

[00270] The one or more buses 157 may be address, data, or control buses, and may include any interconnection mechanism, such as a series of lines on a motherboard or integrated circuit, fiber optics or other optical communication equipment, wireless channels, and the like. For example, the one or more transceivers 160 may be implemented as a remote radio head (RRH) 195 for LTE or a distributed unit (DU) 195 for gNB implementation for 5G, with the other elements of the RAN node 170 possibly being physically in a different location from the RRH/DU, and the one or more buses 157 may be implemented in part as, for example, fiber optic cable or other suitable network connection to connect the other elements (for example, a central unit (CU), gNB-CU) of the RAN node 170 to the RRH/DU 195. Reference 198 also indicates those suitable network link(s). [00271] It is noted that description herein indicates that ‘cells’ perform functions, but it should be clear that equipment which forms the cell may perform the functions. The cell makes up part of a base station. That is, there can be multiple cells per base station. For example, there may be three cells for a single carrier frequency and associated bandwidth, each cell covering one-third of a 360 degree area so that the single base station’s coverage area covers an approximate oval or circle. Furthermore, each cell can correspond to a single carrier and a base station may use multiple carriers. So if there are three 120 degree cells per carrier and two carriers, then the base station has a total of 6 cells.

[00272] The wireless network 100 may include a network element or elements 190 that may include core network functionality, and which provides connectivity via a link or links 181 with a further network, such as a telephone network and/or a data communications network (for example, the Internet). Such core network functionality for 5G may include access and mobility management function(s) (AMF(S)) and/or user plane functions (UPF(s)) and/or session management function(s) (SMF(s)). Such core network functionality for LTE may include MME (Mobility Management Entity)/SGW (Serving Gateway) functionality. These are merely example functions that may be supported by the network element(s) 190, and note that both 5G and LTE functions might be supported. The RAN node 170 is coupled via a link 131 to the network element 190. The link 131 may be implemented as, for example, an NG interface for 5G, or an SI interface for LTE, or other suitable interface for other standards. The network element 190 includes one or more processors 175, one or more memories 171, and one or more network interfaces (N W I/F(s)) 180, interconnected through one or more buses 185. The one or more memories 171 include computer program code 173. The one or more memories 171 and the computer program code 173 are configured to, with the one or more processors 175, cause the network element 190 to perform one or more operations.

[00273] The wireless network 100 may implement network virtualization, which is the process of combining hardware and software network resources and network functionality into a single, software-based administrative entity, a virtual network. Network virtualization involves platform virtualization, often combined with resource virtualization. Network virtualization is categorized as either external, combining many networks, or parts of networks, into a virtual unit, or internal, providing network-like functionality to software containers on a single system. Note that the virtualized entities that result from the network virtualization are still implemented, at some level, using hardware such as processors 152 or 175 and memories 155 and 171, and also such virtualized entities create technical effects.

[00274] The computer readable memories 125, 155, and 171 may be of any type suitable to the local technical environment and may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The computer readable memories 125, 155, and 171 may be means for performing storage functions. The processors 120, 152, and 175 may be of any type suitable to the local technical environment, and may include one or more of general purpose computers, special purpose computers, microprocessors, digital signal processors (DSPs) and processors based on a multi-core processor architecture, as non limiting examples. The processors 120, 152, and 175 may be means for performing functions, such as controlling the UE 110, RAN node 170, network element(s) 190, and other functions as described herein.

[00275] In general, the various embodiments of the user equipment 110 can include, but are not limited to, cellular telephones such as smart phones, tablets, personal digital assistants (PDAs) having wireless communication capabilities, portable computers having wireless communication capabilities, image capture devices such as digital cameras having wireless communication capabilities, gaming devices having wireless communication capabilities, music storage and playback appliances having wireless communication capabilities, Internet appliances permitting wireless Internet access and browsing, tablets with wireless communication capabilities, as well as portable units or terminals that incorporate combinations of such functions.

[00276] One or more of modules 140-1, 140-2, 150-1, and 150-2 may be configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein. Computer program code 173 may also be configured to implement one or more mechanisms for introducing a weight update compression interpretation into the NNR bitstream or define a validation performance set, based on the examples described herein.

[00277] As described above, FIGs. 10 and 11 include flowcharts of an apparatus (e.g. 50,

602, 604, 700, or 900), method, and computer program product according to certain example embodiments. It will be understood that each block of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by various means, such as hardware, firmware, processor, circuitry, and/or other devices associated with execution of software including one or more computer program instructions. For example, one or more of the procedures described above may be embodied by computer program instructions. In this regard, the computer program instructions which embody the procedures described above may be stored by a memory (e.g. 58, 125, 704, or 904) of an apparatus employing an embodiment of the present invention and executed by processing circuitry (e.g. 56, 120, 702 or 902) of the apparatus. As will be appreciated, any such computer program instructions may be loaded onto a computer or other programmable apparatus (e.g., hardware) to produce a machine, such that the resulting computer or other programmable apparatus implements the functions specified in the flowchart blocks. These computer program instructions may also be stored in a computer- readable memory that may direct a computer or other programmable apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture, the execution of which implements the function specified in the flowchart blocks. The computer program instructions may also be loaded onto a computer or other programmable apparatus to cause a series of operations to be performed on the computer or other programmable apparatus to produce a computer-implemented process such that the instructions which execute on the computer or other programmable apparatus provide operations for implementing the functions specified in the flowchart blocks.

[00278] A computer program product is therefore defined in those instances in which the computer program instructions, such as computer-readable program code portions, are stored by at least one non-transitory computer-readable storage medium with the computer program instructions, such as the computer-readable program code portions, being configured, upon execution, to perform the functions described above, such as in conjunction with the flowchart(s) of FIGs. 10 and 11. In other embodiments, the computer program instructions, such as the computer-readable program code portions, need not be stored or otherwise embodied by a non-transitory computer-readable storage medium, but may, instead, be embodied by a transitory medium with the computer program instructions, such as the computer-readable program code portions, still being configured, upon execution, to perform the functions described above.

[00279] Accordingly, blocks of the flowcharts support combinations of means for performing the specified functions and combinations of operations for performing the specified functions. It will also be understood that one or more blocks of the flowcharts, and combinations of blocks in the flowcharts, may be implemented by special purpose hardware-based computer systems which perform the specified functions, or combinations of special purpose hardware and computer instructions.

[00280] In some embodiments, certain ones of the operations above may be modified or further amplified. Furthermore, in some embodiments, additional optional operations may be included. Modifications, additions, or amplifications to the operations above may be performed in any order and in any combination.

[00281] In the above, some embodiments have been described in relation to a particular type of a parameter set (namely adaptation parameter set). It needs to be understood, however, that embodiments may be realized with any type of parameter set or other syntax structure in the bitstream. [00282] In the above, some example embodiments have been described with the help of syntax of the bitstream. It needs to be understood, however, that the corresponding structure and/or computer program may reside at the encoder for generating the bitstream and/or at the decoder for decoding the bitstream.

[00283] In the above, where example embodiments have been described with reference to an encoder, it needs to be understood that the resulting bitstream and the decoder have corresponding elements in them. Likewise, where example embodiments have been described with reference to a decoder, it needs to be understood that the encoder has structure and/or computer program for generating the bitstream to be decoded by the decoder.

[00284] Many modifications and other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these inventions pertain having the benefit of the teachings presented in the foregoing descriptions and the associated drawings. Therefore, it is to be understood that the inventions are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Moreover, although the foregoing descriptions and the associated drawings describe example embodiments in the context of certain example combinations of elements and/or functions, it should be appreciated that different combinations of elements and/or functions may be provided by alternative embodiments without departing from the scope of the appended claims. In this regard, for example, different combinations of elements and/or functions than those explicitly described above are also contemplated as may be set forth in some of the appended claims. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims. Although specific terms are employed herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

[00285] It should be understood that the foregoing description is only illustrative. Various alternatives and modifications may be devised by those skilled in the art. For example, features recited in the various dependent claims may be combined with each other in any suitable combination(s). In addition, features from different embodiments described above may be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.