

Title:
METHOD AND DATA PROCESSING SYSTEM FOR LOSSY IMAGE OR VIDEO ENCODING, TRANSMISSION AND DECODING
Document Type and Number:
WIPO Patent Application WO/2023/118317
Kind Code:
A1
Abstract:
A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; decoding the quantized latent using a denoising process to produce an output image, wherein the output image is an approximation of the input image.

Inventors:
ZAFAR ARSALAN (GB)
XU JAN (GB)
BESENBRUCH CHRISTIAN (GB)
ABBASI BILAL (GB)
CHERGANSKI ALEKSANDAR (GB)
FINLAY CHRIS (GB)
ETMANN CHRISTIAN (GB)
Application Number:
PCT/EP2022/087271
Publication Date:
June 29, 2023
Filing Date:
December 21, 2022
Assignee:
DEEP RENDER LTD (GB)
International Classes:
H04N19/90; G06N3/0455; G06N3/047; G06N3/084
Domestic Patent References:
WO2021220008A1 (2021-11-04)
Other References:
SIWEI MA ET AL: "Image and Video Compression With Neural Networks: A Review", 10 April 2019, pages 1-16, XP055765818, retrieved from the Internet on 2021-01-15
CHITWAN SAHARIA ET AL: "Image Super-Resolution via Iterative Refinement", arXiv.org, Cornell University Library, 30 June 2021, XP081980663
RUIHAN YANG ET AL: "Lossy Image Compression with Conditional Diffusion Models", arXiv.org, Cornell University Library, 9 December 2022, XP091389979
Attorney, Agent or Firm:
WISE, Edward (GB)
Claims:
1. A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; decoding the quantized latent using a denoising process to produce an output image, wherein the output image is an approximation of the input image.

2. The method of claim 1, wherein the denoising process is performed by a trained denoising model.

3. The method of claim 2, wherein the trained denoising model is a second trained neural network.

4. The method of any previous claim, wherein the denoising process is an iterative process and includes a denoising function configured to predict a noise vector; wherein the denoising function receives as input an output of the previous iterative step, the data based on the latent representation and parameters describing a noise distribution; and the noise vector is applied to the output of the previous iterative step to obtain the output of the current iterative step.

5. The method of claim 4, wherein the parameters describing the noise distribution specify the variance of the noise distribution.

6. The method of claim 4 or claim 5, wherein the noise distribution is a Gaussian distribution.

7. The method of any previous claim, wherein the initial input to the denoising process is sampled from Gaussian noise.

8. The method of any previous claim, wherein the data based on the latent representation is upsampled prior to the application of the denoising process.

9. A method of training one or more models including neural networks, the one or more models being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a denoising model to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on the rate of the quantized latent; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network to update the parameters of the first neural network; repeating the above steps using a first set of training images to produce a first trained neural network.

10. The method of claim 9, wherein the loss function includes a denoising loss; and the denoising process includes a denoising function configured to predict a noise vector; wherein the denoising function receives as input the first input training image with added noise, the data based on the latent representation and parameters describing a noise distribution; the denoising loss is evaluated based on a difference between the predicted noise vector and the noise added to the first training image; and back-propagation of the gradient of the loss function is additionally performed through the denoising model to update the parameters of the denoising model to produce a trained denoising model.

11. The method of claim 9 or claim 10, wherein the loss function includes a distortion loss based on differences between the output image and the input training image.

12. A method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; and transmitting the quantized latent to a second computer system.

13. A method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent encoded according to the method of claim 12 at a second computer system; decoding the quantized latent using a denoising process to produce an output image, wherein the output image is an approximation of the input image.

14. A data processing system configured to perform the method of any one of claims 1 to 8.

15. A data processing apparatus configured to perform the method of claim 12 or 13.

16. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 12 or 13.

17. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 12 or 13.

18. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives the output image as an input and outputs one or more values associated with one or more sub-sections of the output image, wherein each value indicates the likelihood that the corresponding sub-section of the output image is a fake sub-section, and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

19. The method of claim 18, wherein the output of the neural network acting as a discriminator is converted to a probability distribution, wherein the value of the probability distribution is defined for each of the one or more sub-sections and is proportionate to the value indicating the likelihood that the corresponding sub-section of the output image is a fake sub-section.

20. The method of claim 19, wherein the conversion to a probability distribution is performed using a softmax function.

21. The method of any one of claims 18 to 20, further including the step of providing the one or more sub-sections of the output image to a neural network acting as a sub-discriminator; wherein the neural network acting as a sub-discriminator outputs one or more values associated with the one or more sub-sections of the output image, each value indicating the likelihood that the corresponding sub-section of the output image is a fake sub-section; and the differences between the output image and the input training image are additionally determined based on the output of the neural network acting as a sub-discriminator; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a sub-discriminator.

22. The method of claim 21, wherein the one or more sub-sections of the output image are determined by sampling the probability distribution.

23. The method of claim 21 or claim 22, wherein two to five sub-sections of the output image are provided to the neural network acting as a sub-discriminator, preferably wherein three sub-sections of the output image are provided.

24. The method of any one of claims 18 to 23, wherein the neural network acting as a discriminator additionally receives the quantized latent as an input.

25. The method of claim 19 or claim 20, further comprising the steps of, after the output of the neural network acting as a discriminator is converted to a probability distribution: sampling the probability distribution to select a sub-section of the output image; encoding the corresponding sub-section of the input image to the selected sub-section of the output image using the first neural network to produce a sub-latent representation; performing a quantization process on the sub-latent representation to produce a quantized sub-latent; decoding the quantized sub-latent using a second neural network to produce an output sub-image, wherein the output sub-image is an approximation of the sub-section of the input image; wherein the evaluation of the loss function and back propagation of the gradient of the loss function to update the parameters of the neural networks is performed based on the output sub-image and the sub-section of the input image.

26. A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the method of any one of claims 18 to 25.

27. A method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent; wherein the first trained neural network has been trained according to the method of any one of claims 18 to 25.

28. A method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method of claim 27 at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the method of any one of claims 18 to 25.

29. A data processing system configured to perform the method of any one of claims 18 to 26.

30. A data processing apparatus configured to perform the method of claim 27 or 28.

31. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 27 or 28.

32. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 27 or 28.

33. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is based on the first input training image, and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

34. The method of claim 33, wherein the input training image is additionally processed by a third trained neural network; and the additional input is the output of the third trained neural network.

35. The method of claim 33 or claim 34, wherein at least one of the layers of the neural network acting as a discriminator that receives an input based on the additional input is narrow with respect to the input training image.

36. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is based on an output of an intermediate layer of the first neural network or the second neural network; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

37. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image and the rate of the quantized latent; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is the rate of the quantized latent; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

38. The method of any one of claims 33 to 37, wherein the neural network acting as a discriminator provides a first output using the input training image as an input and a second output using the output image as an input; and the additional input is used as an input when generating each of the first output and the second output.

39. The method of any one of claims 33 to 38, wherein the neural network acting as a discriminator receives an input in which the additional input is channel concatenated with the input training image or the output image.

40. The method of any one of claims 33 to 38, wherein the parameters of the neural network acting as a discriminator are determined by the output of a fourth neural network that receives the additional input as an input.

41. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a plurality of neural networks each acting as a discriminator, wherein the best performing discriminator is selected to perform the determination; and back-propagation of the gradient of the loss function is additionally used to update the parameters of each of the plurality of neural networks acting as a discriminator.

42. The method of claim 41, wherein the best performing discriminator is selected based on either a minimal or maximal objective score.

43. A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the method of any one of claims 33 to 42.

44. A method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent; wherein the first trained neural network has been trained according to the method of any one of claims 33 to 42.

45. A method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method of claim 44 at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the method of any one of claims 33 to 42.

46. A data processing system configured to perform the method of any one of claims 33 to 43.

47. A data processing apparatus configured to perform the method of claim 44 or 45.

48. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 44 or 45.

49. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 44 or 45.

50. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving, encoding, transmitting and decoding a first input training image to produce an output image using the one or more neural networks, wherein the output image is an approximation of the input training image; updating the parameters of the one or more neural networks based on differences between the output image and the input training image; and repeating the above steps using a first set of training images to produce one or more trained neural networks; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator comprises a convolutional layer; the convolutional layer comprises a first convolutional filter, wherein the norm of the first convolutional filter is set to a predetermined value greater than or less than one; and the differences between the output image and the input training image are additionally used to update the parameters of the neural network acting as a discriminator.

51. The method of claim 50, wherein the neural network acting as a discriminator comprises a further convolutional layer comprising a second convolutional filter; wherein the norm of the second convolutional filter is set to a predetermined value greater than or less than one and different to the norm of the first convolutional filter.

52. The method of claim 50 or 51, wherein the predetermined values of the one or more convolutional filters are hyperparameters of the neural network acting as a discriminator.

53. The method of any one of claims 50 to 52, wherein the encoding of the first input training image is performed using a first neural network to produce a latent representation; a quantization process is performed on the latent representation to produce a quantized latent; and the decoding is performed by decoding the quantized latent using a second neural network to produce the output image.

54. A method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the loss function comprises a penalty term based on a norm associated with the neural network acting as a discriminator; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

55. The method of claim 54, wherein the norm is based on a norm of the Jacobian of the output of the neural network acting as a discriminator.

56. The method of claim 55, wherein the norm is the Frobenius norm of the Jacobian.

57. The method of any one of claims 54 to 56, wherein the neural network acting as a discriminator is a patch discriminator; and the penalty term based on a norm is based on a sum of the norms associated with each patch of the patch discriminator.

58. The method of claim 56 or 57, wherein the Frobenius norm of the Jacobian is calculated using a set of randomly sampled vectors.

59. The method of claim 58, wherein the vectors are sampled from a normal distribution.

60. The method of claim 58 or claim 59, wherein the number of randomly sampled vectors is 1.

61. The method of any one of claims 58 to 60, wherein the Frobenius norm of the Jacobian is calculated using the vector-Jacobian product.

62. The method of any one of claims 58 to 60, wherein the Frobenius norm of the Jacobian is calculated using a finite difference method.

63. A method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the method of any one of claims 50 to 62.

64. A method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent; wherein the first trained neural network has been trained according to the method of any one of claims 50 to 62.

65. A method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method of claim 64 at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the method of any one of claims 50 to 62.

66. A data processing system configured to perform the method of any one of claims 50 to 63.

67. A data processing apparatus configured to perform the method of claim 64 or 65.

68. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 64 or 65.

69. A computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method of claim 64 or 65.

Description:
Method and data processing system for lossy image or video encoding, transmission and decoding

This invention relates to a method and system for lossy image or video encoding, transmission and decoding, a method, apparatus, computer program and computer readable storage medium for lossy image or video encoding and transmission, and a method, apparatus, computer program and computer readable storage medium for lossy image or video receipt and decoding.

There is increasing demand from users of communications networks for images and video content. Demand is increasing not only for the number of images viewed and the playing time of video, but also for higher resolution content. This places increasing demand on communications networks and increases their energy use because of the larger amount of data being transmitted.

To reduce the impact of these issues, image and video content is compressed for transmission across the network. The compression of image and video content can be lossless or lossy. In lossless compression, the image or video is compressed such that all of the original information in the content can be recovered on decompression. However, when using lossless compression there is a limit to the reduction in data quantity that can be achieved. In lossy compression, some information is lost from the image or video during the compression process. Known compression techniques attempt to minimise the apparent loss of information by removing information that results in changes to the decompressed image or video that are not particularly noticeable to the human visual system.

Artificial intelligence (AI) based compression techniques achieve compression and decompression of images and videos through the use of trained neural networks in the compression and decompression process. Typically, during training of the neural networks, the difference between the original image or video and the compressed and decompressed image or video is analyzed and the parameters of the neural networks are modified to reduce this difference while minimizing the data required to transmit the content. However, AI based compression methods may achieve poor compression results in terms of the appearance of the compressed image or video or the amount of information required to be transmitted.

According to the present invention there is provided a method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; decoding the quantized latent using a denoising process to produce an output image, wherein the output image is an approximation of the input image.

The denoising process may be performed by a trained denoising model.
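
Purely by way of illustration, the encode-quantize-transmit stage of this pipeline can be sketched as follows in PyTorch. The encoder architecture, channel counts and names are assumptions made for the sketch, not details taken from this application; the denoising decoder is sketched separately below.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Illustrative "first trained neural network": image -> latent representation.
    def __init__(self, in_channels=3, latent_channels=192):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 128, kernel_size=5, stride=2, padding=2),
            nn.ReLU(),
            nn.Conv2d(128, latent_channels, kernel_size=5, stride=2, padding=2),
        )

    def forward(self, x):
        return self.net(x)

def quantize(latent):
    # One common quantization process: round each latent value to an integer.
    return torch.round(latent)

# First computer system: produce the quantized latent to be transmitted.
encoder = Encoder()
input_image = torch.rand(1, 3, 256, 256)
quantized_latent = quantize(encoder(input_image))
# Second computer system: the quantized latent is decoded with a denoising
# process (see the sampling loop sketched below) into an output image that
# approximates the input image.
```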

The trained denoising model may be a second trained neural network.

The denoising process may be an iterative process and may include a denoising function configured to predict a noise vector; wherein the denoising function receives as input an output of the previous iterative step, the data based on the latent representation and parameters describing a noise distribution; and the noise vector is applied to the output of the previous iterative step to obtain the output of the current iterative step.

The parameters describing the noise distribution may specify the variance of the noise distribution.

The noise distribution may be a Gaussian distribution.

The initial input to the denoising process may be sampled from Gaussian noise.

The data based on the latent representation may be upsampled prior to the application of the denoising process.
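
A minimal sketch of one possible iterative denoising loop, in the style of DDPM ancestral sampling, is given below; the `eps_model` network, the beta schedule and the conditioning interface are assumptions, since the application does not fix a particular denoising model. The loop starts from Gaussian noise and, at each step, predicts a noise vector from the previous output, the (optionally upsampled) latent-based data, and the step index that selects the noise variance.

```python
import torch

@torch.no_grad()
def denoise(eps_model, latent_data, shape, betas):
    # betas: per-step noise variances (the "parameters describing a noise
    # distribution"); alphas and alpha_bar follow the usual DDPM definitions.
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    x = torch.randn(shape)  # initial input sampled from Gaussian noise
    for t in reversed(range(len(betas))):
        # Denoising function: predicts a noise vector from the previous
        # iterate, the latent-based data and the noise-distribution step t.
        eps = eps_model(x, latent_data, t)
        # Apply the predicted noise vector to obtain the next iterate.
        x = (x - betas[t] / torch.sqrt(1.0 - alpha_bar[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)
    return x  # output image approximating the input image
```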

According to the present invention there is provided a method of training one or more models including neural networks, the one or more models being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a denoising model to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on the rate of the quantized latent; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network to update the parameters of the first neural network; repeating the above steps using a first set of training images to produce a first trained neural network.

The loss function may include a denoising loss; and the denoising process may include a denoising function configured to predict a noise vector; wherein the denoising function receives as input the first input training image with added noise, the data based on the latent representation and parameters describing a noise distribution; the denoising loss is evaluated based on a difference between the predicted noise vector and the noise added to the first training image; and back-propagation of the gradient of the loss function is additionally performed through the denoising model to update the parameters of the denoising model to produce a trained denoising model.

The loss function may include a distortion loss based on differences between the output image and the input training image.
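
To make the training procedure concrete, the following is a hedged sketch of a single optimisation step combining a rate loss on the quantized latent with the denoising loss described above; a distortion loss could be added in the same way. The `entropy_model` rate estimate, the straight-through rounding and the unit loss weighting are assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def training_step(encoder, eps_model, entropy_model, optimizer, image, alpha_bar):
    latent = encoder(image)
    # Straight-through rounding: quantize on the forward pass while letting
    # gradients flow back to the encoder during training.
    quantized = latent + (torch.round(latent) - latent).detach()
    rate_loss = entropy_model(quantized)  # estimated rate of the quantized latent

    # Denoising loss: add noise to the training image and compare the
    # predicted noise vector with the noise that was actually added.
    t = torch.randint(0, len(alpha_bar), (1,)).item()
    noise = torch.randn_like(image)
    noisy = torch.sqrt(alpha_bar[t]) * image + torch.sqrt(1.0 - alpha_bar[t]) * noise
    denoising_loss = F.mse_loss(eps_model(noisy, quantized, t), noise)

    loss = rate_loss + denoising_loss  # weighting factors omitted (assumption)
    optimizer.zero_grad()
    loss.backward()   # back-propagate through encoder and denoising model
    optimizer.step()  # update the parameters of both models
    return float(loss)
```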

According to the present invention there is provided a method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; and transmitting the quantized latent to a second computer system.

According to the present invention there is provided a method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent encoded according to the method for lossy image or video encoding and transmission above at a second computer system; decoding the quantized latent using a denoising process to produce an output image, wherein the output image is an approximation of the input image.

According to the present invention there is provided a data processing system configured to perform the method for lossy image or video encoding, transmission and decoding above.

According to the present invention there is provided a data processing apparatus configured to perform the method for lossy image or video encoding and transmission or the method for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or the method for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or the method for lossy image or video receipt and decoding above.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives the output image as an input and outputs one or more values associated with one or more sub-sections of the output image, wherein each value indicates the likelihood that the corresponding sub-section of the output image is a fake sub-section, and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

The output of the neural network acting as a discriminator may be converted to a probability distribution, wherein the value of the probability distribution is defined for each of the one or more sub-sections and is proportionate to the value indicating the likelihood that the corresponding sub-section of the output image is a fake sub-section.

The conversion to a probability distribution may be performed using a softmax function.
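
As a small illustration of this conversion, assuming the discriminator emits one raw value per sub-section (for example a PatchGAN-style grid of logits, which is an assumption, not a detail from this application):

```python
import torch

def subsection_distribution(patch_logits):
    # patch_logits: one value per sub-section of the output image, each
    # indicating the likelihood that the sub-section is fake. Softmax makes
    # the values a probability distribution proportionate to those values.
    return torch.softmax(patch_logits.flatten(), dim=0)

probs = subsection_distribution(torch.randn(8, 8))  # 64 sub-sections
assert torch.isclose(probs.sum(), torch.tensor(1.0))
```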

The method may further include the step of providing the one or more sub-sections of the output image to a neural network acting as a sub-discriminator; wherein the neural network acting as a sub-discriminator outputs one or more values associated with the one or more sub-sections of the output image, each value indicating the likelihood that the corresponding sub-section of the output image is a fake sub-section; and the differences between the output image and the input training image are additionally determined based on the output of the neural network acting as a sub-discriminator; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a sub-discriminator.

The one or more sub-sections of the output image may be determined by sampling the probability distribution.

Two to five sub-sections of the output image may be provided to the neural network acting as a sub-discriminator, preferably three sub-sections of the output image may be provided.

The neural network acting as a discriminator may additionally receive the quantized latent as an input.

The method may further comprise the steps of, after the output of the neural network acting as a discriminator is converted to a probability distribution: sampling the probability distribution to select a sub-section of the output image; encoding the corresponding sub-section of the input image to the selected sub-section of the output image using the first neural network to produce a sub-latent representation; performing a quantization process on the sub-latent representation to produce a quantized sub-latent; decoding the quantized sub-latent using a second neural network to produce an output sub-image, wherein the output sub-image is an approximation of the sub-section of the input image; wherein the evaluation of the loss function and back propagation of the gradient of the loss function to update the parameters of the neural networks is performed based on the output sub-image and the sub-section of the input image.
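
A sketch of how the probability distribution might be sampled and the corresponding sub-sections cropped for the sub-discriminator follows; the row-major grid layout and the fixed patch size are assumptions made for the sketch.

```python
import torch

def sample_subsections(probs, image, patch_size, k=3):
    # Draw k sub-section indices from the probability distribution (three
    # sub-sections being the preferred number above) and crop the patches.
    n_cols = image.shape[-1] // patch_size
    idx = torch.multinomial(probs, num_samples=k, replacement=False)
    crops = []
    for i in idx.tolist():
        row, col = divmod(i, n_cols)
        crops.append(image[..., row * patch_size:(row + 1) * patch_size,
                           col * patch_size:(col + 1) * patch_size])
    # The crops are scored by the sub-discriminator; a selected sub-section
    # of the input image can likewise be re-encoded to a quantized sub-latent.
    return torch.stack(crops)
```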

According to the present invention there is provided a method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the method of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent; wherein the first trained neural network has been trained according to the method of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method for lossy image or video encoding and transmission above at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the method of training one or more neural networks above.

According to the present invention there is provided a data processing system configured to perform the method of training one or more neural networks or the method for lossy image or video encoding, transmission and decoding above.

According to the present invention there is provided a data processing apparatus configured to perform the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding described above.

According to the present invention there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding described above.

According to the present invention there is provided a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding described above.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is based on the first input training image, and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

The input training image may be additionally processed by a third trained neural network; and the additional input may be the output of the third trained neural network.

At least one of the layers of the neural network acting as a discriminator that receives an input based on the additional input may be narrow with respect to the input training image.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is based on an output of an intermediate layer of the first neural network or the second neural network; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image and the rate of the quantized latent; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator receives an additional input; the additional input is the rate of the quantized latent; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

The neural network acting as a discriminator may provide a first output using the input training image as an input and a second output using the output image as an input; and the additional input may be used as an input when generating each of the first output and the second output.

The neural network acting as a discriminator may receive an input in which the additional input is channel concatenated with the input training image or the output image.
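
For the channel-concatenation variant, a sketch is given below; the resizing of the additional input for shape compatibility is an assumption, as the application does not specify how shapes are matched.

```python
import torch
import torch.nn.functional as F

def conditioned_input(image, additional):
    # Resize the additional input (e.g. a feature map derived from the input
    # training image) to the image's spatial size, then channel-concatenate.
    additional = F.interpolate(additional, size=image.shape[-2:], mode="nearest")
    return torch.cat([image, additional], dim=1)

# The same additional input conditions both discriminator outputs: one for
# the input training image ("real") and one for the output image ("fake").
```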

The parameters of the neural network acting as a discriminator may be determined by the output of a fourth neural network that receives the additional input as an input.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a plurality of neural networks each acting as a discriminator, wherein the best performing discriminator is selected to perform the determination; and back-propagation of the gradient of the loss function is additionally used to update the parameters of each of the plurality of neural networks acting as a discriminator.

The best performing discriminator may be selected based on either a minimal or maximal objective score.
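
One way the selection could be realised is sketched below, assuming each discriminator has been assigned a scalar objective score on the current batch; whether "best" means the minimal or the maximal score depends on the chosen objective.

```python
def select_best_discriminator(discriminators, scores, maximal=True):
    # scores: one objective score per discriminator on the current batch.
    # All discriminators are still updated by back-propagation; only the
    # best performer is used to determine the image differences.
    best = max(range(len(scores)),
               key=lambda i: scores[i] if maximal else -scores[i])
    return discriminators[best]
```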

According to the present invention there is provided a method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent; wherein the first trained neural network has been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method for lossy image or video encoding and transmission above at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a data processing system configured to perform the methods of training one or more neural networks above.

According to the present invention there is provided a data processing apparatus configured to perform the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving, encoding, transmitting and decoding a first input training image to produce an output image using the one or more neural networks, wherein the output image is an approximation of the input training image; updating the parameters of the one or more neural networks based on differences between the output image and the input training image; and repeating the above steps using a first set of training images to produce one or more trained neural networks; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the neural network acting as a discriminator comprises a convolutional layer; the convolutional layer comprises a first convolutional filter, wherein the norm of the first convolutional filter is set to a predetermined value greater than or less than one; and the differences between the output image and the input training image are additionally used to update the parameters of the neural network acting as a discriminator.

The neural network acting as a discriminator may comprise a further convolutional layer comprising a second convolutional filter; wherein the norm of the second convolutional filter is set to a predetermined value greater than or less than one and different to the norm of the first convolutional filter.

The predetermined values of the one or more convolutional filters may be hyperparameters of the neural network acting as a discriminator.

The encoding of the first input training image may be performed using a first neural network to produce a latent representation; a quantization process may be performed on the latent representation to produce a quantized latent; and the decoding may be performed by decoding the quantized latent using a second neural network to produce the output image.

According to the present invention there is provided a method of training one or more neural networks, the one or more neural networks being for use in lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving a first input training image; encoding the first input training image using a first neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; decoding the quantized latent using a second neural network to produce an output image, wherein the output image is an approximation of the input training image; evaluating a loss function based on differences between the output image and the input training image; evaluating a gradient of the loss function; back-propagating the gradient of the loss function through the first neural network and the second neural network to update the parameters of the first neural network and the second neural network; and repeating the above steps using a first set of training images to produce a first trained neural network and a second trained neural network; wherein the differences between the output image and the input training image are determined based on the output of a neural network acting as a discriminator; the loss function comprises a penalty term based on a norm associated with the neural network acting as a discriminator; and back-propagation of the gradient of the loss function is additionally used to update the parameters of the neural network acting as a discriminator.

The norm may be based on a norm of the Jacobian of the output of the neural network acting as a discriminator.

The norm may be the Frobenius norm of the Jacobian.

The neural network acting as a discriminator may be a patch discriminator; and the penalty term based on a norm may be based on a sum of the norms associated with each patch of the patch discriminator.

The Frobenius norm of the Jacobian may be calculated using a set of randomly sampled vectors.

The vectors may be sampled from a normal distribution.

The number of randomly sampled vectors may be 1.

The Frobenius norm of the Jacobian may be calculated using the vector-Jacobian product.

The Frobenius norm of the Jacobian may be calculated using a finite difference method.

According to the present invention there is provided a method for lossy image or video encoding, transmission and decoding, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; transmitting the quantized latent to a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the first trained neural network and the second trained neural network have been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video encoding and transmission, the method comprising the steps of: receiving an input image at a first computer system; encoding the input image using a first trained neural network to produce a latent representation; performing a quantization process on the latent representation to produce a quantized latent; and transmitting the quantized latent; wherein the first trained neural network has been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a method for lossy image or video receipt and decoding, the method comprising the steps of: receiving the quantized latent transmitted according to the method for lossy image or video encoding and transmission above at a second computer system; and decoding the quantized latent using a second trained neural network to produce an output image, wherein the output image is an approximation of the input image; wherein the second trained neural network has been trained according to the methods of training one or more neural networks above.

According to the present invention there is provided a data processing system configured to perform the methods of training one or more neural networks above.

According to the present invention there is provided a data processing apparatus configured to perform the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

According to the present invention there is provided a computer-readable storage medium comprising instructions which, when executed by a computer, cause the computer to carry out the method for lossy image or video encoding and transmission or for lossy image or video receipt and decoding above.

Aspects of the invention will now be described by way of examples, with reference to the following figures in which:

Figure 1 illustrates an example of an image or video compression, transmission and decompression pipeline.

Figure 2 illustrates a further example of an image or video compression, transmission and decompression pipeline including a hyper-network.

Figure 3 illustrates a pipeline for AI based compression using conditional denoising decoders (CDDs). x_0 represents the image to be encoded, x̂_0 represents the reconstructed image, and ŷ is the quantised latent space.

Figure 4 illustrates an encoding pipeline. x_0 represents the image to be encoded, and ŷ is the quantised latent space.

Figure 5 illustrates a decoding pipeline. x̂_0 represents the reconstructed image, and ŷ is the quantised latent space.

Figure 6 illustrates an example architecture of a denoising model.

Figures 7 to 10 illustrate examples of decoded images using the CDD pipeline.

Figure 11 illustrates an example of an input image.

Figure 12 illustrates on the left an example of a discriminator applied to the example input image of figure 11, in the centre an example of the discriminator applied to a predicted image and on the right an example of a probability mass function.

Figure 13 illustrates crops taken from the example input image of figure 11.

Compression processes may be applied to any form of information to reduce the amount of data, or file size, required to store that information. Image and video information is an example of information that may be compressed. The file size required to store the information, particularly when referring to the compressed file, may be referred to as the rate. In general, compression can be lossless or lossy. In both forms of compression, the file size is reduced. However, in lossless compression, no information is lost when the information is compressed and subsequently decompressed. This means that the original file storing the information is fully reconstructed during the decompression process. In contrast to this, in lossy compression information may be lost in the compression and decompression process and the reconstructed file may differ from the original file. Image and video files containing image and video data are common targets for compression. JPEG, JPEG2000, AVC, HEVC and AV1 are examples of compression processes for image and/or video files.

In a compression process involving an image, the input image may be represented as x. The data representing the image may be stored in a tensor of dimensions H x W x C, where H represents the height of the image, W represents the width of the image and C represents the number of channels of the image. Each H x W data point of the image represents a pixel value of the image at the corresponding location. Each channel C of the image represents a different component of the image for each pixel which are combined when the image file is displayed by a device. For example, an image file may have 3 channels with the channels representing the red, green and blue component of the image respectively. In this case, the image information is stored in the RGB colour space, which may also be referred to as a model or a format. Other examples of colour spaces or formats include the CMYK and the YCbCr colour models. However, the channels of an image file are not limited to storing colour information and other information may be represented in the channels. As a video may be considered a series of images in sequence, any compression process that may be applied to an image may also be applied to a video. Each image making up a video may be referred to as a frame of the video.

The output image may differ from the input image and may be represented by x̂. The difference between the input image and the output image may be referred to as distortion or a difference in image quality. The distortion can be measured using any distortion function which receives the input image and the output image and provides an output which represents the difference between the input image and the output image in a numerical way. An example of such a method is using the mean square error (MSE) between the pixels of the input image and the output image, but there are many other ways of measuring distortion, as will be known to the person skilled in the art. The distortion function may comprise a trained neural network.

Typically, the rate and distortion of a lossy compression process are related. An increase in the rate may result in a decrease in the distortion, and a decrease in the rate may result in an increase in the distortion. Changes to the distortion may affect the rate in a corresponding manner. A relation between these quantities for a given compression technique may be defined by a rate-distortion equation.

Al based compression processes may involve the use of neural networks. A neural network is an operation that can be performed on an input to produce an output. A neural network may be made up of a plurality of layers. The first layer of the network receives the input. One or more operations may be performed on the input by the layer to produce an output of the first layer. The output of the first layer is then passed to the next layer of the network which may perform one or more operations in a similar way. The output of the final layer is the output of the neural network.

Each layer of the neural network may be divided into nodes. Each node may receive at least part of the input from the previous layer and provide an output to one or more nodes in a subsequent layer. Each node of a layer may perform the one or more operations of the layer on at least part of the input to the layer. For example, a node may receive an input from one or more nodes of the previous layer. The one or more operations may include a convolution, a weight, a bias and an activation function. Convolution operations are used in convolutional neural networks. When a convolution operation is present, the convolution may be performed across the entire input to a layer. Alternatively, the convolution may be performed on at least part of the input to the layer.

Each of the one or more operations is defined by one or more parameters that are associated with each operation. For example, the weight operation may be defined by a weight matrix defining the weight to be applied to each input from each node in the previous layer to each node in the present layer. In this example, each of the values in the weight matrix is a parameter of the neural network. The convolution may be defined by a convolution matrix, also known as a kernel. In this example, one or more of the values in the convolution matrix may be a parameter of the neural network. The activation function may also be defined by values which may be parameters of the neural network. The parameters of the network may be varied during training of the network.

Other features of the neural network may be predetermined and therefore not varied during training of the network. For example, the number of layers of the network, the number of nodes of the network, the one or more operations performed in each layer and the connections between the layers may be predetermined and therefore fixed before the training process takes place. These features that are predetermined may be referred to as the hyperparameters of the network. These features are sometimes referred to as the architecture of the network.

To train the neural network, a training set of inputs may be used for which the expected output, sometimes referred to as the ground truth, is known. The initial parameters of the neural network are randomized and the first training input is provided to the network. The output of the network is compared to the expected output and, based on a difference between the output and the expected output, the parameters of the network are varied such that the difference between the output of the network and the expected output is reduced. This process is then repeated for a plurality of training inputs to train the network. The difference between the output of the network and the expected output may be defined by a loss function. The result of the loss function may be calculated using the difference between the output of the network and the expected output to determine the gradient of the loss function. Back-propagation of the gradient of the loss function may be used to update the parameters of the neural network using the gradients ∂L/∂θ of the loss function. A plurality of neural networks in a system may be trained simultaneously through back-propagation of the gradient of the loss function to each network.

In the case of AI based image or video compression, the loss function may be defined by the rate distortion equation. The rate distortion equation may be represented by Loss = D + λ*R, where D is the distortion function, λ is a weighting factor, and R is the rate loss. λ may be referred to as a lagrange multiplier. The lagrange multiplier provides a weight for a particular term of the loss function in relation to each other term and can be used to control which terms of the loss function are favoured when training the network.

In the case of AI based image or video compression, a training set of input images may be used. An example training set of input images is the KODAK image set (for example at www.cs.albany.edu/ xypan/research/snr/Kodak.html). An example training set of input images is the IMAX image set. An example training set of input images is the Imagenet dataset (for example at www.image-net.org/download). An example training set of input images is the CLIC Training Dataset P ("professional") and M ("mobile") (for example at http://challenge.compression.cc/tasks/).

An example of an AI based compression process 100 is shown in Figure 1. As a first step in the AI based compression process, an input image 5 is provided. The input image 5 is provided to a trained neural network 110 characterized by a function f_θ acting as an encoder. The encoder neural network 110 produces an output based on the input image. This output is referred to as a latent representation of the input image 5. In a second step, the latent representation is quantised in a quantisation process 140 characterised by the operation Q, resulting in a quantized latent. The quantisation process transforms the continuous latent representation into a discrete quantized latent. An example of a quantization process is a rounding function. In a third step, the quantized latent is entropy encoded in an entropy encoding process 150 to produce a bitstream 130. The entropy encoding process may be, for example, range or arithmetic encoding. In a fourth step, the bitstream 130 may be transmitted across a communication network. In a fifth step, the bitstream is entropy decoded in an entropy decoding process 160.
The quantized latent is provided to another trained neural network 120 characterized by a function g_θ acting as a decoder, which decodes the quantized latent. The trained neural network 120 produces an output based on the quantized latent. The output may be the output image of the AI based compression process 100. The encoder-decoder system may be referred to as an autoencoder.

The system described above may be distributed across multiple locations and/or devices. For example, the encoder 110 may be located on a device such as a laptop computer, desktop computer, smart phone or server. The decoder 120 may be located on a separate device which may be referred to as a recipient device. The system used to encode, transmit and decode the input image 5 to obtain the output image 6 may be referred to as a compression pipeline.

The AI based compression process may further comprise a hyper-network 105 for the transmission of meta-information that improves the compression process. The hyper-network 105 comprises a trained neural network 115 acting as a hyper-encoder and a trained neural network 125 acting as a hyper-decoder. An example of such a system is shown in Figure 2. Components of the system not further discussed may be assumed to be the same as discussed above. The neural network 115 acting as a hyper-encoder receives the latent that is the output of the encoder 110. The hyper-encoder 115 produces an output based on the latent representation that may be referred to as a hyper-latent representation. The hyper-latent is then quantized in a quantization process 145 characterised by Q_h to produce a quantized hyper-latent. The quantization process 145 characterised by Q_h may be the same as the quantisation process 140 characterised by Q discussed above.

In a similar manner as discussed above for the quantized latent, the quantized hyper-latent is then entropy encoded in an entropy encoding process 155 to produce a bitstream 135. The bitstream 135 may be entropy decoded in an entropy decoding process 165 to retrieve the quantized hyper-latent. The quantized hyper-latent is then used as an input to the trained neural network 125 acting as a hyper-decoder. However, in contrast to the compression pipeline 100, the output of the hyper-decoder may not be an approximation of the input to the hyper-encoder 115. Instead, the output of the hyper-decoder is used to provide parameters for use in the entropy encoding process 150 and entropy decoding process 160 in the main compression process 100. For example, the output of the hyper-decoder 125 can include one or more of the mean, standard deviation, variance or any other parameter used to describe a probability model for the entropy encoding process 150 and entropy decoding process 160 of the latent representation. In the example shown in Figure 2, only a single entropy decoding process 165 and hyper-decoder 125 is shown for simplicity. However, in practice, as the decompression process usually takes place on a separate device, duplicates of these processes will be present on the device used for encoding to provide the parameters to be used in the entropy encoding process 150.
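By way of illustration, the pipeline described above may be summarised in code. The following is a minimal sketch in PyTorch, assuming a simple convolutional encoder and decoder, a straight-through rounding quantiser and a crude stand-in for the entropy-model rate; the module names, layer sizes and the rate proxy are illustrative assumptions rather than the pipeline of Figures 1 and 2.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Autoencoder(nn.Module):
        # Minimal sketch of the encoder/quantiser/decoder pipeline.
        def __init__(self, channels=3, latent_channels=192):
            super().__init__()
            self.encoder = nn.Sequential(  # f_theta
                nn.Conv2d(channels, 128, 5, stride=2, padding=2), nn.ReLU(),
                nn.Conv2d(128, latent_channels, 5, stride=2, padding=2))
            self.decoder = nn.Sequential(  # g_theta
                nn.ConvTranspose2d(latent_channels, 128, 5, stride=2,
                                   padding=2, output_padding=1), nn.ReLU(),
                nn.ConvTranspose2d(128, channels, 5, stride=2,
                                   padding=2, output_padding=1))

        def quantize(self, y):
            # Rounding quantiser Q with a straight-through gradient estimate,
            # so that the rounding step does not block back-propagation.
            return y + (torch.round(y) - y).detach()

        def forward(self, x):
            y = self.encoder(x)          # latent representation
            y_hat = self.quantize(y)     # quantized latent
            x_hat = self.decoder(y_hat)  # output image
            return x_hat, y_hat

    # One rate-distortion training step: Loss = D + lambda * R.
    model = Autoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    x = torch.rand(8, 3, 64, 64)            # stand-in for a batch of training images
    x_hat, y_hat = model(x)
    distortion = F.mse_loss(x_hat, x)       # D, here the MSE distortion function
    rate_proxy = y_hat.abs().mean()         # crude stand-in for the entropy-model rate R
    loss = distortion + 0.01 * rate_proxy   # 0.01 plays the role of the lagrange multiplier
    opt.zero_grad()
    loss.backward()
    opt.step()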

Further transformations may be applied to at least one of the latent and the hyper-latent at any stage in the AI based compression process 100. For example, at least one of the latent and the hyper-latent may be converted to a residual value before the entropy encoding process 150, 155 is performed. The residual value may be determined by subtracting the mean value of the distribution of latents or hyper-latents from each latent or hyper-latent. The residual values may also be normalised.
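As an illustration of the residual transformation, the following sketch assumes the mean and scale parameters are produced by an entropy model such as the hyper-decoder above; the function names are illustrative.

    import torch

    def to_residual(y_hat, mu, sigma):
        # Convert a quantized latent to a residual before entropy encoding:
        # subtract the predicted mean and optionally normalise by the scale.
        return (y_hat - mu) / sigma

    def from_residual(residual, mu, sigma):
        # Inverse transform applied after entropy decoding.
        return residual * sigma + mu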

To perform training of the AI based compression process described above, a training set of input images may be used as described above. During the training process, the parameters of both the encoder 110 and the decoder 120 may be simultaneously updated in each training step. If a hyper-network 105 is also present, the parameters of both the hyper-encoder 115 and the hyper-decoder 125 may additionally be simultaneously updated in each training step.

The training process may further include a generative adversarial network (GAN). When applied to an AI based compression process, in addition to the compression pipeline described above, an additional neural network acting as a discriminator is included in the system. The discriminator receives an input and outputs a score based on the input providing an indication of whether the discriminator considers the input to be ground truth or fake. For example, the indication may be a score, with a high score associated with a ground truth input and a low score associated with a fake input. For training of a discriminator, a loss function is used that maximizes the difference in the output indication between a ground truth input and a fake input.

When a GAN is incorporated into the training of the compression process, the output image 6 may be provided to the discriminator. The output of the discriminator may then be used in the loss function of the compression process as a measure of the distortion of the compression process. Alternatively, the discriminator may receive both the input image 5 and the output image 6 and the difference in output indication may then be used in the loss function of the compression process as a measure of the distortion of the compression process. Training of the neural network acting as a discriminator and the other neural networks in the compression process may be performed simultaneously. During use of the trained compression pipeline for the compression and transmission of images or video, the discriminator neural network is removed from the system and the output of the compression pipeline is the output image 6.
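By way of illustration, a discriminator update and the corresponding adversarial distortion term might be sketched as follows; the discriminator architecture and the image shapes are illustrative assumptions.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Minimal sketch of GAN-style training for the compression pipeline.
    disc = nn.Sequential(
        nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        nn.Conv2d(64, 1, 4, stride=2, padding=1), nn.Sigmoid())

    x = torch.rand(4, 3, 64, 64)       # input image 5 (ground truth)
    x_hat = torch.rand(4, 3, 64, 64)   # output image 6 from the pipeline

    # Discriminator step: maximise the gap between real and fake scores.
    real, fake = disc(x), disc(x_hat.detach())
    disc_loss = F.binary_cross_entropy(real, torch.ones_like(real)) \
              + F.binary_cross_entropy(fake, torch.zeros_like(fake))

    # Generator step: the discriminator's score on the output image is used
    # as a measure of distortion in the compression loss.
    adv_distortion = F.binary_cross_entropy(disc(x_hat), torch.ones_like(fake))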

Incorporation of a GAN into the training process may cause the decoder 120 to perform hallucination. Hallucination is the process of adding information in the output image 6 that was not present in the input image 5. In an example, hallucination may add fine detail to the output image 6 that was not present in the input image 5 or received by the decoder 120. The hallucination performed may be based on information in the quantized latent received by decoder 120.

As discussed above, a video is made up of a series of images arranged in sequential order. The AI based compression process 100 described above may be applied multiple times to perform compression, transmission and decompression of a video. For example, each frame of the video may be compressed, transmitted and decompressed individually. The received frames may then be grouped to obtain the original video.

A number of concepts related to the AI compression processes discussed above will now be described. Although each concept is described separately, one or more of the concepts described below may be applied in an AI based compression process as described above.

Diffusion decoders

Diffusion models are a class of generative model where, in the training process, we incrementally add noise to a sample/image and learn a function (the denoising function) that learns to remove this noise. In the reverse/generative process, we denoise the sample, starting from a sample of a standard normal. Some aspects of diffusion models, such as the forward process or the sampling process, will not be discussed in detail, as these are explained in "Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239, 2020" and "Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. arXiv preprint arXiv:2104.07636, 2021", which are hereby incorporated by reference. The application of diffusion models to an AI based compression pipeline as discussed above is set out below.

The decoder in the encoder-decoder pipeline as discussed above may be replaced with a conditional diffusion decoder (CDD). An example of a CDD is described in "Chitwan Saharia, Jonathan Ho, William Chan, Tim Salimans, David J Fleet, and Mohammad Norouzi. Image super-resolution via iterative refinement. arXiv preprint arXiv:2104.07636, 2021". The aim of the CDD when applied in an AI based compression pipeline is to reconstruct the input image given the quantized latents over some number of timesteps T, starting from a random sample. The random sample may be a sample from a standard normal. When applied in an AI based compression pipeline, the random sample may be additionally conditioned with the received latent representation. This is done through iteratively removing noise from the previous sample x_t to get x_{t-1} until we reach x_0, which is our image to be decoded, as sketched below. In an example, the initial input to the CDD is a sample from a standard normal, which may be further conditioned with the latent representation. The latent representation may be upsampled prior to being used to condition the CDD.
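The iterative denoising loop may be sketched as follows. This is a minimal DDPM-style reverse pass, assuming a trained noise-prediction network eps_model conditioned on the upsampled quantized latent and an illustrative linear noise schedule; it is not a reproduction of Algorithm 2.

    import torch

    # Minimal sketch of the CDD reverse (decoding) pass. eps_model is an
    # assumed trained denoising network that predicts the added noise given
    # the current sample x_t, the timestep t and the upsampled latent y_up.
    T = 1000
    betas = torch.linspace(1e-4, 0.02, T)        # illustrative noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    def decode(eps_model, y_up, shape):
        x_t = torch.randn(shape)                 # start from standard normal noise
        for t in reversed(range(T)):
            eps = eps_model(x_t, t, y_up)        # predicted noise vector
            coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
            mean = (x_t - coef * eps) / torch.sqrt(alphas[t])
            noise = torch.randn(shape) if t > 0 else torch.zeros(shape)
            x_t = mean + torch.sqrt(betas[t]) * noise   # x_{t-1}
        return x_t                               # x_0, the reconstructed image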

Up to the decoder, the architecture of the system may be the same as the AI compression pipeline discussed above. There are no limitations on the entropy module or the addition of hyper and context modules to the entropy module. After the latent is quantised, the architecture is different. In the first layer of the decoder, the CDD, we upsample (nearest neighbour) our quantised latent space to the image scale, as our conditional diffusion decoder (CDD) operates at the image resolution. This upsampled quantised latent is then used to condition the CDD noise input x_t. An example architecture is shown in Figure 6.

The training function may have two components. The first is the standard rate loss as discussed above, and the second is a loss for the denoising function, called the denoising loss. The aim of the rate loss is to minimise the number of bits required to encode ŷ, and the aim of the denoising loss is to learn a function that can predict the noise that was added to a sample. The training or loss function may additionally include a distortion loss as discussed above. In the case where the distortion loss is not used, the gradients used to update the parameters of the encoder come from the denoising loss. This provides the denoising function with an informative conditioned latent to reconstruct x̂_0. Algorithm 1 shows an example of a single training step for a conditional diffusion decoder for compression in detail, and Figure 3 shows an example of the entire pipeline; a sketch of a single training step is also given below.

The encoding process may be the same as discussed above, but the trained parameters of the encoder will differ due to the inclusion of the CDD as a decoder. Figure 4 shows the encoding process. For decoding, after ŷ is recovered from the bitstream using arithmetic decoding, we sample x_T from a standard normal and perform the computation of the reverse pass as shown in Algorithm 2 for T steps to get our final output, where each step gets us closer to t = 0. Figure 5 also shows this process (without the iterative structure). Furthermore, some decoded images produced using the CDD method are shown in Figure 7 to Figure 10. We note that this model did not have the optional distortion penalty applied to it.
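By way of illustration, a single training step combining the rate loss and the denoising loss might look as follows; the encoder, the noise-prediction network and the rate proxy are illustrative assumptions and the sketch does not reproduce Algorithm 1 exactly.

    import torch
    import torch.nn.functional as F

    # Sketch of one training step for a conditional diffusion decoder.
    def train_step(encoder, eps_model, opt, x0, alpha_bars, T=1000, lam=0.01):
        y = encoder(x0)
        y_hat = y + (torch.round(y) - y).detach()        # straight-through quantiser
        # Nearest-neighbour upsampling of the quantised latent to image scale.
        y_up = F.interpolate(y_hat, size=x0.shape[-2:], mode='nearest')

        # Forward diffusion: corrupt x0 with noise at a random timestep t.
        t = torch.randint(0, T, (x0.shape[0],))
        a_bar = alpha_bars[t].view(-1, 1, 1, 1)
        eps = torch.randn_like(x0)
        x_t = torch.sqrt(a_bar) * x0 + torch.sqrt(1.0 - a_bar) * eps

        # Denoising loss: predict the noise that was added to the sample.
        denoising_loss = F.mse_loss(eps_model(x_t, t, y_up), eps)
        rate_proxy = y_hat.abs().mean()                  # stand-in for the rate loss
        loss = denoising_loss + lam * rate_proxy
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()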

Resampling patches

Despite the outstanding success of neural image compression models, the rate-distortion loss may not always be enough to effectively reconstruct regions of particular modes, such as faces and text. To this end, adding a generative adversarial loss to the distortion term has been shown to work well in improving the reconstruction of these modes. However, there is still a prevalent failure in learning to reconstruct these modes to a suitable degree. Discussed below is a method which may take advantage of the saliency learned by the discriminator to detect patches of the image comprising some of these difficult modes. The traditional training regime may be augmented by using the resulting saliency map to crop the difficult-to-learn parts of an image and iterate over them. The crop of an image may be referred to as a sub-image. A patch of the image may be made up of one or more sub-images. The saliency of a region or sub-image of an image may be defined as the level of importance or prominence of that region or sub-image, compared to other regions or sub-images of the image, to a human observer.

We briefly describe the loss function which may be used in our pipeline in further detail. Let x denote the input image to be compressed, ŷ the quantized latent produced by our encoder, and x̂ the reconstruction. The rate-distortion (RD) loss for our neural image compression pipeline can be expressed as

L_RD = R(ŷ) + λ_MSE · MSE(x, x̂) + λ_LPIPS · LPIPS(x, x̂),

where MSE is the pixel-wise mean-squared error, LPIPS is a feature-based loss using a pretrained classification network, and the λ terms are weighting factors.

We may add a non-saturating generative loss to act as a perceptual loss, resulting in the following rate-distortion-perception (RDP) loss:

L_RDP = L_RD − λ_GAN · log D(x̂, ŷ),

where the discriminator D is a neural network trained to assess whether patches from the image are sampled from the real distribution or the generated distribution by assigning a patch a value between 0 (fake) and 1 (real). D may be trained in tandem with the generator in a minimax game using a binary cross-entropy loss.

Conditioning the discriminator on the latents may allow the discriminator to identify modes that are difficult to reproduce by the standard RD loss. An example of this process is shown in figures 11 to 13. Figure 11 shows an example image. Figure 12 shows the output of the discriminator applied to the example image and to a corresponding predicted image (respectively, left and centre), together with a corresponding probability mass function of the discriminator output for the predicted image (right). We see that the discriminator can identify edges and textured regions by labelling them as "fake" in the generated images. The discriminator identifies the textured regions of one of the hands to be fake, as well as the text in the top of the image. The probability mass function assigns these areas higher probability, so that they are more likely to be drawn from during the resampling procedure, as seen in the crops taken from the image in figure 13.

We may further emphasize these regions by simply taking a weighted softmax of the discriminator output, for example as

K = softmax(−γ · D(x̂, ŷ)),

where γ > 0 is a tuning parameter that controls how much emphasis is placed on fake patches (as determined by the discriminator). The resulting K is a probability mass function defined over a grid the same size as the output of the discriminator. Sampling from this probability mass function results in sampling patches from the input of the discriminator. Therefore, by increasing γ, one can sample fake patches with much higher probability. The output of the discriminator may be downsampled compared to the input image or output image which may be used as an input to the discriminator. A single pixel of the output of the discriminator may correspond to a larger area of the input or output image.
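A sketch of this sampling step is given below, under the assumption that the patch discriminator outputs a score map with one score per patch; the softmax weighting shown mirrors the form given above.

    import torch

    def sample_patch_coords(disc_scores, gamma=5.0, num_patches=3):
        # disc_scores: (H, W) map of patch scores in [0, 1], 1 = real, 0 = fake.
        # Build the probability mass function K: fake patches get higher mass.
        logits = -gamma * disc_scores.flatten()
        pmf = torch.softmax(logits, dim=0)
        # Sample patch indices from the PMF and convert to grid coordinates.
        idx = torch.multinomial(pmf, num_patches, replacement=True)
        h = torch.div(idx, disc_scores.shape[1], rounding_mode='floor')
        w = idx % disc_scores.shape[1]
        return list(zip(h.tolist(), w.tolist()))

    coords = sample_patch_coords(torch.rand(16, 16))  # illustrative score map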

We outline the Crop Resampling method in Algorithm 3; a sketch is also given below. The method crops difficult patches of an image and iterates over them. We use the probability mass function given by Equation (3) to sample patches from the original and predicted image to iterate our training step over again. A plurality of patches may be sampled. The number of patches may be between two and five, preferably three. A separate discriminator may be used for the crops given that the image resolutions are different. This discriminator may be referred to as a sub-discriminator. The latent that is used to condition the crop discriminator may be cropped over the same region, after being upsampled and masked with a convolutional layer.

We outline the Discriminator-based Data Augmentation method in Algorithm 4. This method may replace the standard random or center crop used for image augmentation with a cropping mechanism similar to that for Crop Resampling but applied to the original image size. We may use the probability mass function given by Equation (3) to identify parts of the unaugmented image that might be difficult for the generator to reproduce. We extract these parts in a crop of the original image of the desired size. The crop of the original image is then used for the training of the AI-based compression pipeline instead of or in addition to the original image. During the sampling process, we may refrain from storing the computation graph of the resulting operations from the discriminator and generator. The cropping method described fits naturally into the dataloaders framework used by modern deep learning libraries such as PyTorch, but we explicitly include it in the training loop for clarity.
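A minimal sketch of the crop-resampling step is given below, assuming the patch-sampling helper sketched earlier and a fixed crop size; the repetition of the training step over the crops is illustrative and does not reproduce Algorithm 3.

    import torch

    def crop_resample_step(x, x_hat, disc_scores, train_step_fn,
                           crop=64, gamma=5.0, num_patches=3):
        # Map each sampled discriminator grid cell to a pixel region and
        # re-run the training step on the corresponding crops.
        scale_h = x.shape[-2] // disc_scores.shape[-2]
        scale_w = x.shape[-1] // disc_scores.shape[-1]
        with torch.no_grad():  # do not store the sampling computation graph
            coords = sample_patch_coords(disc_scores, gamma, num_patches)
        for h, w in coords:
            top = max(0, min(h * scale_h, x.shape[-2] - crop))
            left = max(0, min(w * scale_w, x.shape[-1] - crop))
            x_crop = x[..., top:top + crop, left:left + crop]
            train_step_fn(x_crop)  # iterate the training step over the crop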

Adaptive Discriminators

As discussed above, neural image and video compression pipelines are optimised for both a rate and a distortion loss. The distortion loss can consist of different sub-losses, such as MSE and LPIPS. One of the more important distortion losses may be the adversarial loss (with a discriminator), which, in contrast to the previous two, does not represent a distance w.r.t. the ground truth image, but rather is a loss based solely on the likelihood that the observed image is real, i.e. has not been distorted by the compression pipeline. If the observed image is a reconstruction from a pipeline and thus has been distorted, it may be referred to as a fake image. Adversarial losses are heavily present in other machine learning problems (e.g. generative modelling). However, in generative modelling the fake and real images are two distinct datasets that don’t have any correspondence. In contrast, in compression every original image has its compressed counterpart. Therefore, there is a bijection between the two sets. This allows for adaptive discriminators that may take additional information or input into account on top of the image being discriminated.

The correspondence between the real and fake images in compression allows for more advanced applications of the discriminator. One example of this is the conditional discriminator (e.g. used in Fabian Mentzer, George Toderici, Michael Tschannen, and Eirikur Agustsson. High-fidelity generative image compression. arXiv preprint arXiv:2006.09965, 2020, which is hereby incorporated by reference, where the discriminator is further conditioned on the quantised latent representation of the compressed image). The main motivation for adapting the discriminator is the large distribution of images. Some of the images may be easy to compress, while others may be hard. Therefore a different discriminator may be assigned to different images. This may be considered the use of adaptive discriminators. For example, it might be futile to apply a very complex critic to an image that already has high distortion, since the compression pipeline won’t be able to account for the gradients it receives.

For the purpose of describing the examples of this method, f_θ may denote a compression pipeline such as that described above, where θ indicates the set of parameters. f_enc, f_quant and f_dec may denote respectively the encoder, quantiser and decoder of our network, and x the image we are encoding. So y = f_enc(x) denotes the latent representation of x, ŷ = f_quant(y) denotes the quantised version of y and x̂ = f_dec(ŷ) denotes the reconstruction of x. The pipeline is the composition of the encoder, quantiser and decoder: f_θ = f_dec ∘ f_quant ∘ f_enc.

Let us also denote by h_ψ the discriminator with parameters ψ. The first approach of adapting the discriminator is based on conditioning on deep features. As we have mentioned in the introduction, the conventional way to apply the discriminator in generative modelling is only on the real image x and the fake image x̂.

However, we can also adapt the discriminator by conditioning it on other variables produced by the pipeline f_θ, e.g. intermediate ones. For instance, in Fabian Mentzer, George Toderici, Michael Tschannen, and Eirikur Agustsson. High-fidelity generative image compression. arXiv preprint arXiv:2006.09965, 2020, the discriminator is conditioned on the quantised latent ŷ, i.e. the discriminator evaluates h_ψ(x, ŷ) and h_ψ(x̂, ŷ).

However, the discriminator may be conditioned on any additional input variable z produced by the compression pipeline. A number of examples of additional inputs to the discriminator are provided below.

• Original image conditioning

The discriminator could be conditioned on the original image x, by setting z = x. This is an image adaptive discriminator. There is one potential problem: since p_real = h_ψ(x, x), the discriminator could in theory detect that the discriminated image is the same as the conditioning image and thus learn to output a probability of 1 in these cases. However, this issue may be minimized by conditioning on x where the image passes through a narrow section of the discriminator network that removes enough information.

• Rate conditioning

Based on the latent representation and the learnt entropy parameters θ_entropy, the compression pipeline also calculates the second objective, namely the rate loss (denoted by R). We can condition the discriminator on R (i.e. z = R) so that the discriminator will take into account the difficulty of compressing the original image. In this way, the discriminator can be more strict or complex for images with good reconstructions and more lenient for images with bad reconstructions.

• Latent space conditioning

y_enc and y_dec may denote the intermediate variables produced in at least one of the intermediate layers, respectively by the encoder and decoder of our pipeline. As an example, we can condition the discriminator on any subset of y_enc and y_dec.

• Other deep feature conditioning

Let g_φ denote another fixed neural network for feature extraction (not our compression pipeline), and x_feat = g_φ(x) the features produced by this network. We can condition on z = x_feat. The fixed neural network may have been trained for a different task that is not related to the compression of images. For example, the fixed neural network may have been trained for image classification.

The concept may also be applied in the context of the compression of videos. In any case where an image is used as an input to the discriminator, a plurality of frames of a video may instead be used as an input, for example where the plurality of frames are concatenated in a plurality of channels of the input. This concept may also be applied at intermediate stages of the pipeline, for example a plurality of variables based on the plurality of frames from an intermediate step may be used as an input. Inputs from a plurality of rates based on one or more of the plurality of frames may also be used.

Below are provided examples of how the conditioning on z could be implemented: • Channel concatenation

Let h_ψ^img and h_ψ^cond denote respectively the parts of the discriminator that are applied on x/x̂ and on z, and let h_ψ^final denote the final part of the discriminator. Let us also denote by ·||· concatenation along the channel dimension. In this example the conditioning may be performed as follows: h_ψ(x, z) = h_ψ^final(h_ψ^img(x) || h_ψ^cond(z)).

• Kernel prediction

Here, a kernel prediction network (meta network) h_ψ^kernel is used to predict the kernels (i.e. parameters) of another network h^discr. So the overall discriminator is the following: h_ψ(x, z) = h^discr(x; h_ψ^kernel(z)), or equivalently, the parameters of h^discr used to discriminate x are predicted from the conditioning variable z.

In the different types of conditioning variables z and types of architectures of conditioning described above, the variable z could optionally be passed through a nogradients function that prevents the tracking of gradients and treats z as a constant. This operation is executed before z is forwarded to the discriminator. If nogradients is used, z will be reassigned as follows: z ← nogradients(z). A sketch of conditioning by channel concatenation, including this optional step, is provided below.
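The following sketch illustrates channel-concatenation conditioning with the optional nogradients step implemented via detach; the layer sizes and module names are illustrative assumptions.

    import torch
    import torch.nn as nn

    class ConditionalDiscriminator(nn.Module):
        # Channel-concatenation conditioning: the image branch and the
        # conditioning branch are concatenated along the channel dimension.
        def __init__(self, img_channels=3, cond_channels=192):
            super().__init__()
            self.img_branch = nn.Conv2d(img_channels, 64, 3, padding=1)
            self.cond_branch = nn.Conv2d(cond_channels, 64, 3, padding=1)
            self.final = nn.Sequential(
                nn.LeakyReLU(0.2), nn.Conv2d(128, 1, 3, padding=1), nn.Sigmoid())

        def forward(self, x, z, no_gradients=True):
            if no_gradients:
                z = z.detach()  # treat the conditioning variable as a constant
            # z is assumed to have been upsampled to the spatial size of x.
            feats = torch.cat([self.img_branch(x), self.cond_branch(z)], dim=1)
            return self.final(feats)

    disc = ConditionalDiscriminator()
    score = disc(torch.rand(1, 3, 64, 64), torch.rand(1, 192, 64, 64))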

An alternative way to make the discriminator adapt to the current image being compressed x is by using a set of discriminators. In contrast to the previous approach, here we choose the most appropriate discriminator not by conditioning, but by selecting the best discriminator from a set. Let G denote our fixed set of K discriminators with adjustable parameters: G = {h_ψ1, h_ψ2, ..., h_ψK}, where ψ1, ψ2, ..., ψK are the respective parameters. These discriminators do not necessarily have the same architecture. E.g. they all could have different architectures, or some of the architectures might coincide and some might not.

Let us assume x is an image from our training dataset. For each discriminator h_ψi, we can calculate the perceptual probabilities of x and x̂: p_real,i = h_ψi(x) and p_fake,i = h_ψi(x̂). Based on these two probabilities, an objective score can be calculated for the current discriminator as follows: s_i = O(p_real,i, p_fake,i). Then the optimal discriminator h_ψ* is selected as the one with the minimal or maximal objective score from the scores of all discriminators.

Once the optimal discriminator h_ψ* is selected, we can calculate the adversarial loss based on h_ψ* and use it to update the weights of the pipeline f_θ (a.k.a. the generator). For each of our discriminators from G we can calculate the discriminator loss w.r.t. the current image x and use it to update their respective weights. In the described way, every discriminator is trained based on the loss from all the images, but the generator loss for individual images consists of the loss from one discriminator only, namely the optimal discriminator h_ψ* for the current image x w.r.t. the objective function O.

An example of an objective function O(·, ·) is provided below. Let us define that the ideal p_real should be as close as possible to 0.5 + s and the ideal p_fake should be as close as possible to 0.5 − s, where s ∈ [0, 0.5] is defined as the desired saturation level. Let d(·, ·) define any distance, e.g. L1 or L2. We can define O as the average distance of p_real and p_fake to the desired saturation level:

O(p_real, p_fake) = (d(p_real, 0.5 + s) + d(p_fake, 0.5 − s)) / 2.

In Algorithm 5 we present an example of an algorithm that implements the described procedure for adaptive discriminators with a set of discriminators; a simplified sketch of the selection step is also given below.
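The following sketch illustrates the selection of the optimal discriminator using the saturation-based objective defined above; the helper names and the choice of the L1 distance are illustrative assumptions.

    import torch

    def saturation_objective(p_real, p_fake, s=0.25):
        # Average L1 distance of the scores to the desired saturation levels
        # 0.5 + s (real) and 0.5 - s (fake).
        return ((p_real - (0.5 + s)).abs().mean()
                + (p_fake - (0.5 - s)).abs().mean()) / 2

    def select_optimal_discriminator(discriminators, x, x_hat, s=0.25):
        # Score every discriminator on the current (real, fake) pair and
        # pick the one whose objective is smallest.
        scores = [saturation_objective(d(x), d(x_hat), s) for d in discriminators]
        best = min(range(len(scores)), key=lambda i: scores[i].item())
        return discriminators[best]

    # The generator loss for this image is then computed with the selected
    # discriminator only, while all discriminators are updated on every image.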

Discriminator Regularisation

As discussed above, all lossy compression algorithms aim to strike a balance between visual fidelity, the distortion, and the file size, expressed via the rate. In neural compression algorithms, this rate-distortion-tradeoff is governed by the specific choice of training procedure, neural network architecture, regularisation etc. Recently, a third axis along which there exists a tradeoff has come into focus, that of perception. This tradeoff is hence known as the distortion-perception-tradeoff. For a given image, distortion measures how closely the compressed image matches the uncompressed image. On the other hand, perception quantifies how likely it is that the compressed image comes from the distribution of uncompressed images. While this distinction is subtle, it has some concrete practical implications, as a reference (uncompressed) image is generally not needed to assess the perception. Rather, in the context of compression, perception measures how ’natural’ a compressed image seems to the human observer. This in particular includes the presence of compression artefacts, which are not desirable to the human observer. In practice, the main challenge is designing (or learning) perception measuring functions that model the human visual system (HVS) faithfully. This is because the HVS does not generally align too well with simplistic distortion measures such as euclidean distances. As such, perception can be seen as an instance of visual loss.

One extremely powerful method for learning such perception measuring functions is via the idea of adversarial learning, an idea pioneered by generative adversarial networks (GANs). Adversarial learning consists of two ’competing’ networks, in our case the compression decoder and a so-called discriminator network. Here, the two networks are trained in an alternating fashion, where the discriminator’s goal is to distinguish uncompressed from compressed images by providing a score from 0 (fake/compressed) to 1 (real/uncompressed). The decoder’s goal on the other hand is to ’fool’ the discriminator by creating more and more ’realistic’ images. Hence, the discriminator can be viewed as a ’teacher’ for the decoder.

Here, the capabilities of the discriminator should be tuned towards the decoder’s capabilities: if a discriminator succeeds at distinguishing compressed from uncompressed images, is this because the discriminator is good, or is this because the decoder is bad? The goal in this framework is to obtain a decoder that is as good as possible. Due to limitations such as the number of parameters, the depth of the network etc. (which are results of hardware and runtime side-constraints), there is an upper limit on the performance of the decoder. In practice, it is hence important to design a discriminator that is only powerful enough to reach an equilibrium with the best possible performance of this decoder. In everyday terms, this would be akin to a teacher that challenges the student at just the difficulty level the student is at.

In GAN training (and in particular Wasserstein GAN training), the spectral norm of each convolutional layer of the discriminator is set to 1. This ensures that the Lipschitz constant of the network (measured in terms of the 2-norm in both input and output space) cannot be too large. In particular, if ReLU or leaky ReLU activation functions are used, this leads to the Lipschitz constant of the whole discriminator being at most 1. This can be viewed as a sort of regularisation, which limits the expressiveness of the discriminator and can be used to tune the discriminator’s capabilities towards an equilibrium with the generator’s capabilities.

The above value of 1 is, however, quite arbitrary. For different depths and parameter counts of the discriminator and generator, a different amount of regularisation can be applied. In this context, one can, after normalising to 1, multiply the convolution’s kernel by an arbitrary positive number, i.e.

x ↦ conv(x, c · K̄),

where x is an input feature, K̄ is the normalised kernel and c ∈ (0, ∞). The arbitrary number may be greater than or less than 1. This means that the thus scaled convolution operator x ↦ conv(x, c · K̄) has Lipschitz constant c, which can be tuned arbitrarily by the user and set at a predetermined value. The arbitrary number may be set at different predetermined values for different layers of the discriminator.
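A sketch of this rescaled spectral normalisation is given below; torch.nn.utils.spectral_norm normalises the layer’s weight to spectral norm 1, and the factor c is the predetermined Lipschitz constant. The wrapper module itself is an illustrative assumption.

    import torch
    import torch.nn as nn

    class ScaledSpectralConv2d(nn.Module):
        # Convolution whose kernel is spectrally normalised to 1 and whose
        # output is then rescaled by a fixed constant c, giving the layer
        # Lipschitz constant c (the bias does not affect the constant).
        def __init__(self, in_ch, out_ch, kernel_size, c=0.7, **kw):
            super().__init__()
            self.conv = nn.utils.spectral_norm(
                nn.Conv2d(in_ch, out_ch, kernel_size, **kw))
            self.c = c  # predetermined value; may differ per layer

        def forward(self, x):
            return self.c * self.conv(x)

    layer = ScaledSpectralConv2d(3, 64, 3, c=1.5, padding=1)
    out = layer(torch.rand(1, 3, 32, 32))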

While spectral normalisation gives exact guarantees (the global Lipschitz constant is upper bounded), gradient penalties, like the popular R1 penalties, allow for a more flexible control of the local Lipschitz constant. The training of the thus-regularised network can thus potentially assign larger Lipschitz constants where necessary. If a discriminator is described by the scalar-valued function f : R^N → R, these R1 penalties are realised via adding λ‖∇f(x)‖_2^2 (or using some other norm) to the loss function, for some penalty parameter λ ∈ (0, ∞) and discriminator input x. This can be used in a straightforward manner for standard, non-patch discriminators. In compression however, patch-based discriminators are used, which can be described by functions f : R^N → R^M with multi-dimensional output. In these cases, generalisations with feasible training times may be beneficial. If one simply penalises Σ_i ‖∇f_i(x)‖_2^2 (where f_i denotes the i-th patch’s output function), this increases the computational and spatial complexity by a factor of (roughly) M, making training with this penalty more difficult.

Note that Σ_i ‖∇f_i(x)‖_2^2 is the squared Frobenius norm of the Jacobian of f, denoted ‖J_f‖_F^2. Note that ‖J_f‖_F^2 = E_{v∼N(0, I_N)}[‖J_f · v‖_2^2], meaning that one can approximate this norm by drawing some number of multivariate (N-dimensional) standard normal vectors and averaging the Jacobian-vector products. Note further that ‖J_f‖_F = ‖J_f^T‖_F, i.e. the Frobenius norms of the Jacobian and the transpose of the Jacobian coincide, meaning that one can instead approximate ‖J_f‖_F^2 = E_{u∼N(0, I_M)}[‖J_f^T · u‖_2^2]. J_f^T · u is known as the vector-Jacobian product and can be computed using reverse-mode autodifferentiation. This is in practice about twice as fast as the computation of the Jacobian-vector product. In practice, one can thus penalise

λ‖J_f‖_F^2 ≈ (λ/n) Σ_{i=1}^{n} ‖J_f^T · u^(i)‖_2^2,

where u^(i) ∼ N(0, I_M) are randomly-drawn normally-distributed vectors. The number of samples can be as low as n = 1. However, the accuracy of the approximation may increase if more samples are used. Another way of approximating ‖J_f‖_F^2 is via

‖J_f‖_F^2 ≈ E_{v∼N(0, σ²·I_N)}[(1/σ²) ‖f(x + v) − f(x)‖_2^2],

where σ ∈ (0, ∞). This estimate tends to become more accurate for smaller σ. It can, however, also be used with larger values of σ, which takes into account the nonlinear structure of f.

In practice, one can use

‖J_f‖_F^2 ≈ (1/(n·σ²)) Σ_{i=1}^{n} ‖f(x + v^(i)) − f(x)‖_2^2,

where v^(i) ∼ N(0, σ²·I_N) are randomly-drawn normally-distributed vectors. The number of samples can be as low as n = 1. However, the accuracy of the approximation may increase if more samples are used.
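A sketch of the vector-Jacobian-product estimator of the squared Frobenius norm is given below, using reverse-mode autodifferentiation as described above; the toy patch discriminator is an illustrative stand-in.

    import torch

    def jacobian_frobenius_penalty(f, x, n_samples=1):
        # Estimate ||J_f||_F^2 = E_{u ~ N(0, I_M)} [ ||J_f^T u||_2^2 ]
        # via the vector-Jacobian product (reverse-mode autodiff).
        x = x.detach().requires_grad_(True)
        out = f(x)                      # multi-dimensional (patch) output
        total = 0.0
        for _ in range(n_samples):
            u = torch.randn_like(out)   # u ~ N(0, I_M)
            (vjp,) = torch.autograd.grad(out, x, grad_outputs=u,
                                         retain_graph=True, create_graph=True)
            total = total + vjp.pow(2).sum()
        return total / n_samples

    # Illustrative use with a toy patch discriminator.
    disc = torch.nn.Conv2d(3, 1, 4, stride=4)
    x = torch.rand(2, 3, 32, 32)
    penalty = jacobian_frobenius_penalty(disc, x)  # add lambda * penalty to the loss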