Title:
A FILTER
Document Type and Number:
WIPO Patent Application WO/2019/219879
Kind Code:
A1
Abstract:
The present invention provides a method of controlling a filter for one or more portions of an image, the method comprising controlling filtering on first component samples of the one or more portions of the image based on sample values of a second component of the image.

Inventors:
GISQUET CHRISTOPHE (FR)
ONNO PATRICE (FR)
TAQUET JONATHAN (FR)
LAROCHE GUILLAUME (FR)
Application Number:
PCT/EP2019/062740
Publication Date:
November 21, 2019
Filing Date:
May 16, 2019
Assignee:
CANON KK (JP)
CANON EUROPE LTD (GB)
International Classes:
H04N19/117; H04N19/14; H04N19/176; H04N19/186
Foreign References:
US20150181211A1 (2015-06-25)
US20150016550A1 (2015-01-15)
Other References:
VAN DER AUWERA G ET AL: "Deblocking Filter Simplifications", 98. MPEG MEETING; 28-11-2011 - 2-12-2011; GENEVA; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), no. m21852, 18 November 2011 (2011-11-18), XP030050415
Attorney, Agent or Firm:
CANON EUROPE LIMITED (GB)
Claims:
CLAIMS:

1. A method of controlling a filter for one or more portions of an image, the method comprising controlling filtering on first component samples of the one or more portions of the image based on sample values of a second component of the image.

2. The method of claim 1, wherein the controlling is based on a variation among three or more second component sample values.

3. The method of claim 2, wherein the controlling comprises determining whether to use the filter on the first component samples based on the variation.

4. The method of claim 2 or 3, wherein:

the sample values are those of second component samples from two or more second component image portions adjacent to a boundary; and

the variation is based on a measure obtained from the sample values.

5. The method of claim 4, wherein the second component samples pJi are from a second component block P and qJi are from another second component block Q for J=0..2 and i={0,3}, and the determining comprises comparing the measure with a threshold, said threshold being dependent on quantization parameters for the second component blocks P and Q.

6. The method of claim 4 or 5, wherein either or both of the measure or the threshold is dependent on one or more sample values of the first component samples or quantization parameters for use with the first component samples.

7. The method of any preceding claim, wherein the controlling comprises determining, based on at least one sample value of the second component, or the variation if dependent on claim 2, a filtering parameter for use with the filter.

8. The method of any preceding claim, wherein the sample values of the second component samples of the image are reconstructed second component sample values.

9. The method of any preceding claim, wherein, for the one or more portions of the image, a number of the corresponding first component samples is less than a number of corresponding second component samples.

10. The method of any preceding claim, wherein the first component samples are chroma samples and the second component samples are luma samples of the one or more portions of the image, and the filter is a chroma deblocking filter for use on the one or more portions of an image.

11. A method of processing one or more portions of an image, an image portion having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, wherein the method comprises determining, based on a plurality of the luma samples in the luma portion, at least one of:

whether to use or not use a filter on a boundary of the chroma portion; enabling or disabling use of the filter on the boundary of the chroma portion; a filtering parameter for use with the filter when filtering the boundary of the chroma portion; or a measure of a variation among a plurality of the chroma samples in the chroma portion.

12. A method of controlling a deblocking filter for chroma samples of one or more portions of an image, the method comprising controlling the deblocking filter based on one or more parameters for use with a luma deblocking filter, wherein the luma deblocking filter is HEVC compliant.

13. A method of encoding an image, the method comprising, for one or more portions of the image, controlling a filter according to the method of any one of claims 1 to 10, processing according to the method of claim 11, or controlling a deblocking filter according to the method of claim 12.

14. The method of claim 13 further comprising:

receiving an image;

encoding the received image to produce a bitstream; and

processing the encoded image, wherein the processing comprises the controlling according to the method of any one of claims 1 to 10, the processing according to the method of claim 11, or the controlling according to the method of claim 12.

15. A method of decoding an image, the method comprising, for one or more portions of the image, controlling a filter according to the method of any one of claims 1 to 10, processing according to the method of claim 11, or controlling a deblocking filter according to the method of claim 12.

16. The method of claim 15 further comprising:

receiving a bitstream;

decoding the received bitstream to obtain an image; and

processing the obtained image, wherein the processing comprises the controlling according to the method of any one of claims 1 to 10, the processing according to the method of claim 11, or the controlling according to the method of claim 12.

17. A device for controlling a filter for one or more portions of an image, the device comprising a controller configured to control filtering on first component samples of the one or more portions of the image based on sample values of a second component of the image.

18. The device of claim 17, wherein the controller is configured to perform the method of any one of claims 1 to 10, the method of claim 11, or the method of claim 12.

19. A device for encoding an image, the device comprising the control device of claim 17 or 18.

20. The device of claim 19, the device configured to perform the method of claim 13 or 14.

21. A device for decoding an image, the device comprising the control device of claim 17 or 18.

22. The device of claim 21, the device configured to perform the method of claim 15 or 16.

23. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of any one of claims 1 to 10, 11, 12, 13 to 14, or 15 to 16.

24. A computer-readable storage medium storing a computer program which, when executed, causes the method of any one of claims 1 to 10, 11, 12, 13 to 14, or 15 to 16 to be performed.

25. A signal carrying an information dataset for an image encoded using the method of claim 13 or 14 and represented by a bitstream, the image comprising a set of reconstructable samples, each reconstructable sample having a sample value, wherein the information dataset comprises control data for controlling filtering on first component samples based on sample values of second component samples of the reconstructable samples.

Description:
A FILTER

FIELD

[0001] The present invention relates to encoding or decoding of blocks of a video component. Embodiments of the invention find particular, but not exclusive, use when controlling a filter for filtering samples of such a component.

BACKGROUND

[0002] When blocks of a video component are encoded or decoded, often filtering is applied to reduce blocking artefacts, e.g. visible discontinuities occurring at (prediction or transform) block boundaries in a reconstructed signal of the video component. Such blocking artefacts usually occur when a block based (motion) prediction is used with transform coding.

[0003] So a deblocking filter (also known as an in-loop filter) is used to filter (e.g. remove or reduce effects of) these blocking artefacts, for example in video coding standards such as H.264 and HEVC (High Efficiency Video Coding or H.265). For example, to encode or decode blocks of a chroma (chrominance) component, HEVC uses a simple chroma deblocking filter based on a Chroma control parameter selection method, which uses a parameter tc, determined based on a chroma quantisation parameter Q as specified in the HEVC standard document Recommendation ITU-T H.265 (ITU-T H.265 (12/2016)). By comparison, HEVC uses a more complex luma deblocking filter for encoding or decoding blocks of the luma component, for example based on a Luma control parameter selection method which uses two parameters β and tc, both determined based on a luma quantisation parameter Q as specified in the HEVC standard document Recommendation ITU-T H.265 (ITU-T H.265 (12/2016)).

[0004] Such differences between the chroma and luma deblocking filters mean that relatively coarse filtering (i.e. filtering at a low resolution or in less detail) is applied to blocks of chroma component samples whilst finer filtering (i.e. filtering at a higher resolution or in more detail) is applied to blocks of luma component samples. In order to make the chroma deblocking filtering finer, use of the same Luma control parameter selection method for the Chroma deblocking filter, but with the control parameters evaluated based only on the Chroma sample values, has been considered. However, this led to a significant increase in complexity with very little, if any, improvement in the effectiveness (or resolution) of the chroma deblocking filtering. Moreover, this was only applied to some colour formats.
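
By way of illustration only, the difference between these two control parameter selection methods may be sketched as follows. This is a simplified, non-normative sketch: the functions lookupBeta and lookupTc below merely stand in for the QP-indexed tables of ITU-T H.265 and return placeholder values rather than the normative ones.

// Illustrative sketch only; lookupBeta()/lookupTc() are placeholders for the
// normative QP-indexed tables of ITU-T H.265, not real values or APIs.
struct LumaControlParams { int beta; int tc; };   // two control parameters for luma
struct ChromaControlParams { int tc; };           // a single control parameter for chroma

int lookupBeta(int Q) { return Q; }       // placeholder for the beta table
int lookupTc(int Q)   { return Q >> 1; }  // placeholder for the tc table

// Luma control parameter selection: both beta and tc are derived from the luma
// quantisation parameter Q and are later combined with sample-based measures.
LumaControlParams lumaControl(int lumaQ)
{
    return { lookupBeta(lumaQ), lookupTc(lumaQ) };
}

// Chroma control parameter selection in HEVC: only tc, derived from the chroma
// quantisation parameter Q; no sample-based measure is evaluated, which is why
// the resulting chroma filtering is comparatively coarse.
ChromaControlParams chromaControl(int chromaQ)
{
    return { lookupTc(chromaQ) };
}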

SUMMARY OF THE INVENTION

[0005] It is an aim of embodiments of the present invention to address one or more problems or disadvantages of the foregoing encoding or decoding of blocks of a video component.

[0006] According to aspects of the present invention there are provided an apparatus, a method, a program, a computer readable storage medium, and a signal as set forth in the appended claims. According to other aspects of the invention, there are provided a system, a method for controlling such a system, an apparatus for performing the method as set forth in the appended claims, an apparatus for processing, an apparatus for controlling a deblocking filter, a media storage device storing a signal as set forth in the appended claim, a computer readable storage medium or a non-transitory computer-readable storage medium storing the program as set forth in the appended claims, and a bitstream generated using an encoding method as set forth in the appended claims. Other features of the invention will be apparent from the dependent claims, and the description which follows.

[0007] According to a first aspect of the present invention, there is provided a method of controlling a filter for one or more portions of an image, the method comprising controlling filtering on first component samples of the one or more portions of the image based on sample values of a second component of the image. Suitably, the first component samples are from two or more first component image portions comprising a first component image portion on one side of a first boundary and another first component image portion on the other side of the first boundary, and the sample values are those of second component samples from one or more second component image portions adjacent to a corresponding second boundary. Suitably, the first component is chroma and the second component is luma. Suitably, the first component image portion is a chroma sample block and the second component image portion is a luma sample block. Suitably, the chroma sample block (or chroma samples or first component samples) and the luma sample block (or luma samples or second component samples) correspond to/are associated or collocated with the same group/block/unit of pixels/elements of the image. Suitably, the first boundary and the second boundary are at the same position and/or correspond to/are associated or collocated with the same pixels of the image portion. Suitably, the controlling is based on a variation among three or more second component sample values. Suitably, the variation is based on a measure of variation. Suitably, the controlling is performed for a first component block of the first component samples based on the variation among the three or more second component sample values from at least two neighbouring blocks of the second component samples. Suitably, the at least two neighbouring blocks of the second component samples comprise corresponding blocks of the first component block and its adjacent block. Suitably, a corresponding block is a collocated block (i.e. at the corresponding position (and of the same size)) of a first component block of the first component samples. Suitably, the corresponding block and the first component block are associated with the same block of pixels/elements of the image. Suitably, the controlling comprises determining whether to use the filter on the first component samples based on the variation. Suitably, the sample values are those of second component samples from two or more second component image portions adjacent to a boundary; and the variation is based on a measure obtained from the samples values. Suitably, the variation is based on a measure d; and d =

| p2_0 - 2*p1_0 + p0_0 | + | p2_3 - 2*p1_3 + p0_3 | + | q2_0 - 2*q1_0 + q0_0 | + | q2_3 - 2*q1_3 + q0_3 | for second component samples pJi and qJi with J=0..2 and i={0,3}. Suitably, the second component samples pJi are from a second component block P and qJi are from another second component block Q for J=0..2 and i={0,3}, and the determining comprises comparing the measure d with a threshold, said threshold being dependent on quantization parameters for the second component blocks P and Q. Suitably, either or both of the measure d or the threshold is independent of sample values of the first component samples or quantization parameters for use with the first component samples. Suitably, the controlling or the determining whether to use the filter is independent of a filtering decision made for filtering the second component samples. Suitably, the controlling or the determining whether to use the filter for the first component samples is dependent on a first process/condition/determination based on the variation, the measure d or the threshold, and the filtering decision made for the second component samples is dependent on a second process/condition/determination based on a (or the) variation, a (or the) measure thereof or a (or the) threshold, wherein the first and second processes/conditions/determinations are different from one another. Suitably, the filtering decision made for the second component samples relates to at least one of: whether to use or not use a filter for the second component samples; enabling or disabling use of the filter for the second component samples; or a filtering parameter for use with the filter when filtering the second component samples. Alternatively, either or both of the measure d or the threshold is also dependent on one or more of: sample values of the first component samples, coding parameters or quantization parameters for use with the first component samples. Suitably, the measure indicates a variation/smoothness/flatness across at least some of the second component sample values (e.g. the luma samples) of at least two neighbouring blocks and/or across a boundary between the at least two neighbouring blocks, and the filtering is used on a corresponding boundary between at least two neighbouring blocks of the first component samples (e.g. the chroma samples). Suitably, the controlling comprises either enabling or disabling use of the filtering based on a condition dependent on the measure. Suitably, the controlling comprises: obtaining the measure; and determining based on the obtained measure. Suitably, the determining based on the obtained/calculated measure comprises: comparing the obtained/calculated measure with a threshold (e.g. a threshold based on a function of any combination of one or more of sample values, coding parameters or quantization parameters such as bS, β, QP (or Q or Qp or qP), pJi and qJi, evaluated based on either or both of luma and chroma samples); and determining, based on this comparison, the use of the filtering on the first component samples (e.g. the chroma samples), and/or if it is determined to use the filtering, a filtering parameter for use with the filtering. Suitably, the controlling comprises determining, based on at least one sample value of the second component, or the variation if dependent on claim 2, a filtering parameter for use with the filter. Suitably, the filtering parameter is for controlling at least one of: a filter strength; a filter type; or a filter range.
Suitably, the sample values of the second component samples of the image are reconstructed second component sample values. Suitably, sample values of the first component samples are reconstructed first component sample values. Suitably, a first component sample value and the associated reconstructed second component sample value are associated with each other through a preset relationship. Suitably, the preset relationship is that they correspond to, or are associated or collocated with, each other. This correspondence, or association or collocation, relationship may be defined for each sample value individually, or between a block/group/unit of first component sample values and a block/group/unit of second component sample values. Suitably, the preset relationship is that they are associated with at least one pixel/element of a current block of pixels/elements to be processed, for example they correspond to, or are associated or collocated with, sample values of the at least one pixel/element. This correspondence, or association or collocation, relationship may be defined for each sample value individually, or between a block/group/unit of sample values and a block/group/unit of pixels/elements. It is also understood that a down-sampling or an up-sampling process may be applied to a block of first component samples or second component samples so that the preset relationship between the blocks, or with the at least one pixel/element of a current block of pixels/elements, can be established after the down-sampling/up-sampling. Suitably, the first component samples and the associated second component samples are associated with a block of pixels/elements of the same image or image portion, or frame, that is to be processed. Alternatively, the first component samples and the associated second component samples are associated with a block of pixels/elements of different images or image portions, or frames, as long as the reconstructed sample values are available for use when processing the first component samples. Suitably, for the one or more portions of the image, a number of the corresponding first component samples is less than a number of corresponding second component samples. Suitably, the one or more portions of the image is processable in a format such that the number of the corresponding first component samples is less than the number of corresponding second component samples. Suitably, the image or the image portion is processed in a colour format of 4:2:0. Suitably, the first component samples are chroma samples and the second component samples are luma samples of the one or more portions of the image, and the filter is a chroma deblocking filter for use on the one or more portions of an image. Suitably, the image or image portion has been chroma subsampled. Suitably, controlling based on the sample values of a second component of the image comprises controlling the chroma deblocking filter based on one or more parameters for use with a luma deblocking filter. Suitably, the luma deblocking filter is HEVC compliant.
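
A minimal sketch of such a luma-based control decision for the chroma deblocking filter is given below. It is an illustration under stated assumptions, not a definitive implementation: lookupBeta is a placeholder for a QP-to-threshold mapping, and the averaging of the two luma quantization parameters is an assumption made only for the example.

// Hypothetical sketch of controlling the chroma (first component) deblocking
// filter from collocated luma (second component) samples. lookupBeta() is a
// placeholder for a QP-to-threshold mapping; the QP averaging is an assumption.
#include <cstdlib>

int lookupBeta(int Q) { return Q; }   // placeholder threshold mapping

// pLuma[J][i] / qLuma[J][i]: reconstructed luma samples J = 0..2 samples away
// from the boundary, at line positions i, in luma blocks P and Q.
bool useChromaDeblocking(const int pLuma[3][4], const int qLuma[3][4],
                         int lumaQpP, int lumaQpQ)
{
    // Measure of variation d over positions i = 0 and i = 3 (see above).
    int d = 0;
    const int positions[2] = { 0, 3 };
    for (int i : positions)
        d += std::abs(pLuma[2][i] - 2 * pLuma[1][i] + pLuma[0][i])
           + std::abs(qLuma[2][i] - 2 * qLuma[1][i] + qLuma[0][i]);

    // Threshold dependent on the quantization parameters of luma blocks P and Q.
    const int threshold = lookupBeta((lumaQpP + lumaQpQ + 1) >> 1);

    // Filter the collocated chroma boundary only if the luma measure is small,
    // i.e. the discontinuity is likely a blocking artefact rather than detail.
    return d < threshold;
}

The same decision point could equally return a filtering parameter (a filter strength, type or range) instead of a simple on/off result, as contemplated above.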

[0008] According to a second aspect of the present invention, there is provided a method of processing one or more portions of an image, an image portion having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, wherein the method comprises determining, based on a plurality of the luma samples in the luma portion, at least one of: whether to use or not use a filter on a boundary of the chroma portion; enabling or disabling use of the filter on the boundary of the chroma portion; a filtering parameter for use with the filter when filtering the boundary of the chroma portion; or (a predictor/estimate for) a measure of a variation among a plurality of the chroma samples in the chroma portion. Suitably, the image portion has been chroma subsampled to obtain the chroma portion. Suitably, the determining is independent of a filtering decision made for filtering the luma samples or the luma portion. Suitably, the determining is dependent on a first process/condition/determination based on a variation among a plurality of the luma samples, a measure of the variation among the plurality of the luma samples or a threshold based on the plurality of the luma samples, and the filtering decision made for the luma samples or the luma portion is dependent on a second process/condition/determination based on a (or the) variation, a (or the) measure thereof or a (or the) threshold based on a (or the) plurality of luma samples, wherein the first and second processes/conditions/determinations are different from one another. Suitably, the filtering decision made for the luma samples or the luma portion relates to at least one of: whether to use or not use a filter for the luma samples/luma portion; enabling or disabling use of the filter for the luma samples/luma portion; or a filtering parameter for use with the filter when filtering the luma samples/luma portion. Suitably, the determining is also based on one or more chroma samples or coding information/parameters such as motion, coding type or quantization parameters for use with the chroma samples.

[0009] According to a third aspect of the present invention, there is provided a method of controlling a deblocking filter for chroma samples of one or more portions of an image, the method comprising controlling the deblocking filter based on one or more parameters for use with a luma deblocking filter, wherein the luma deblocking filter is HEVC compliant. Suitably, the controlling is independent of a filtering decision made for the luma deblocking filter. Suitably, the controlling is dependent on a first process/condition/determination, and the filtering decision made for the luma deblocking filter is dependent on a second process/condition/determination, wherein the first and second processes/conditions/determinations are different from one another. Suitably, the filtering decision made for the luma deblocking filter relates to at least one of: whether to use or not use the luma deblocking filter; enabling or disabling use of the luma deblocking filter; or a filtering parameter for use with the luma deblocking filter. Suitably, the controlling is also based on one or more chroma samples or coding information/parameters such as motion, coding type or quantization parameters for use with the chroma samples.
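
Purely as an illustration of this third aspect, and assuming that the HEVC-compliant luma deblocking stage has already derived its parameters, the chroma control might reuse them as sketched below; the halved-beta condition is an invented example of a chroma decision that differs from the luma decision, not a condition taken from the embodiments.

// Illustrative only: the chroma deblocking decision reuses parameters already
// derived for the HEVC-compliant luma deblocking filter. The halved-beta
// condition is a made-up example of a chroma decision that differs from the
// luma one, as the paragraph above allows.
struct LumaDeblockParams {
    int beta;   // threshold derived from the luma QP (HEVC beta)
    int tc;     // clipping parameter derived from the luma QP (HEVC tc)
    int d;      // sample-based measure already computed for the luma boundary
};

bool controlChromaFromLumaParams(const LumaDeblockParams& luma)
{
    return luma.d < (luma.beta >> 1);   // stricter condition than the luma decision
}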

[0010] According to a fourth aspect of the present invention, there is provided a method of encoding an image, the method comprising, for one or more portions of the image, controlling a filter according to the first aspect, processing according to the second aspect, or controlling a deblocking filter according to the third aspect of the invention. Suitably, the method further comprises: receiving an image; encoding the received image to produce a bitstream; and processing the encoded image, wherein the processing comprises the controlling according to the first aspect, the processing according to the second aspect, or the controlling according to the third aspect of the invention. Suitably, the receiving forms part of receiving a sequence of images, the encoding forms part of encoding the received sequence of images to produce a bitstream, and the processing forms part of processing at least one encoded image to generate a reconstructed image usable as a reference image for the encoding of another image. Suitably, the method further comprises providing, in the bitstream, a flag or data indicative of at least one of: whether to use or not use a filtering; enabling or disabling of the filtering; identities of corresponding first and second component samples; a position of a boundary based on which the filtering is being considered; a filtering parameter for use with the filtering; or a measure of variation.

[0011] According to a fifth aspect of the present invention, there is provided a method of decoding an image, the method comprising, for one or more portions of the image, controlling a filter according to the first aspect, processing according to the second aspect, or controlling a deblocking filter according to the third aspect of the invention. Suitably, the method further comprises: receiving a bitstream; decoding the received bitstream to obtain an image; and processing the obtained image, wherein the processing comprises the controlling according to the first aspect, the processing according to the second aspect, or the controlling according to the third aspect of the invention. Suitably, the image forms a part of a sequence of images, the decoding forms part of decoding the received bitstream to obtain the sequence of images, and the processing forms part of processing at least one image to generate a reconstructed image usable as a reference image for the decoding of another image. Suitably, the method further comprises: obtaining, from the bitstream, a flag or data indicative of at least one of: whether to use or not use a filtering; enabling or disabling of the filtering; identities/positions/indices (indexes) of corresponding first and second component samples; a position of a boundary based on which the filtering is being considered; a filtering parameter for use with the filtering; or a measure of variation; and using the obtained data to perform the processing comprising the controlling according to the first aspect, the processing according to the second aspect, or the controlling according to the third aspect of the invention.
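
A rough decoder-side sketch of this handling is given below; the bitstream reader and its member functions are hypothetical placeholders (they are not syntax elements of any standard), and only the overall flow of parsing or deriving the control data is illustrated.

// Hypothetical bitstream reader; the member functions are placeholders, not
// syntax elements of any standard.
struct BitReader {
    bool     readFlag() { return false; }
    unsigned readUInt() { return 0u; }
};

struct ChromaFilterControl {
    bool enabled = false;
    int  filterParam = 0;   // e.g. a strength/type/range selector
};

// If explicit control data is present it is parsed; otherwise the decoder
// derives the control from collocated luma samples as sketched earlier.
ChromaFilterControl obtainChromaFilterControl(BitReader& br, bool signalledInBitstream)
{
    ChromaFilterControl ctrl;
    if (signalledInBitstream) {
        ctrl.enabled = br.readFlag();
        if (ctrl.enabled)
            ctrl.filterParam = static_cast<int>(br.readUInt());
    }
    return ctrl;
}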

[0012] According to a sixth aspect of the present invention, there is provided a device for controlling a filter for one or more portions of an image, the device comprising a controller configured to control filtering on first component samples of the one or more portions of the image based on sample values of a second component of the image. Suitably, the controller is configured to perform the method according to the first aspect, the second aspect or the third aspect of the invention.

[0013] According to a seventh aspect of the present invention, there is provided a device for processing one or more portions of an image, an image portion having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, wherein the device comprises a determination means for determining, based on a plurality of the luma samples in the luma portion, at least one of: whether to use or not use a filter on a boundary of the chroma portion; enabling or disabling use of the filter on the boundary of the chroma portion; a filtering parameter for use with the filter when filtering the boundary of the chroma portion; or a measure of a variation among a plurality of the chroma samples in the chroma portion. Suitably, the image portion has been chroma subsampled to obtain the chroma portion.

[0014] According to an eighth aspect of the present invention, there is provided a device for controlling a deblocking filter for chroma samples of one or more portions of an image, the device comprising a controller configured to control the deblocking filter based on one or more parameters for use with a luma deblocking filter, wherein the luma deblocking filter is HEVC compliant.

[0015] According to a ninth aspect of the present invention, there is provided a device for encoding an image, the device comprising the device according to the sixth aspect, seventh aspect or the eighth aspect. Suitably, the device further comprises a receiver configured to receive an image, an encoder configured to encode the received image to produce a bitstream, and a processor configured to process the encoded image, wherein the processor is configured to perform the processing comprising the method according to the first aspect, the second aspect or the third aspect of the invention. Suitably, the device is configured to perform the method according to the fourth aspect. According to a tenth aspect of the present invention, there is provided a device for decoding an image, the device comprising the device according to the sixth aspect, seventh aspect or the eighth aspect. Suitably, the device further comprises a receiver configured to receive a bitstream, a decoder configured to decode the received bitstream to obtain an image, and a processor configured to process the obtained image, wherein the processor is configured to perform the processing comprising the method according to the first aspect, the second aspect or the third aspect of the invention. Suitably, the device is configured to perform the method according to the fifth aspect.

[0016] According to an eleventh aspect of the present invention, there is provided a method of providing an image, the method comprising: storing encoded data of an image encoded using the encoding method according to the fourth aspect; providing information regarding the stored encoded data; and when the image is requested, providing the stored encoded data. Suitably, the providing the stored encoded data comprises directly or indirectly streaming the encoded data. According to a twelfth aspect of the present invention, there is provided a system for providing an image, the system comprising: a device according to any one of the sixth to tenth aspect; a storage configured to store encoded data of the image from or for the device according to any one of the sixth to tenth aspect; and a provision means configured to provide information regarding the stored encoded data and, when the image is requested, provide the stored encoded data. Suitably, the providing the stored encoded data comprises directly or indirectly streaming the encoded data.

[0017] According to a thirteenth aspect of the present invention, there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method according to any one of the first, second, third, fourth, and fifth aspect of the invention. According to a fourteenth aspect of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed causes the method according to any one of the first, second, third, fourth, and fifth aspect of the invention to be performed. According to a fifteenth aspect of the present invention, there is provided a signal or a carrier (wave) carrying an information dataset for an image encoded using the method according to the fourth aspect and represented by a bitstream, the image comprising a set of reconstructable samples, each reconstructable sample having a sample value, wherein the information dataset comprises control data for controlling filtering on first component samples based on sample values of second component samples of the reconstructable samples. According to a sixteenth aspect of the present invention, there is provided a media storage device storing the signal or the carrier (wave) according to the fifteenth aspect. According to a seventeenth aspect of the present invention, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed causes the method according to any one of the first, second, third, fourth, and fifth aspect of the invention to be performed. According to an eighteenth aspect of the present invention, there is provided a circuitry, or one or more processor(s) and a memory, configured to perform the method according to any one of the first, second, third, fourth, and fifth aspect of the invention. According to a nineteenth aspect of the invention, there is provided a bitstream generated using the encoding method according to the fourth aspect of the invention.

[0018] According to a twentieth aspect of the present invention, there is provided a method of processing one or more portions of an image, an image portion having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, wherein the method comprises: controlling a filter for a boundary of the chroma portion so that chroma samples which are adjacent to the boundary as well as at least one chroma sample not adjacent to the boundary are filterable. Suitably, the image portion has been chroma subsampled to obtain the chroma portion. Suitably, the controlling comprises determining whether one or more of the filterable chroma samples are filterable by another filter for another boundary, and if so, disabling one or more of: use of either, or both of, the filter or the other filter on the one or more filterable chroma sample; use of the other filter on the other boundary; or use of the filter on the boundary. Suitably, the controlling comprises determining whether one or more of the filterable chroma samples is adjacent to another boundary, and if so, disabling use of the filter on one or more of: the one or more chroma sample adjacent to the other boundary; the other boundary; or the boundary. Suitably, a chroma portion comprises two or more chroma groups/blocks/units of chroma samples with the boundary there-between. Suitably, a range of samples filterable by the filter is defined to be less than (or equal to) half the number of samples in one dimension (e.g. a length or width) of the chroma group/block/unit (adjacent to the boundary). Suitably, the luma portion comprises two or more luma groups/blocks/units of luma samples with a boundary there-between. Suitably, the method further comprises controlling a luma filter for a boundary of the luma portion so that luma samples are filterable by the luma filter, wherein: when the filterable chroma samples are associated with the filterable luma samples or when the luma portion boundary is associated with the chroma portion boundary, the controlling the filter for the boundary of the chroma portion is based on a parameter for use with the controlling the luma filter. Suitably, the controlling the filter comprises: determining whether the chroma portion boundary is associated with the luma portion boundary; and if so, controlling the filter for the chroma portion boundary based on a plurality of the luma samples or the parameter for use with the controlling the luma filter; or if not, controlling the filter for the chroma portion boundary based only on chroma samples, or independent of the plurality of luma samples or the parameter for use with the controlling the luma filter. Suitably, the chroma samples (or the chroma portion) and the luma samples (or the luma portion) correspond to/are associated or collocated with the same group/block/unit of pixels/elements of the image or the image portion. Suitably, the chroma portion boundary and the luma portion boundary are at the same position and/or correspond to/are associated or collocated with the same pixels of the image or the image portion.
Suitably, the controlling the filter comprises determining, based on a plurality of the luma samples in the luma portion, at least one of: whether to use or not use the filter on the boundary of the chroma portion; enabling or disabling use of the filter on the boundary of the chroma portion; a filtering parameter for use with the filter when filtering the boundary of the chroma portion; or (a predictor/estimate for) a measure of a variation among a plurality of the chroma samples in the chroma portion. Suitably, the determining is based on a filtering decision made for controlling or use of the luma filter on one or more luma boundary(ies). Alternatively, the determining is independent of a filtering decision made for controlling the luma filter. Suitably, the determining is dependent on a first process/condition/determination based on luma samples, and the filtering decision made for the luma filter is dependent on a second process/condition/determination based on luma samples, wherein the first and second processes/conditions/determinations are different from one another. Suitably, the filtering decision made for the luma filter relates to at least one of: whether to use or not use the luma filter for the luma samples/luma portion; enabling or disabling use of the luma filter for the luma samples/luma portion; or a filtering parameter for use with the luma filter when filtering the luma samples/luma portion. Alternatively, the determining is also based on one or more chroma samples or quantization parameters for use with the chroma samples. Suitably, the luma portion boundary is filterable using a HEVC or H.264 compliant luma deblocking filter. Suitably, the luma filter is a HEVC or H.264 compliant luma deblocking filter. Suitably, the filter is applied with filtering parameters that are independent of chroma samples more than two samples away from the boundary. Suitably, when a corresponding boundary of the luma portion corresponds to another boundary of the chroma portion, the filter is applied with filtering parameters that are independent of chroma samples more than a predetermined number of samples away from the boundary. Suitably, the predetermined number is based on a number of luma samples corresponding to one chroma sample, e.g. the predetermined number represents a difference in resolution/density/size between the chroma and luma portion (e.g. a ratio of luma samples to chroma samples). Suitably, the predetermined number is two (e.g. in the case of a 4:2:0 YUV colour format), and the filtering parameters are independent of chroma samples more than two samples away from the boundary. Suitably, the corresponding boundary of the luma portion partially overlaps either, or both, the boundary and the other boundary of the chroma portion. Suitably, the filter is applied with filtering parameters that are independent of a filterable chroma sample not adjacent to the boundary. Suitably, when said filterable chroma sample not adjacent to the boundary is filterable by (or is to be filtered using) a filter for filtering a different boundary, the filter is applied to the chroma sample with filtering parameters that are independent of the filterable chroma sample. Suitably, the filtering parameter is for controlling at least one of: a filter strength; a filter type; or a filter range. 
Suitably, the filter is applied with filtering parameters that are independent of chroma samples that are further away from the boundary than the furthest sample among the one or more samples not adjacent to the boundary. Suitably, a first chroma sample on one side of the boundary is filtered independently from a second chroma sample on another side of the boundary. Suitably, the controlling the filter comprises determining: whether to use or not use the filter on the first chroma sample independently of whether to use or not use the filter on the second chroma sample; or enabling or disabling use of the filter on the first chroma sample independently of enabling or disabling use of the filter on the second chroma sample. Suitably, a first filtering parameter(s) for use with filtering the first chroma sample is/are different from a second filtering parameter(s) for use with filtering the second chroma sample. Suitably, the independent filtering is based on a type of a chroma block or unit to which the first or second chroma sample belongs. Suitably, if the type of the chroma block/unit is inter coded portion/block/unit/tile/slice (i.e. the chroma block/unit is inter coded), a first process/determination takes place, and if not, a second process/determination takes place, wherein the first and second processes/determinations are different. Suitably, if the type of the chroma block/unit is intra coded portion/block/unit/tile/slice (i.e. the chroma block/unit is intra coded), a first process/determination takes place, and if not, a second process/determination takes place, wherein the first and second processes/determinations are different. Suitably, the first process/determination comprises: determining whether the chroma portion boundary has an associated boundary in the luma portion; and if so, controlling the filter for the chroma block/unit/samples based on a plurality of the luma samples in the luma portion or a parameter for use with filtering the associated luma portion boundary; or if not, controlling the filter for the chroma block/unit/samples based only on chroma samples, or independent of the plurality of luma samples in the luma portion or the parameter for use with the filtering the associated luma portion boundary. Suitably, the independent filtering is based on a size of a chroma block or unit to which the first or second chroma sample belongs. Suitably, if the size of the chroma block/unit in one dimension (e.g. a number of samples in a length or width direction) is less than two times a number of filterable chroma samples in a filter range of the filter (i.e. twice the number of chroma samples that are filterable by, or within a filtering range of, the filter), a first process/determination takes place, and if not, a second process/determination takes place, wherein the first and second process/determinations are different. Suitably, the first process/determination comprises determining to not use or disable the filtering for the chroma block/unit, or not use or disable the filtering for the filter range number of samples furthest from the boundary. Suitably, the second process/determination comprise determining to use or enable the filtering for the chroma block/unit or the first/second chroma sample. 
Suitably, the filter is applied with filtering parameters determined based on a quantization parameter Q (or QP or Qp or qP) and one or more of: pJi, a sample value of a chroma sample J samples away from the boundary on one side of the boundary so that p0i is a nearest chroma sample; and qJi, a sample value of a chroma sample J samples away from the boundary on another side of the boundary so that q0i is a nearest chroma sample. Suitably, filtering parameters are determined based on:

Δ = Clip3( -tc, tc, ( ( ( ( q0i - p0i ) « 2 ) + p1i - q1i + 4 ) » 3 ) ); Δp = Δ » 1; Δq = Δ » 1; p1i' = Clip1C( p1i + Δp ); and q1i' = Clip1C( q1i - Δq ),

wherein: » is a bitwise right shift operator and « is a bitwise left shift operator; Clip3(min, max, val) is a function where if val<min then Clip3(min, max, val) = min, if val>max then Clip3(min, max, val) = max, and otherwise Clip3(min, max, val) = val; Clip1C(val) = Clip3(minC, maxC, val), and minC and maxC are a minimum and maximum chroma sample value; tc is a parameter determined based on a quantization parameter Q; pJi is a sample value of a chroma sample J samples away from the boundary on one side of the boundary so that p0i is a nearest chroma sample; and qJi is a sample value of a chroma sample J samples away from the boundary on another side of the boundary so that q0i is a nearest chroma sample. Suitably, the quantization parameter Q (or QP or Qp) is that of the luma portion or the chroma portion, the filtering parameter is determined using a variable tc, and tc is determined based on a mapping function between Q and tc. Suitably, the mapping function is based on a lookup table mapping values of Q to values of tc.

[0019] Suitably, the image portion is processed in a colour format of 4:2:0.
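
The filtering operation written out above may be sketched as follows, assuming minC = 0 and assuming tc has already been obtained from the Q-to-tc mapping just mentioned.

// Sketch of the chroma filtering equations of paragraph [0018]. Assumes
// minC = 0; tc comes from the Q-to-tc mapping discussed in the text.
#include <algorithm>

int clip3(int lo, int hi, int v) { return std::min(std::max(v, lo), hi); }
int clip1C(int v, int maxC)      { return clip3(0, maxC, v); }

// p1,p0 | q0,q1 are chroma samples on either side of the boundary, p0/q0 being
// the nearest. Note that p1 and q1, samples not adjacent to the boundary, are
// the ones modified here, consistent with the twentieth aspect.
void filterChromaLine(int& p1, int p0, int q0, int& q1, int tc, int maxC)
{
    const int delta  = clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3));
    const int deltaP = delta >> 1;
    const int deltaQ = delta >> 1;
    p1 = clip1C(p1 + deltaP, maxC);
    q1 = clip1C(q1 - deltaQ, maxC);
}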

[0020] According to a twenty-first aspect of the present invention, there is provided a method of controlling a filter for one or more portions of an image, luma samples of the one or more image portions being filterable by a HEVC compliant luma deblocking filter, the method comprising controlling a filter for a boundary among chroma samples of the one or more image portions so that chroma samples which are adjacent to the boundary as well as at least one chroma sample not adjacent to the boundary are filterable. Suitably, the controlling is independent of a filtering decision made for the HEVC compliant luma deblocking filter. Suitably, the controlling is dependent on a first process/condition/determination, and the filtering decision made for the luma deblocking filter is dependent on a second process/condition/determination, wherein the first and second processes/conditions/determinations are different from one another. Suitably, the filtering decision made for the luma deblocking filter relates to at least one of: whether to use or not use the luma deblocking filter; enabling or disabling use of the luma deblocking filter; or a filtering parameter for use with the luma deblocking filter. Suitably, the controlling is also based on one or more chroma samples or quantization parameters for use with the chroma samples. Suitably, when the filterable chroma samples are associated with the filterable luma samples, the controlling the filter for the boundary among the chroma samples is based on a plurality of the filterable luma samples or a parameter for use with controlling the HEVC compliant luma deblocking filter. Suitably, the controlling the filter comprises: determining whether the chroma boundary is associated with a boundary among luma samples of the one or more image portions (i.e. a luma boundary); and if so, controlling the filter based on the plurality of the luma samples or the parameter for use with the controlling the HEVC compliant luma deblocking filter; or if not, controlling the filter based only on chroma samples, or independent of the plurality of luma samples or the parameter for use with the controlling the HEVC compliant luma deblocking filter. Suitably, the chroma samples (or a chroma portion) and the luma samples (or a luma portion) correspond to/are associated or collocated with the same group/block/unit of pixels/elements of the image or the image portion. Suitably, the chroma boundary and the luma boundary are at the same position and/or correspond to/are associated or collocated with the same pixels of the image or the image portion. Suitably, the controlling the filter comprises determining, based on a plurality of the luma samples, at least one of: whether to use or not use the filter; enabling or disabling use of the filter; a filtering parameter for use with the filter when filtering the chroma boundary; or (a predictor/estimate for) a measure of a variation among a plurality of the chroma samples. Suitably, the determining is independent of a filtering decision made for the HEVC compliant luma deblocking filter. Suitably, the determining is dependent on a first process/condition/determination based on luma samples, and the filtering decision made for the HEVC compliant luma deblocking filter is dependent on a second process/condition/determination based on luma samples, wherein the first and second processes/conditions/determinations are different from one another.
Suitably, the filtering decision made for the HEVC compliant luma deblocking filter relates to at least one of: whether to use or not use the HEVC compliant luma deblocking filter for the luma samples; or enabling or disabling use of the HEVC compliant luma deblocking filter for the luma samples. Suitably, the determining is also based on one or more chroma samples or quantization parameters for use with the chroma samples.

[0021] According to a twenty-second aspect of the present invention, there is provided a method of encoding an image, the method comprising, for one or more portions of the image, processing according to the twentieth aspect of the present invention, or controlling a filter according to the twenty-first aspect of the present invention. Suitably, the method further comprises: receiving an image; encoding the received image to produce a bitstream; and processing the encoded image, wherein the processing comprises the processing according to the twentieth aspect, or the controlling according to the twenty-first aspect of the invention.

[0022] According to a twenty-third aspect of the present invention, there is provided a method of decoding an image, the method comprising, for one or more portions of the image, processing according to the twentieth aspect, or controlling according to the twenty-first aspect of the invention. Suitably, the method further comprises: receiving a bitstream; decoding the received bitstream to obtain an image; and processing the obtained image, wherein the processing comprises processing according to the twentieth aspect, or controlling according to the twenty-first aspect of the invention.

[0023] According to a twenty-fourth aspect of the present invention, there is provided a device for processing one or more portions of an image, an image portion having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, the device comprising a controller configured to control a filter for a boundary of the chroma portion so that chroma samples which are adjacent to the boundary as well as at least one chroma sample not adjacent to the boundary are filterable. Suitably, the image portion has been chroma subsampled to obtain the chroma portion. Suitably, the controller is configured to perform the method according to the twentieth aspect, or the twenty-first aspect of the invention.

[0024] According to a twenty-fifth aspect of the present invention, there is provided a device for controlling a filter for one or more portions of an image, luma samples of the one or more image portions being filterable by a HEVC compliant luma deblocking filter, the device comprising a controller configured to control a filter for a boundary among chroma samples of the one or more image portions so that chroma samples which are adjacent to the boundary as well as at least one chroma sample not adjacent to the boundary are filterable.

[0025] According to a twenty-sixth aspect of the present invention, there is provided a device for encoding an image, the device comprising the device according to the twenty-fourth aspect or the twenty-fifth aspect of the invention. Suitably, the device is configured to perform the method according to the twenty-second aspect of the invention.

According to a twenty-seventh aspect of the present invention, there is provided a device for decoding an image, the device comprising the device according to the twenty-fourth aspect or the twenty-fifth aspect of the invention. Suitably, the device is configured to perform the method according to the twenty-third aspect of the invention.

[0026] According to a twenty-eighth aspect of the present invention, there is provided a method of providing an image, the method comprising: storing encoded data of an image encoded using the encoding method according to the twenty-second aspect; providing information regarding the stored encoded data; and when the image is requested, providing the stored encoded data. Suitably, the providing the stored encoded data comprises directly or indirectly streaming the encoded data. According to a twenty-ninth aspect of the present invention, there is provided a system for providing an image, the system comprising: a device according to any one of the twenty-fourth to twenty-seventh aspect; a storage configured to store encoded data of the image from or for the device according to any one of the twenty-fourth to twenty-seventh aspect; and a provision means configured to provide information regarding the stored encoded data and, when the image is requested, provide the stored encoded data. Suitably, the providing the stored encoded data comprises directly or indirectly streaming the encoded data.

[0027] According to a thirtieth aspect of the present invention, there is provided a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method according to any one of the twentieth aspect, the twenty-first aspect, the twenty-second aspect, or the twenty-third aspect of the invention. According to a thirty-first aspect of the present invention, there is provided a computer-readable storage medium storing a computer program which, when executed, causes the method according to any one of the twentieth aspect, the twenty-first aspect, the twenty-second aspect, or the twenty-third aspect of the invention to be performed. According to a thirty-second aspect of the present invention, there is provided a signal or a carrier (wave) carrying an information dataset for an image encoded using the method according to the twenty-second aspect of the invention and represented by a bitstream, an image portion of the image having a chroma portion comprising chroma samples associated with the image portion and a luma portion comprising luma samples associated with the same image portion, wherein the information dataset comprises control data for controlling a filter for a boundary of a chroma portion so that chroma samples which are adjacent to the boundary as well as at least one chroma sample not adjacent to the boundary are filterable. Suitably, the image portion has been chroma subsampled to obtain the chroma portion.

[0028] According to a thirty-third aspect of the present invention, there is provided a media storage device storing the signal or the carrier (wave) according to the thirty-second aspect. According to a thirty-fourth aspect of the present invention, there is provided a non-transitory computer-readable storage medium storing a computer program which, when executed, causes the method according to any one of the twentieth aspect, the twenty-first aspect, the twenty-second aspect, or the twenty-third aspect of the invention to be performed. According to a thirty-fifth aspect of the present invention, there is provided a circuitry, or one or more processor(s) and a memory, configured to perform the method according to any one of the twentieth aspect, the twenty-first aspect, the twenty-second aspect, or the twenty-third aspect of the invention.

[0029] According to a thirty-sixth aspect of the invention, there is provided a bitstream generated using the encoding method according to the twenty-second aspect of the invention.

[0030] Further features, aspects, and advantages of the present invention will become apparent from the following description of embodiments with reference to the attached drawings. Each of the embodiments of the present invention described below can be implemented solely or as a combination of a plurality of the embodiments. Also, features from different embodiments can be combined where necessary or where the combination of elements or features from individual embodiments in a single embodiment is beneficial.

BRIEF DESCRIPTION OF THE DRAWINGS

[0031] Figures 1A-1C show block diagrams illustrating a decoder, an encoder or a system comprising an encoder and/or a decoder according to embodiments of the present invention;

[0032] Figure 2 shows a vertical boundary between two blocks;

[0033] Figure 3 shows a block diagram illustrating a luma deblocking filter used in HEVC;

[0034] Figures 4A & 4B show block diagrams illustrating deblocking filters according to embodiments of the invention;

[0035] Figure 5A shows splitting of a (digital) image into blocks of pixels in HEVC;

[0036] Figures 5B & 5C show splitting of a (digital) image based on a QuadTree plus Binary Tree (QTBT) being considered by JVET (Joint Video Exploration Team on Future Video Coding (FVC) or Versatile Video Coding (VVC) of ITU-T VCEG and ISO/IEC MPEG);

[0037] Figure 5D shows different types of boundaries in HEVC and in QTBT based partitioning;

[0038] Figure 6 shows a block diagram illustrating a deblocking filter with a luma sample based measure for a chroma filter on/off control according to an embodiment of the invention;

[0039] Figure 7 shows a block diagram illustrating a deblocking filter with a luma sample based measure for a chroma filter type control according to an embodiment of the invention;

[0040] Figure 8 shows a block diagram illustrating a deblocking filter with a luma sample based measure for controlling a chroma filter according to an embodiment of the invention;

[0041] Figure 9A shows filtering disabled and enabled boundaries according to an embodiment of the invention;

[0042] Figure 9B shows different types of boundaries and how they are processed according to an embodiment of the invention; and

[0043] Figure 10 shows an illustrative environment according to an embodiment of the invention.

DESCRIPTION OF THE EMBODIMENTS

[0044] Embodiments of the present invention will be described hereinafter in detail, with reference to the accompanying drawings. It is to be understood that the following embodiments are not intended to limit the claims of the present invention, and that not all of the combinations of the aspects that are described according to the following embodiments are necessarily required with respect to the means to solve the problems according to the present invention.

[0045] In order to illustrate how the present invention may be put into effect, embodiments described herein are based on how encoding and decoding processes are performed according to HEVC. However, the present invention is not limited thereto. It is understood that other embodiments of the present invention may be based on any process or device that involves processing of blocks of a video component. For example, a deblocking filter according to an embodiment of the present invention may be used in any video encoding or decoding process or device, such as a future video coding standard compliant device, wherein blocks of a first component corresponding to an image portion are processed after processing at least a part, or all, of blocks of a second component corresponding to the same image portion.

[0046] [Encoder/Decoder/System embodiments]

[0047] Figure 1A shows a (video) decoder according to an embodiment of the present invention, illustrated as a block diagram of a decoder 100 which may be used to receive data from an encoder according to an embodiment of the invention. The decoder is represented by connected (functional) modules (or units), each module being configured to implement, for example in the form of programming instructions to be executed by a circuit (or a circuitry) or processor (such as a CPU) of a device, a corresponding step of a method implemented by the decoder 100. The decoder 100 receives a bitstream 101 comprising data such as encoding units, each encoding unit comprising a header containing information on encoding (coding) or decoding parameters and a body containing the encoded video data. Said encoded video data is entropy encoded, and other data for encoding (coding) or decoding the encoded video data may also be encoded, e.g. motion vector predictors’ indexes for a given block and/or data indicative of an outcome from a filter control determination. The received entropy encoded video data is decoded by an (entropic) decoding module 102. The residual data is then dequantized by a dequantization module 103 and then an inverse transform is performed on it by an inverse transform module 104 to obtain pixel values.

[0048] It is understood that a pixel corresponds to an element of an image that typically comprises several components, for example a red component, a green component, and a blue component (or chroma components and a luma component). A sample or an image sample is an element of an image that comprises only one such component.

[0049] The mode data indicating a coding mode may also be entropy decoded, and based on the mode, an INTRA type decoding or an INTER type decoding is performed on the encoded blocks of image data. In the case of INTRA mode, an INTRA predictor is determined by an intra prediction module 105 based on the intra prediction mode specified in the bitstream. If the mode is INTER mode, motion prediction information is extracted from the bitstream so as to find a reference area/frame/image/picture used by the encoder. The motion prediction information comprises a reference frame/image/picture index and a motion vector residual. The motion vector predictor is added to the motion vector residual in order to obtain the motion vector by a motion vector (Mv) decoding module 110. The motion vector decoding module 110 performs motion vector decoding for each current block (the block currently being processed or decoded) encoded by the motion prediction. Once an index of a motion vector predictor, for use in processing the current block, has been obtained, the actual value of the motion vector associated with the current block can be decoded and used to perform a motion compensation by a motion compensation module 106. A reference image/picture portion indicated by the decoded motion vector is extracted from a reference image/picture 108 to perform the motion compensation. Motion vector field data 111 is updated with the decoded motion vector in order for it to be used for a prediction of other or subsequent motion vectors to be decoded.

[0050] Once a decoded block has been obtained this way, post filtering is performed by a post filtering module 107. For example, post filtering techniques such as a Sample Adaptive Offset (SAO) and/or deblocking filtering according to the present invention are performed. After the post filtering has been performed, a decoded video signal 109 is provided (output) by the decoder 100.

[0051] It should be noted that usually, at least some modules of the decoder 100 are also present in a corresponding encoder. For example, the post-filtering module 107, or the dequantization module 103 or the inverse transform module 104 may be present in the encoder so that the corresponding encoder is able to obtain/determine data available to the decoder 100, whereby this obtained/determined data (e.g. a reconstructed image data) can be used for prediction with reference images/pictures or entropy.

[0052] Figure 1B shows an encoder according to an embodiment of the present invention, illustrated as a block diagram of an encoder 150 which corresponds to the decoder 100 of Figure 1A. The encoder 150 is represented by connected (functional) modules/units, each module being adapted to implement, for example in the form of programming instructions to be executed by a circuit (or a circuitry) or a processor (such as a CPU) of a device, a corresponding step of a method implemented by the encoder 150. The (video) encoder 150 receives/obtains an original sequence 151 (for example, an image or a plurality of images), which is divided into blocks of pixels 152 called coding blocks or coding units in HEVC (it is understood that such a block of pixels is a group or set of pixels that are associated with each other for a functional purpose such as processing, encoding or decoding purposes). A process based on a coding mode is then performed on each block. For example, there are two types of coding modes typically used in video coding: coding modes based on spatial prediction ("INTRA modes" or Intra prediction 153); and coding modes based on temporal prediction ("INTER modes" or Inter prediction) based on motion estimation 154 and motion compensation 155.

[0053] In spatial prediction (i.e. the INTRA mode), an INTRA coding block is generally predicted from encoded pixels at its causal boundary (i.e. pixels around or near a boundary/boundaries of reconstructed or available coding block(s) in the same frame or image) by a process called INTRA prediction. A predictor for each pixel of the INTRA coding block thus forms a predictor block. Depending on which pixels are used to predict the INTRA coding block, various INTRA modes may be used: for example, a DC mode, a planar mode and angular modes. Temporal prediction (i.e. the INTER mode) comprises identifying, in a previous or future frame, called a reference frame (image/picture) 168, a reference area (e.g. a reference image/picture portion) which is deemed to be the closest to the coding block by a motion estimation module 154. This reference area constitutes the predictor block for this coding block. This coding block is then predicted using the predictor block to determine/compute the residue or residual block by a motion compensation module 155. In both spatial and temporal prediction, a residue or residual block is determined/computed by subtracting the obtained predictor block from the coding block. In the INTRA mode, a prediction mode is encoded. In the INTER mode, an index indicating/identifying the reference frame/image/picture used and a motion vector indicating/identifying the reference area in the reference frame/image/picture are encoded. However, in order to further reduce bitrate cost related to motion vector encoding, a motion vector may not be directly encoded. Indeed, assuming that the motion is homogeneous, it is particularly advantageous to encode the motion vector as a difference between this motion vector and a motion vector (or motion vector predictor) from its surroundings. So only a difference, obtained by the "Mv prediction and coding" module 160, also called a residual motion vector, may be encoded in the bitstream. The value of each encoded vector is then stored in a motion vector field 161. The neighbouring motion vectors, used for the prediction, are then extracted from this motion vector field 161. The HEVC standard uses three different INTER modes: an Inter mode, a Merge mode and a Merge Skip mode, which mainly differ from each other by the signalling of motion information (i.e. the motion vector and its associated reference frame/image/picture through its reference frame/image/picture index) in the bitstream 101. For the sake of simplicity, the terms motion vector and motion information are used interchangeably hereinafter. Regarding motion vector prediction, HEVC provides several candidates of motion vector predictor that are evaluated during a rate-distortion competition in order to find the best motion vector predictor or the best motion information for the Inter or the Merge mode. An index corresponding to the best predictor or the best candidate for the motion information is then inserted in the bitstream 101 so that a decoder can derive the same set of predictors or candidates and use the best one according to the decoded index.

[0054] Then a coding mode (INTER or INTRA mode) optimizing a rate-distortion criterion for the coding block currently being processed is selected by the Selection module 156. In order to further reduce redundancies within the obtained residue data, a transform, typically a DCT, is performed on the residual block by the transform module 157, and a quantization is performed on the obtained coefficients by the quantization module 158. The quantized block of coefficients is then entropy coded by the entropic coding module 159 and the result is inserted into the bitstream 101.

[0055] The encoder 150 then performs decoding of each of the encoded blocks of the frame/image/image portion for a future motion estimation in the modules 173 to 177. These steps allow the encoder 150 and the decoder 100 to have the same reference frame(s)/image(s)/picture(s) 168. To reconstruct the coded frame, each of the quantized and transformed residual blocks is dequantized by the dequantization module 173 and inverse transformed by the inverse transform module 174 in order to obtain/provide a "reconstructed" residual block in the pixel domain. Due to loss from the quantization, this "reconstructed" residual block differs from the original residual block obtained/processed at the MV prediction and coding module 160. So according to the coding mode selected at the selection module 156 (INTER or INTRA), this "reconstructed" residual block is added to either an INTER predictor block by a motion compensation module 176 or an INTRA predictor block by an intra prediction module 175, to obtain a pre-reconstructed block (a coding block). Then, the reconstructed blocks are filtered in a post filtering module 177 by performing one or several kinds of post filtering (comprising a deblocking filtering according to the present invention) to obtain the final/actual "reconstructed" blocks (the coding blocks). The same post filters (comprising the deblocking filtering according to the present invention) are integrated at the encoder (in the decoding loop 173-177) and at the decoder to remove compression (quantisation) artefacts. They are used in the same way (or in a consistent manner) in order to obtain the same reference frames/images/pictures at both encoder and decoder sides so that the same reference frames/images/pictures can be used during both encoding and decoding processes.

[0056] Figure 1C shows a system 191, 195 comprising at least one of an encoder 150 or a decoder 100 and a communication network 199 according to embodiments of the present invention. According to an embodiment, the system 195 is for processing and providing a content (for example, a video and audio content for displaying/outputting or streaming video/audio content) to a user, who has access to the decoder 100, for example through a user interface of a user terminal comprising the decoder 100 or a user terminal that is communicable with the decoder 100. Such a user terminal may be a computer, a mobile phone, a tablet or any other type of a device capable of providing/displaying the (provided/streamed) content to the user. The system 195 obtains/receives a bitstream 101 (in the form of a continuous stream or a signal - e.g. while earlier video/audio are being displayed/output) via the communication network 199. According to an embodiment, the system 191 is for processing a content and storing the processed content, for example a video and audio content processed for displaying/outputting/streaming at a later time. The system 191 obtains/receives a content comprising an original sequence of images 151, which is received and processed (including filtering with a deblocking filter according to the present invention) by the encoder 150, and the encoder 150 generates a bitstream 101 that is to be communicated to the decoder 100 via the communication network 199. The bitstream 101 is then communicated to the decoder 100 in a number of ways, for example it may be generated in advance by the encoder 150 and stored as data in a storage apparatus in the communication network 199 (e.g. on a server or a cloud storage) until a user requests the content (i.e. the bitstream data) from the storage apparatus, at which point the data is communicated/streamed to the decoder 100 from the storage apparatus. The system 191 may also comprise a content providing apparatus for providing/streaming, to the user (e.g. by communicating data for a user interface to be displayed on a user terminal), content information for the content stored in the storage apparatus (e.g. the title of the content and other meta/storage location data for identifying, selecting and requesting the content), and for receiving and processing a user request for a content so that the requested content can be delivered/streamed from the storage apparatus to the user terminal. Alternatively, the encoder 150 generates the bitstream 101 and communicates/streams it directly to the decoder 100 as and when the user requests the content. The decoder 100 then receives the bitstream 101 (or a signal) and performs filtering with a deblocking filter according to the invention to obtain/generate a video signal 109 and/or audio signal, which is then used by a user terminal to provide the requested content to the user.

[0057] [A HEVC luma deblocking filter]

[0058] An (in-loop) luma deblocking filter used in HEVC is described herein as an example of how a finer (or higher resolution or more complex) deblocking filter might be used in an encoding and decoding context. It is understood that a deblocking filter according to an embodiment of the present invention may be used with such a luma deblocking filter or any other luma deblocking filter (or indeed any other deblocking filter for any component).

[0059] Figure 2 shows a vertical boundary/edge/border 201 between two blocks P and Q of samples (of a component). This vertical boundary 201 is 4 samples long, and the luma deblocking in HEVC is based on the first and last rows of luma samples 203, 205. In this example the blocks P and Q are of samples from the same component. It is understood that according to another embodiment, the blocks P and Q may be of any values associated with a pixel or other elements of a pixel/image portion such as components corresponding to a particular block of pixel(s) or a block of pixel(s) for the image portion. In this embodiment, for example, each pixel of such a block corresponds to a set of samples for each component, i.e. the chroma or luma component. Hereinafter, the samples of the blocks P and Q are collectively referred to as a sample set 200. Samples pJi are in block P, which has been processed/encoded before block Q, which contains samples qJi. It is understood that although a vertical boundary is described herein, the present invention is not limited thereto. For example, an embodiment of the present invention may be used with a horizontal or any other type (such as a diagonal, angular, oblique or non-linear) boundary/edge/border between two or more blocks, with appropriate modifications such as defining the samples nearest or adjacent to the boundary as p0i & q0i (i.e. J=0), with J increasing with the sample's distance from the boundary.

[0060] Figure 3 shows a block diagram illustrating a luma deblocking filter used in HEVC. The goal of using a deblocking filter (DBF in short) is to reduce blocking artefacts usually caused by transform coding: as transform coefficients are quantized, imperfect reconstruction occurs in modules 103 and 104 shown in Figure 1A and modules 173 and 174 shown in Figure 1B. This can lead to discontinuities at the boundaries (e.g. at the vertical boundary 201) of transform or prediction blocks (i.e. at the borders or edges each of which is a dividing boundary/interface between at least two blocks or two or more groups of samples/pixels). This can be one of the most distracting artefacts for a viewer besides those caused by reduced coding efficiency.

[0061] In HEVC, a luma deblocking filter is applied using the following three steps:

[0062] (1) filtering type determination at a vertical or horizontal boundary, usually based on whether elements (i.e. samples or pixels) on one of the sides of the boundary were encoded in the INTRA mode, or whether there is a motion discontinuity at this boundary;

[0063] (2) calculation of a measure (akin to a“flatness” or“smoothness” indication) and comparison thereof with a threshold, whereby whether to filter a set of samples/pixels is determined and if so which filter to use is selected; and

[0064] (3) once the filter to use has been selected, filtering said samples, usually by using a linear combination of their values and imposing restrictions on the range of the linear combination result around said values.

[0065] An example of a boundary strength (bS) determination for steps (1) & (2) can be described by the following table:

[0066] - bS table

[0067] So, for P & Q, depending on their prediction type (INTRA or INTER) and the presence/absence of residual data and/or motion information, bS can take a value among 0, 1 or 2. Such a boundary strength determination is therefore not dependent on the actual sample values (e.g. pJi or qJi); instead it relies on coding information such as the mode or the motion information. If bS is zero, no filtering occurs for that set of samples (or corresponding pixels), and the following filter control steps are not performed (i.e. skipped).
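
Purely as an illustration of how such a coding-information-based decision could be implemented (and not as a reproduction of the bS table, which is not shown here), the following Python sketch returns a boundary strength from coding information only; the argument names and the exact triggering conditions for bS=1 are assumptions.

    def boundary_strength(p_is_intra, q_is_intra,
                          p_has_residual, q_has_residual,
                          motion_discontinuity):
        # bS = 2 when at least one side of the boundary is INTRA coded.
        if p_is_intra or q_is_intra:
            return 2
        # bS = 1 when residual data is present or there is a motion
        # discontinuity (e.g. different references or motion vectors).
        if p_has_residual or q_has_residual or motion_discontinuity:
            return 1
        # bS = 0: no filtering, the remaining control steps are skipped.
        return 0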

[0068] It is understood that use of a deblocking filter according to an embodiment of the invention is not limited to use thereof with these particular three steps or the luma deblocking filter from HEVC. For example, other embodiments of the invention may be used with a different boundary strength (bS) definition which defines additional or alternative boundary strength value (e.g. a“4”) based on whether a(nother) condition has been satisfied or not.

[0069] Once it is determined that a filtering is going to be performed, deblocking filter parameters need to be determined to perform step (3). For the HEVC luma deblocking filter, luma control parameters are selected using two parameters β and tc. A threshold, for limiting/restricting a level of flatness or variation in values across more than one sample, is derived from the bS value as well as a variable qPL, offsets set at the frame or slice level (slice_beta_offset_div2 or slice_tc_offset_div2), quantization parameters (QPs) such as QpQ & QpP of the blocks on each side of the boundary, and the bit depth values BitDepthY of luma and BitDepthC of chroma. For example, formulae for determining these parameters in HEVC are:

[0070] For β', qPL = ( ( QpQ + QpP + 1 ) >> 1 ),

Q = Clip3( 0, 51, qPL + ( slice_beta_offset_div2 << 1 ) ), where Q is the luma quantization parameter, the variables QpQ and QpP are quantizer steps (i.e. quantization parameters) of blocks Q & P respectively, which are set equal to the QpY values of the coding units which include P & Q containing the samples q0_0 & p0_0; and

the mapping from Q to β' is provided in the table shown below.

[0071] For tc', Q = Clip3( 0, 53, qPL + 2 * ( bS - 1 ) + ( slice_tc_offset_div2 << 1 ) ), and the mapping from Q to tc' is provided in the same table shown below.

[0072] Then, to obtain β & tc, β' & tc' are scaled according to bit depths as follows:

β = β' * ( 1 << ( BitDepthY - 8 ) ); and tc = tc' * ( 1 << ( BitDepthY - 8 ) ).
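
As a purely illustrative summary of the above derivation, the following Python sketch computes β and tc from the quantization parameters; beta_table and tc_table stand in for the Q-to-β'/tc' mapping table referred to above (not reproduced here) and are assumed to be supplied by the caller.

    def clip3(lo, hi, val):
        return lo if val < lo else hi if val > hi else val

    def luma_thresholds(qp_p, qp_q, bS, bit_depth_y,
                        slice_beta_offset_div2, slice_tc_offset_div2,
                        beta_table, tc_table):
        # Average luma QP of the two blocks adjacent to the boundary.
        qp_l = (qp_q + qp_p + 1) >> 1
        # Index used to look up beta' in the Q-to-beta'/tc' mapping table.
        q_beta = clip3(0, 51, qp_l + (slice_beta_offset_div2 << 1))
        beta_prime = beta_table[q_beta]
        # Index used to look up tc'; it additionally depends on bS.
        q_tc = clip3(0, 53, qp_l + 2 * (bS - 1) + (slice_tc_offset_div2 << 1))
        tc_prime = tc_table[q_tc]
        # Scale according to the luma bit depth.
        beta = beta_prime * (1 << (bit_depth_y - 8))
        tc = tc_prime * (1 << (bit_depth_y - 8))
        return beta, tc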

[0073] Note, Clip3(min, max, val) is a function wherein: if val < min, the result is min; if val > max, the result is max; and otherwise, the result is val.

[0074] Note, ">>" is a bitwise right shift operator and "<<" is a bitwise left shift operator.

[0075] - Q-to-β'/tc' mapping table

[0076] Figure 3 shows a block diagram illustrating the HEVC luma deblocking filter. At step 310, it is checked whether the boundary for a set of samples lies on a Prediction Unit (PU) or a Transform Unit (TU) boundary. Step 311 checks whether the boundary strength bS (e.g. as defined based on the bS table previously discussed) is a non-zero value. If yes, the boundary is to be filtered (if no, the boundary is not to be filtered). Various gradient strengths (e.g. a measure of how much the neighbouring or nearby samples vary) are measured/considered in step 312. For example, a total sum (dLuma) of these strengths, which indicates a flatness or variation in the values across more than one sample from each side of the boundary (i.e. a measure or an indicator representative/indicative of a flatness/variation), is measured and compared with β. In HEVC, this comparison is based on the following conditions using the first and last rows of luma samples 203, 205 in Figure 2:

dLuma = | p2_0 - 2*p1_0 + p0_0 | + | p2_3 - 2*p1_3 + p0_3 | + | q2_0 - 2*q1_0 + q0_0 | + | q2_3 - 2*q1_3 + q0_3 |; and dLuma < β.
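
A minimal Python sketch of this measure, assuming the luma samples are addressed as p[J][i] and q[J][i] with J the distance from the boundary and i the row index as in Figure 2 (the function name is illustrative):

    def d_luma(p, q):
        # Sum of second-order gradients for rows 0 and 3 on both sides.
        d = 0
        for i in (0, 3):
            d += abs(p[2][i] - 2 * p[1][i] + p[0][i])
            d += abs(q[2][i] - 2 * q[1][i] + q[0][i])
        return d

    # The boundary is deblocked only if d_luma(p, q) < beta.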

[0077] If this sum is low enough (i.e. lower than β), at least one block on a side of the boundary is considered flat (uniform) enough for blocking artefacts to be visible, and the deblocking process continues to step 313. Otherwise, the set of samples is considered not to need filtering, and its deblocking ends at step 318.

[0078] It is noted that the determination of β & tc discussed previously can also be performed at step 312, since the parameters may be dependent on positions relative to the boundary (because the QPs on the P side may vary since, for example, chroma & luma partitions may not fully coincide/align with each other when QTBT is used).

[0079] Luma samples can be filtered using a strong 316 or a weak 315 filtering in HEVC. In order to decide which filtering to use, gradients or measures of variation on each side of the boundary are measured. In HEVC, these are then compared with β using the following conditions:

| p2i - 2*p1i + p0i | + | q2i - 2*q1i + q0i | < β/8; and | p3i - p0i | + | q0i - q3i | < β/4.

[0080] If both these conditions are met, the strong filtering is applied, otherwise the weak filtering is applied. Another filter decision to make relates to which side of the boundary the filtering will be applied. To reduce complexity, such a decision may reuse some of the variables calculated in the above conditions (e.g. for dLuma), for example: if | p2_0 - 2*p1_0 + p0_0 | + | p2_3 - 2*p1_3 + p0_3 | < (β + β/2)/8, then the P samples are indeed filtered; and if | q2_0 - 2*q1_0 + q0_0 | + | q2_3 - 2*q1_3 + q0_3 | < (β + β/2)/8, then the Q samples are indeed filtered.
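
For illustration, the following Python sketch expresses these two decisions using the simplified conditions given above and the same p[J][i]/q[J][i] indexing (function and argument names are assumptions, not normative definitions):

    def use_strong_filter(p, q, i, beta):
        # Per-row choice between the strong and the weak luma filter.
        cond1 = (abs(p[2][i] - 2 * p[1][i] + p[0][i])
                 + abs(q[2][i] - 2 * q[1][i] + q[0][i])) < beta / 8
        cond2 = (abs(p[3][i] - p[0][i]) + abs(q[0][i] - q[3][i])) < beta / 4
        return cond1 and cond2

    def filter_p_side(p, beta):
        # Reuses the per-row gradients already computed for d_luma.
        dp = (abs(p[2][0] - 2 * p[1][0] + p[0][0])
              + abs(p[2][3] - 2 * p[1][3] + p[0][3]))
        return dp < (beta + beta / 2) / 8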

[0081] Depending on the luma filter type chosen, HEVC can filter all qJi or pJi, or only some specific ones. For example, both filters use a convolution over (e.g. a linear combination of) samples around the sample being processed. The result of said convolution/linear combination may directly (as is the case for strong filtering) or indirectly (e.g. weak filtering) be used in filtering, replacing or modifying a sample.

[0082] In the HEVC luma strong filter, for samples p2i to p0i and q0i to q2i, a linear combination of themselves with weighting is computed. For example, if any one of the samples does not belong in the sample set 200, it is omitted and the weights for the rest of the samples in the linear combination are then adjusted accordingly. Without listing all of them (e.g. they are symmetrical for qJi), this results in the following expressions for strong filtering each luma sample in HEVC:

[0083] p2i' = Clip3( p2i - 2*tc, p2i + 2*tc, ( 2*p3i + 3*p2i + p1i + p0i + q0i + 4 ) >> 3 );

[0084] p1i' = Clip3( p1i - 2*tc, p1i + 2*tc, ( p2i + p1i + p0i + q0i + 2 ) >> 2 ); and

[0085] p0i' = Clip3( p0i - 2*tc, p0i + 2*tc, ( p2i + 2*p1i + 2*p0i + 2*q0i + q1i + 4 ) >> 3 ).
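
A Python sketch of the strong filtering of the P side for one row i, following the three expressions above (the Q side being symmetrical); the p[J][i]/q[J][i] indexing and function name are assumptions:

    def strong_filter_p(p, q, i, tc):
        def clip3(lo, hi, val):
            return lo if val < lo else hi if val > hi else val
        p3, p2, p1, p0 = p[3][i], p[2][i], p[1][i], p[0][i]
        q0, q1 = q[0][i], q[1][i]
        # Each filtered value is a weighted combination clipped to +/- 2*tc.
        p2_new = clip3(p2 - 2 * tc, p2 + 2 * tc,
                       (2 * p3 + 3 * p2 + p1 + p0 + q0 + 4) >> 3)
        p1_new = clip3(p1 - 2 * tc, p1 + 2 * tc,
                       (p2 + p1 + p0 + q0 + 2) >> 2)
        p0_new = clip3(p0 - 2 * tc, p0 + 2 * tc,
                       (p2 + 2 * p1 + 2 * p0 + 2 * q0 + q1 + 4) >> 3)
        return p2_new, p1_new, p0_new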

[0086] In HEVC luma weak filtering, whether to weak filter a complete 1D (1-dimensional) subset is decided based on: Δ = ( 9*(q0i - p0i) - 3*(q1i - p1i) + 8 ) >> 4; and abs(Δ) < 10*tc.

[0087] If the condition is met, the weak filtering for q0i and p0i is performed using:

Δ = Clip3( -tc, tc, Δ ); p0i' = Clip1Y( p0i + Δ ); and q0i' = Clip1Y( q0i - Δ ), with

Clip1Y(val) = Clip3( minY, maxY, val ), minY and maxY being respectively the minimal and maximal sample values, i.e. 0 & 255 for 8-bit content. Then, if p1i is to be filtered (according to conditions depending on β), the following applies:

Δp = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( p2i + p0i + 1 ) >> 1 ) - p1i + Δ ) >> 1 ); and

p1i' = Clip1Y( p1i + Δp ).

Similarly, if q1i is to be filtered, the following applies: Δq = Clip3( -( tc >> 1 ), tc >> 1, ( ( ( q2i + q0i + 1 ) >> 1 ) - q1i - Δ ) >> 1 ); and q1i' = Clip1Y( q1i + Δq ).
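
A Python sketch of the weak filtering described above, again assuming the p[J][i]/q[J][i] indexing; the flags filter_p1 and filter_q1 stand for the β-dependent conditions mentioned above and are assumed to be decided by the caller:

    def weak_filter(p, q, i, tc, bit_depth_y, filter_p1, filter_q1):
        # Weak luma filtering of one row i across the boundary (in place).
        def clip3(lo, hi, val):
            return lo if val < lo else hi if val > hi else val
        def clip1y(val):
            return clip3(0, (1 << bit_depth_y) - 1, val)
        p0, p1, p2 = p[0][i], p[1][i], p[2][i]
        q0, q1, q2 = q[0][i], q[1][i], q[2][i]
        delta = (9 * (q0 - p0) - 3 * (q1 - p1) + 8) >> 4
        if abs(delta) >= 10 * tc:
            return  # this 1D subset is left unfiltered
        delta = clip3(-tc, tc, delta)
        p[0][i] = clip1y(p0 + delta)
        q[0][i] = clip1y(q0 - delta)
        if filter_p1:  # beta-dependent condition, decided by the caller
            dp = clip3(-(tc >> 1), tc >> 1,
                       (((p2 + p0 + 1) >> 1) - p1 + delta) >> 1)
            p[1][i] = clip1y(p1 + dp)
        if filter_q1:
            dq = clip3(-(tc >> 1), tc >> 1,
                       (((q2 + q0 + 1) >> 1) - q1 - delta) >> 1)
            q[1][i] = clip1y(q1 + dq)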

[0088] [A HEVC chroma deblocking filter]

[0089] An (in-loop) chroma deblocking filter used in HEVC is described herein as an example of how a coarse (or low resolution or simpler) deblocking filter might be used in an encoding and decoding context. A deblocking filter according to an embodiment of the invention improves or replaces such a chroma deblocking filter. However, it is understood that any other deblocking filter for a component may be improved or replaced according to another embodiment of the invention, provided a measure or parameter for deblocking another component is available for use. It is also understood that a deblocking filter according to an embodiment of the invention may be used in addition to the HEVC chroma deblocking filter or any other deblocking filter, as an additional option to be chosen/selected depending on which works best at reducing blocking artefacts or does so more efficiently.

[0090] The decision on whether to use the HEVC chroma deblocking filter is based on bS as defined in the bS table discussed above. The HEVC chroma deblocking filter is used only when bS > 1 (i.e. when bS is 2), i.e. when at least one of the blocks P & Q is coded in INTRA.

[0091] For the chroma deblocking filter control parameters in HEVC, only tc is computed for the chroma samples, as there is no operation that involves β. So for the chroma deblocking control parameter, qPi is evaluated using the formula "qPi = ( ( QpQ + QpP + 1 ) >> 1 ) + cQpPicOffset", with cQpPicOffset specifying a picture-level chroma quantization parameter offset, and QpQ & QpP being the quantization parameters for luma. This value is then used to derive a QpC value (either through a mapping table or by restricting it to a value below 51). Then a value Q (the chroma quantization parameter) is evaluated using the formula "Q = Clip3( 0, 53, QpC + 2 + ( slice_tc_offset_div2 << 1 ) )". In the same way as for the HEVC luma deblocking filter, tc' can be determined with this Q value from the Q-to-β'/tc' mapping table shown above. Then tc' is scaled to obtain tc: tc = tc' * ( 1 << ( BitDepthC - 8 ) ). Other rules for filtering each side of the boundary are similar to those of the luma deblocking filter discussed previously.
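
For illustration, a Python sketch of this chroma tc derivation; qpc_from_qpi stands in for the mapping (table or clipping) that gives QpC, and tc_table for the Q-to-β'/tc' mapping table, neither of which is reproduced here:

    def chroma_tc(qp_p, qp_q, c_qp_pic_offset, slice_tc_offset_div2,
                  bit_depth_c, qpc_from_qpi, tc_table):
        def clip3(lo, hi, val):
            return lo if val < lo else hi if val > hi else val
        # Average luma QP of the two blocks plus the picture-level chroma offset.
        qpi = ((qp_q + qp_p + 1) >> 1) + c_qp_pic_offset
        qpc = qpc_from_qpi(qpi)
        q = clip3(0, 53, qpc + 2 + (slice_tc_offset_div2 << 1))
        tc_prime = tc_table[q]
        # Scale according to the chroma bit depth.
        return tc_prime * (1 << (bit_depth_c - 8))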

[0092] In HEVC, the chroma deblocking filter is applied by setting (with parameters):

[0093] Δ = Clip3( -tc, tc, ( ( ( ( q0i - p0i ) << 2 ) + p1i - q1i + 4 ) >> 3 ) );

[0094] p0i' = Clip1C( p0i + Δ ); and q0i' = Clip1C( q0i - Δ ).
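
A minimal Python sketch of this chroma filtering for one row, with Clip1C expressed through the chroma bit depth (function and argument names are illustrative):

    def chroma_filter(p0, p1, q0, q1, tc, bit_depth_c):
        def clip3(lo, hi, val):
            return lo if val < lo else hi if val > hi else val
        max_c = (1 << bit_depth_c) - 1
        delta = clip3(-tc, tc, ((((q0 - p0) << 2) + p1 - q1 + 4) >> 3))
        # Only the samples adjacent to the boundary are adjusted.
        p0_new = clip3(0, max_c, p0 + delta)  # Clip1C
        q0_new = clip3(0, max_c, q0 - delta)
        return p0_new, q0_new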

[0095] As evident from the above, in HEVC the decision whether to apply a chroma deblocking filter is made based on a relatively simpler condition using chroma sample values (in comparison to the HEVC luma deblocking filter decision), and relatively simpler filtering parameters (determined using the chroma sample values) are used to filter/adjust only the samples nearest/adjacent to the boundary (in contrast to the luma deblocking filter, which filters/adjusts not only sample values nearest/adjacent to the boundary but also those further away from it, as long as they are in block P or Q).

[0096] [A chroma deblocking filter according to an embodiment of the present invention]

[0097] Figures 4A & 4B show block diagrams illustrating deblocking filters according to embodiments of the present invention, which can be used as alternatives or in addition to the chroma deblocking filter of HEVC (e.g. in conjunction with the luma deblocking filter of HEVC or any other luma deblocking filter with finer control/higher resolution, which may be a result of using a chroma subsampling to sample the image). Figure 4A shows a method/process of controlling a filter for filtering one or more portions of an image (e.g. the block P or Q), wherein the method comprises a step 1000 of controlling filtering on first component samples (e.g. the chroma samples) of the one or more portions of the image based on sample values of a second component (e.g. the luma component) of the image. Such a controlling may comprise determining use (or disuse/not using) of the filtering on the first component samples (e.g. the chroma samples) based on a variation (e.g. indicated/represented by a measure of variation/smoothness/flatness on either or both sides of the boundary/interface/border/edge 201 or across the boundary/interface/border/edge 201) among the sample values of the second component (e.g. the measure dLuma for use with filtering the chroma samples is determined based on the luma samples).

[0098] According to an embodiment, the determining is performed for filtering a block of the first component samples (e.g. the chroma samples) based on the measure among sample values from at least two neighbouring (i.e. adjacent/nearest to, or forming, the boundary 201) blocks (e.g. P and Q) of the second component samples (e.g. the luma samples). The measure indicates variation among at least some (e.g. three or more) of the sample values in the at least two neighbouring blocks (e.g. a row 203, 205 of the sample values along a direction perpendicular to the boundary 201) and/or across the boundary 201 between the at least two neighbouring blocks (e.g. dLuma based on sample values pJi & qJi with i={0,3} and J=0..2) of the second component samples (e.g. the luma samples), and the filtering is used on a corresponding boundary between at least two neighbouring blocks of the first component samples (e.g. the chroma samples). The at least two neighbouring second component blocks (e.g. P and Q, or the luma sample blocks) comprise second component blocks corresponding to at least two first component blocks (e.g. the chroma blocks) which have the corresponding boundary (on which the filter is used) there-between. A corresponding block/boundary is a collocated/associated/related block/boundary (e.g. at the same or a corresponding position, and possibly also of the same size), e.g. a corresponding block of the second component (e.g. a luma block) associated with/related to/corresponding to the same pixel/element of the image portion as the first component block (the chroma block), or a corresponding boundary between first component blocks (e.g. the chroma blocks) that is between first component samples associated with/related to/corresponding to the same pixel/element of the image portion as the second component blocks (the luma blocks) forming the boundary. So this corresponding block of the second component samples (e.g. the luma samples) and the block of the first component samples (e.g. the chroma samples) may be associated with (related to) the same block of elements/pixels of the image, for example.

[0099] The controlling 1000 may comprise either enabling or disabling the filtering based on a condition using the measure. The controlling 1000 may also comprise determining, based on the second component samples (e.g. the luma samples) or the aforementioned measure, one or more filtering parameter(s) for use with the filtering.

[0100] It is understood that according to another embodiment, the measure is based on a block of the second component samples (e.g. the luma samples) that are not collocated with/corresponding to the first component samples (e.g. the chroma samples) but on second component samples (e.g. the luma samples) that have already been processed and belong to a related/associated portion of the image or a reference image (e.g. a reference frame or a reference picture before a deblocking filter has been applied thereon so that at least one blocking artefact is present therein).

[0101] As shown in Figure 4B, according to an embodiment the controlling 1000 comprises: obtaining 1100 the measure (e.g. by calculating the measure of variation from luma samples or receiving data indicative of an already calculated measure); and controlling 1200 based on the obtained measure. According to an embodiment, the controlling 1200 based on the obtained/calculated measure comprises: comparing the obtained/calculated measure with a threshold (e.g. a threshold based on a function of any combination of one or more of tc, bS, β, QpQ, QpP, pJi, & qJi, evaluated based on either, or both of, luma and chroma samples); and determining, based on this comparison, whether to use a filtering on the first component samples (e.g. the chroma samples), and/or, if it is determined to use the filtering, a filtering parameter for use with the filtering. The sample values of the second component samples (e.g. the luma samples) of the image may be reconstructed second component sample values.

[0102] According to an embodiment, when used with the HEVC luma deblocking filter, a chroma deblocking filter according to the embodiment of Figure 4A or 4B makes use of the already available (i.e. previously calculated) measure dLuma from the HEVC luma deblocking filter, so it does not require further resources and/or an increase in complexity. Also, it is able to apply different rules/conditions based on a function of any combination of one or more of bS, β, QpQ, QpP, pJi, & qJi, evaluated based on either, or both of, luma and chroma samples. This function may be the same as that used for a threshold in the HEVC luma deblocking filter, or different in order to take into account differences between the chroma samples and the luma samples, e.g. differences between those corresponding to/associated with the same image portion, i.e. a group of elements/pixels of the image. Also, it is able to determine, based on the measure, a filtering parameter for use with the filtering, again taking into account the aforementioned considerations.

[0103] As mentioned above, the rules/methods for the chroma filtering in HEVC are coarse, in particular compared to what is applied for the luma filtering in HEVC. This is partly because, for 4:2:0 format content, chroma does not have very high frequency content, and conventional luma filtering methods for the 4:4:4 format are not very effective at filtering chroma despite having high implementation/complexity costs. This is also why there is no HEVC chroma filtering on a boundary of INTER coded blocks (i.e. the HEVC chroma filtering is used only when bS=2). So according to an embodiment of the invention, at least some of the conditions and/or parameters for the chroma deblocking filter are based on measures or decisions taken based on the luma samples. According to an embodiment, some of these may be further refined/adjusted or controlled by chroma sample based parameters/variables. Compared to the chroma deblock filtering in HEVC, a chroma deblocking filter according to an embodiment of the invention which is based on the method of Figure 4A or 4B requires only a limited complexity increase (as no additional bandwidth is consumed for determining other chroma sample based parameters and variables for filtering the chroma samples) for substantial coding efficiency or visual improvements, in particular for INTER coded frames.

[0104] The rules/methods for the HEVC chroma filtering are coarse also in the sense that the number of chroma samples affected by the filtering is very limited/restricted. So it would be desirable to increase this number. However, depending on how the elements/pixels are divided or the samples are sampled, this can be troublesome due to availability, or a lack thereof, of relevant samples on which the filtering depends. For example, QTBT (QuadTree plus Binary Tree), being considered by JVET (Joint Video Exploration Team on Future Video Coding of ITU-T VCEG and ISO/IEC MPEG), and any other encoding/decoding structure format with complex divisions and hierarchy therein can suffer from such issues.

[0105] Figure 5A shows the splitting (partitioning or division) of a (digital) image into blocks of pixels as they are used during encoding or decoding in HEVC. The first type of block of interest here is a square unit called the Coding Tree Block (CTB) 401, which is then divided into smaller square units, usually known as blocks but also known as coding units (CU) in HEVC, according to a quadtree structure (a tree structure where leaves are split into four sub-leaves until a leaf node, i.e. a non-divisible/non-split node, is reached). Looking at CU 402 in Figure 5A, there are two further splits of said CU 402. The prediction partitioning can be 2Nx2N or NxN for INTRA mode coding, and any of 2Nx2N, Nx2N, 2NxN, as well as the Asymmetrical Motion Partitions (AMP) nLx2N, ..., 2NxnD for INTER mode coding. Each partition is then called a prediction unit (PU). The Residual Quad Tree (RQT) splits said coding unit CU 402 into smaller square transform blocks, which are called transform units (TU). The quadtree structure allows efficient indication of how a CU is split into TUs and PUs.

[0106] Figures 5B & 5C show splitting (partitioning or division) of an (digital) image based on a QuadTree plus Binary Tree (QTBT), which is being considered by JVET (Joint Video Exploration Team on Future Video Coding of ITU-T VCEG and ISO/IEC MPEG). The QTBT is a structure where a CTB is first split into a quadtree, and when a termination leaf is reached, the corresponding block is further split (divided or partitioned) by a binary tree into horizontal or vertical parts/splits, both types being symmetrical. Figures 5B & 5C illustrate a partitioning in JVET with a quadtree split represented by a continuous line and a binary split by a dashed line. A block of pixels (a CTB) 410 is divided into different areas/portions/blocks 420 to 431 in Figure 5B, and a corresponding signalling/indicator 450 of the splits/divisions 460 to 471 (i.e. with an offset of 40 between reference numerals of corresponding split/division and signalling/indicator) is shown in Figure 5C. So CTB 410 is split by a quadtree 450 into 4 blocks and corresponding nodes, as evident from the continuous lines. Each block is then split again into (sub-)blocks (and corresponding nodes) if its corresponding split flag is 1. If the split flag is 0, the corresponding node is a leaf node and the block isn’t further (sub)divided by the quadtree. One can thus see that blocks 420 and 430 (and respectively leaf nodes 460 and 470) are not further (sub)divided by the quadtree, while block 440 (and corresponding node 480) is. Next comes the binary tree for each leaf node of the quadtree (if it hasn’t reached a minimal size preventing it from being split further). For instance, a binary split flag 460 for block 420 indicates a vertical symmetrical split (a split along a vertical line/edge/border/boundary with a symmetry about the vertical line/edge/border/boundary). Block 421 corresponds to the leaf node 461 and is thus not split further. Block 430 is on the other hand split into 2 symmetrical horizontal areas, among which is block 431 that is not split further, as evident from its corresponding end leaf node 471. The binary tree starting at node 460 has two vertical nodes (vertically divided/split blocks), one of which being a non-split/non-divisible block 421 (i.e. a leaf node 461). The other one is further split along a vertical line (edge/border/boundary) into two blocks 422 & 423 (i.e. two leaf nodes 462 & 463), which are not split further.

[0107] It is understood that the present invention is not limited to a particular way these splits and the signalling thereof are performed as long as the minimal possible dimensions/sizes of the blocks/image portions are determinable (e.g. when it is possible to determine whether a particular minimum dimensioned/sized block/image portion is a CU/PU, then the determination comprises determining whether the block/image portion is a CU/PU, or the minimal possible dimensions/sizes may be specified as a number of CUs/PUs contained therein or a number of pixels/samples across/high/contained therein). Accordingly, an embodiment of the invention would also work with alternative partitioning of (digital) images (e.g. other structures/formats for partitioning, such as using ternary splits, as other ways of splitting and signalling the split/division of the (digital) image). For example, it would also work when different structures/formats are used for partitioning different component samples corresponding to the same image portion/block, e.g. two QTBT trees, one for luma samples and one for chroma samples, are used for intra slices, and a single QTBT tree for both luma and chroma sample blocks is used for inter slices. It is noted that the use of different structures/formats in intra slices means that boundaries for luma sample blocks may not directly correspond with boundaries for chroma sample blocks, and vice versa. In such a case, an embodiment of the invention takes into account these differences and specifies a criterion or condition for establishing a correspondence/association/collocation relationship between a boundary for the luma sample blocks and a boundary for the chroma sample blocks based on each boundary's position, size and/or neighbouring luma or chroma sample blocks. According to an embodiment, it takes the chroma coordinates of the boundary, determines/finds the corresponding luma coordinates based on the criterion/condition, and thus deduces/determines the corresponding luma samples; a simple example of such a mapping is sketched below.

[0108] Also, when processing partitions of an image, various estimating/rounding of a position of a boundary, and/or suppression/ignoring/skipping of the boundary (e.g. rounding the position to one of predefined positions) may also be performed. Having a part of a luma block boundary overlapping (e.g. crossing) a part of a chroma block boundary is also possible. It is understood that embodiments of the invention can easily be modified to account for such differences in partitioning of the image (e.g. differences in respective boundaries of luma & chroma samples).
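
As a purely illustrative example of such a correspondence criterion, the following Python sketch maps the chroma coordinates of a boundary to the collocated luma coordinates for 4:2:0 content; the subsampling factors are assumptions tied to that colour format, and the function name is hypothetical:

    def corresponding_luma_position(chroma_x, chroma_y,
                                    sub_width_c=2, sub_height_c=2):
        # For 4:2:0 content the chroma planes have half the horizontal and
        # vertical resolution, so the collocated luma position is obtained by
        # scaling the chroma coordinates by the subsampling factors.
        return chroma_x * sub_width_c, chroma_y * sub_height_c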

[0109] For example, Figure 5D shows different types of boundaries, and how such a modification to the processing of relevant blocks/coding units might be achieved is described below in relation to the boundaries shown in Figure 5D. The top two figures of Figure 5D show different vertical & horizontal boundaries that could exist in the HEVC partitioning structure/format. HEVC processes all block boundaries of the image in a first pass 501 (i.e. a single vertical sweep) and then a second pass (a corresponding single horizontal sweep) 502. The design of the HEVC deblocking filter is such that, for a given pass, all boundaries to be filtered can be filtered at the same time in parallel. This is made possible because there is no overlap among the vertical or horizontal boundaries to be processed; for instance, respectively, boundaries 510 & 511, or 520 & 521, do not have an overlap. A filtering for one boundary should not modify an output or input for filtering another boundary, or for that matter, for any boundary of the same pass, e.g. the filtering for boundary 511 shall not affect the filtering for boundaries 510 and 512.

[0110] Referring to Figure 3, one can see that in some cases the HEVC luma deblocking filter can require up to four samples on each side of a boundary. If filters requiring even more samples are to be considered, then this increase in samples being filtered can cause a problem. This is because if any of the pJi samples were modified by e.g. filtering the boundary 511, then a potential dependency problem (e.g. unavailability of the updated value) occurs for the filtering of the boundary 512, since it depends on values of at least some of these pJi samples (to be output from the filtering of the boundary 511). Similarly, the filtering of the boundary 511 may also affect an input or output for filtering the boundary 510. To prevent undefined/inconsistent behaviour, a filtering order for these boundaries would then need to be specified, thus necessitating a sequential filtering (i.e. parallel processing is no longer possible). However, such an ordering is undesirable as it introduces further complexity (such as delay and buffering). Therefore, HEVC limits the filtering capability (i.e. limits the range of samples/pixels that can be filtered when filtering for a particular boundary): the filtering unit/block (i.e. a unit or a block based whereon the filtering is performed) for HEVC is an 8x8 sample block, and the maximum filter supported has a range size of 4, which guarantees that no overlap can occur. This is despite the fact that INTRA coding allows 4x4 CUs, and generally 4x4 TUs are used. As a consequence, internal boundaries of such CUs, or boundaries of TUs that would be internal to CUs but fall inside of an 8x8 filtering unit (e.g. an 8x8 sample grid of a component), are simply not filtered, although it is known that blocking artefacts can still occur there. This is particularly true for chroma samples where, if filtering parameters for the HEVC luma deblocking filter were used without modification, in order to filter a second sample away from the boundary (e.g. p1i or q1i, which are not adjacent to the boundary 201), third samples (e.g. p2i and q2i) would need to be accessed. These are more likely (compared to the luma samples) to overlap with other filterable/to-be-filtered chroma samples of another neighbouring boundary when the used colour format means chroma samples have a lower resolution/density than the luma samples for the same image portion (e.g. from chroma subsampling or using a colour format such as 4:2:0). This means that if the chroma filtering parameters are modified without taking this into account, it would not be possible to perform the deblocking filters on different boundaries in parallel, potentially causing undefined/unexpected behaviour from the filtering process.

[0111] The bottom two figures of Figure 5D show different vertical & horizontal boundaries that might exist in a QTBT based partitioning structure/format. QTBT allows dimensions of 4 (and thus a corresponding 2 in chroma samples, as chroma has half the vertical and horizontal sampling). Furthermore, the filtering is allowed to be performed on a 4x4 block basis. So one solution for the dependency problem (i.e. the unavailability issue) in the QTBT based partitioning would be to simply disable the chroma filtering for the relevant boundaries. The bottom two figures of Figure 5D illustrate how such a solution might work. For a given CTB (e.g. of a size 128x128 as shown here), any boundary that could have an overlapping portion is not filtered. Such not-to-be-filtered boundaries are represented by dashed lines in Figure 5D.

[0112] Therefore, instead of just using the same luma filtering conditions and control parameters (where values for a set of luma samples used for these conditions and control parameters can change due to the filtering of a nearby boundary), embodiments of the present invention described hereinafter reuse a result of the convolution/linear combination of previous luma (and potentially chroma) samples and adapt/adjust it accordingly for a finer control of the chroma deblocking filter. In addition, further controls are also possible based on additional conditions. This enables the embodiments of the present invention to filter more chroma samples without a significant increase in the worst-case dependency, resulting in substantial coding efficiency and visual improvements.

[0113] Figure 6 shows a block diagram illustrating a deblocking filter with a luma sample based measure for a chroma filter on/off/bypass control according to an embodiment of the present invention. Referring to Figure 6, the step 1000 of controlling filtering on first component samples (e.g. the chroma samples) in Figures 4A & 4B comprises a first determination step 601 (determining whether to use the chroma deblocking filter or not) or a second determination step 603 (determining whether to bypass/skip the chroma deblocking filter or not) in Figure 6, with optional further modifications to steps 602 and 603. Steps 601, 605, 607-610, and 606 & 611 in Figure 6 are analogous to steps 311, 312, 313-317, and 318 in Figure 3 respectively. Based on the result of the first or second determination step 601, 603, the chroma filter is performed at step 604. In more detail, step 600 computes/determines/obtains a measure for a first level of filtering (control) decision by determining/selecting the boundary strength (bS) for a current boundary currently being assessed/considered/processed. Then, step 601 checks whether the current boundary (i.e. a boundary currently being processed or considered for filtering) is to be filtered.

[0114] It is understood that new strength measures or other parameters from the HEVC example may be derived at this step, in which case the decision/determination in steps 601, 603 and the filtering in step 604 can be further adapted/modified to account for this.

[0115] For example, according to an embodiment, it is checked, as in HEVC, whether the boundary strength bS for luma is 0, and if bS for luma is 0 then it is determined that no chroma filtering occurs for the current boundary either, and processing ends at step 611. If the chroma filtering needs to be performed, the processing continues with step 602. This embodiment assumes that the first level filtering (control) decisions based on the bS based condition are the same for both the luma and chroma filtering. However, it is understood that according to another embodiment a modification to this first level condition may also be implemented. For example, different conditions at the first level may be assessed for the luma and chroma filtering, in which case the luma filtering decision and the chroma filtering decision paths are split into separate paths/flows, with step 602 being in the luma filtering decision path (with the outcome of step 602 feeding into the chroma filtering decision path).

[0116] Then a second level filtering (control) decision on whether to bypass the chroma filtering (similar to step 605 or 312 for the luma filtering) takes place at step 603. In order to make the determination at step 603 (e.g. using a dLuma based condition), a measure (or measures) related to the luma samples is obtained and evaluated at step 602. For example, this may comprise computing/evaluating/obtaining the same measures as in HEVC (H.265), or any other one which may be adopted for JVET/H.266. After the relevant measure(s) has been obtained/determined, the condition based determination step 603 is performed. At least based on the measure(s) based on the luma samples, whether to filter (or bypass) a corresponding chroma boundary is determined/decided. It is understood that "corresponding" here has a broad meaning, e.g. the chroma boundary may not be completely overlapping (or collocated) with the luma edge (e.g. they may only partially overlap or have a corresponding/associated/collocated position), in which case adaptations/modifications are needed. For example, in such a case the alternative deblocking filter shown in Figure 8 may be used.

[0117] According to other embodiments of the present invention, step 603 may involve a determination based on any one or more of the following conditions. If the luma filter bypass decision is based on the condition "dLuma < β" as in step 605, then the condition for the chroma filter bypass decision is based on "dLuma < (bS*β)/2". This however assumes that the same definition is used for determining bS as in the aforementioned bS table. In other cases, this condition may instead be: "dLuma < TABLE[bS]*β", wherein the "TABLE[bS]" function maps a bS value to another value. It is understood that other variations/modifications in this, or indeed any other, condition (e.g. to achieve an integer representation and/or a desired type of rounding) are also possible. For example, with β being an integer, bS ranging from 0 to 2 inclusive, and the function/table TABLE being [0, 3, 8], this condition could be: "dLuma < (TABLE[bS]*β+4)/8".
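
For illustration, a Python sketch of such a luma-based chroma on/off decision using the integer form of the last condition above; the TABLE values follow the example just given, while the function and argument names are assumptions:

    # Maps a bS value (0..2) to a weight, as in the integer example above.
    TABLE = [0, 3, 8]

    def chroma_filter_enabled(d_luma, bS, beta):
        # Integer form of the condition "d_luma < (TABLE[bS]*beta + 4) / 8",
        # with the division by 8 expressed as a right shift.
        return d_luma < ((TABLE[bS] * beta + 4) >> 3)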

[0118] Another possibility is not to use the β computed/determined for luma but instead to derive it in a similar fashion as tc for the chroma samples, as discussed for the aforementioned HEVC chroma deblocking filter control parameter determination. The Q value is computed as described therein (i.e. using "Q = Clip3( 0, 53, QpC + 2 + ( slice_tc_offset_div2 << 1 ) )"), while taking into account differences with luma in parameters/functions such as: cQpPicOffset to obtain qPi (instead of qPL); clipping (in the [0; 53] range instead of [0; 51]) and table lookups of the value; and the bit depth of the chroma channels.

[0119] Alternatively, to enable an even finer control based on the link/relationship between the luma and chroma (samples), one can introduce a chroma offset for β, which can for example be signaled relative to a default value (e.g. 0 or 2 or a negative value) or relative to the (luma) offset for β. Use of such an offset can be useful for scenarios such as when a network camera is working in a low-light condition, where the amount of noise in each component (on which increasing the deblocking offset usually has a positive effect in terms of coding efficiency since the deblocking filter can also act as a low-pass/blurring filter) can vary compared to when working in daylight, and may thus benefit from tuning the decision for different times of the day/night using this offset. Changing the offset may be implemented by a setting in a Graphical User Interface (GUI) for setting daylight or night conditions. Another situation when use of such an offset can be useful is when a sensor used to capture the content/image experiences a larger noise (warranting increased offsets for better handling/removal of noise) for a particular component due to its low performance quality, compared to e.g. a more advanced sensor. Yet another example would be when making a content-based decision: e.g. when some content has a very particular chroma content, such as sports, medical or nature content (e.g. a foliage, an endoscopic view, a body scan), or video surveillance (faces and license plates). The encoder may thus adapt the offset according to the targeted content, which can also be set by an operator through a GUI (e.g. displayed on a mobile phone, an MRI medical device or a Video Management System), e.g. depending on the location where the image is (being) captured and/or the reason for capturing the image. According to yet another alternative, the β offset used for the HEVC luma samples may just be used for the chroma (samples) as well.

[0120] In any case, the formula for obtaining/determining Q then becomes: Q = Clip3( 0, 53, QpC + ( slice_chroma_beta_offset_div2 << 1 ) ). This requires looking up the chroma quantization parameters (QpP/QpQ, which depend on chroma QP offsets) so as to determine/compute/evaluate QpC. So according to an embodiment, instead, the formula for Q is: Qchroma = Clip3( 0, 51+N, QLuma + N ), wherein QLuma is the Q value determined using the HEVC luma control parameter determination (e.g. using the luma control parameters selected based on the two parameters β and tC, i.e. using the Q-to-β/tC mapping table) and N is an offset representative of the relation between the luma and the chroma. A new βchroma can then be computed in all the above embodiments, and used instead of β in the above formula, e.g.: "dLuma < (bS*βchroma)/2".
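As an informal illustration, the derivation above might be sketched in C as follows; Clip3 matches the HEVC-style definition, while the function name and the example values are assumptions.

#include <stdio.h>

/* Clip3 as in HEVC: clamp x to the range [lo, hi]. */
static int Clip3(int lo, int hi, int x)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Derive the chroma Q index from the luma Q index and an offset N
 * representative of the luma/chroma relation, following
 * "Qchroma = Clip3( 0, 51+N, QLuma + N )".                          */
static int derive_q_chroma(int qLuma, int N)
{
    return Clip3(0, 51 + N, qLuma + N);
}

int main(void)
{
    /* N may be 0, 2 or a negative value such as -2 (see below). */
    printf("Qchroma = %d\n", derive_q_chroma(38, -2));
    return 0;
}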

[0121] As an example, the following values for N (an offset representative/indicative of the relation between the luma and the chroma) may be used: 0, which enables the same range of slice-level control as for luma; 2, as is seen in other QP-related controls in HEVC; or, observing that the error at a given QP is larger for luma than for chroma, a negative value such as -2.

[0122] Another consideration is that a decoupling of luma and chroma boundaries may occur. For instance, as discussed above, for intra slices (made of only INTRA-coded blocks), the QTBT trees for luma and chroma are separate (i.e. can be different), so a boundary for luma may not (completely) correspond to any boundary for chroma. For example, the boundary strength bS for the (partially) corresponding chroma boundary may be set to 0. In such a case, an additional criterion/condition may be required so that bSchroma for chroma is not 0 (i.e. so that a "no chroma filtering" decision is not unintentionally/automatically made). Consequently, use of a separate/different bSchroma instead of the bS for luma may be implemented in all or some of the above conditions which include bS.

[0123] In all cases, this indicates two alternative approaches to controlling a chroma deblocking filter: offering/enabling an even more nuanced/accurate/finer control by taking into account additional information for chroma, whether based on luma samples or not; and moving the chroma filtering decision further away from the luma filtering decision steps, albeit this may still result in the same functional behaviour, as in the alternative filter of Figure 8.

[0124] Referring back to Figure 6 and how steps 601 and 603 may involve a determination based on any one or more of the different aforementioned conditions, if the chroma filtering has been determined to be used (and not bypassed), then step 604 occurs and a chroma boundary is filtered (i.e. the filter is applied to chroma samples around/near the chroma boundary).

[0125] According to an embodiment, the chroma filter applied is the chroma filter of HEVC or JVET (i.e. the HEVC or JVET chroma filtering parameters are used). According to an alternative embodiment, the chroma filter (and its other filtering parameters) is further determined/decided based on step 603 (and/or 601). This determination, again, is best illustrated with reference to Figure 8. This embodiment concerns, however, the fact that in HEVC no deblocking filter is applied on chroma when bS=1. So, if step 603 is modified to force/perform filtering of chroma even at bS=1, then this filtering behavior can be altered. An example would be to modify the operations of the HEVC chroma deblocking filter, i.e. to modify its setting by: computing/determining parameters such as Δ differently, e.g. by using a value of a lower magnitude (e.g. dividing by 2); and/or restricting the allowed/possible range of the modified values, e.g. by using "tC/2" instead of "tC" in the filtering parameters.
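For illustration only, a C sketch of such an altered HEVC-style chroma filtering step is given below, assuming 8-bit samples; the halving of the offset and the tC/2 clipping range correspond to the example modifications above, while the function names and the sample values are assumptions.

#include <stdio.h>

static int Clip3(int lo, int hi, int x)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Clip a sample to the valid chroma range (8-bit assumed). */
static int Clip1C(int x) { return Clip3(0, 255, x); }

/* Altered chroma deblocking of the boundary samples p0/q0: the offset
 * is halved (arithmetic shift assumed) and clipped to [-tC/2, tC/2]
 * instead of [-tC, tC], as in the example modifications above.       */
static void filter_chroma_pair(int *p0, int *q0, int p1, int q1, int tC)
{
    int raw   = ((((*q0 - *p0) << 2) + p1 - q1 + 4) >> 3);
    int delta = Clip3(-(tC >> 1), tC >> 1, raw >> 1);
    *p0 = Clip1C(*p0 + delta);
    *q0 = Clip1C(*q0 - delta);
}

int main(void)
{
    int p0 = 100, q0 = 130;
    filter_chroma_pair(&p0, &q0, 98, 133, 4);
    printf("p0'=%d q0'=%d\n", p0, q0);
    return 0;
}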

[0126] Then the conventional luma filtering resumes at step 605, which determines whether to filter the luma boundary, e.g. by comparing dLuma to β as in HEVC or JVET. Again, it is understood that a modified condition or a modified step 605 may be used for steps 605 onwards.

[0127] According to another embodiment, wherein step 603 in Figure 6 is not performed separately for the chroma filter and step 604 is moved to be between steps 605 and 607, the chroma filtering decision at step 603 is the same as the luma one at step 605: if filtering is to be applied to luma, so is it for chroma, and vice versa. In the case where no filtering for luma occurs, step 606 executes this decision, e.g. by updating data or simply by doing nothing, and the filtering for the current luma boundary stops/terminates at step 611. Otherwise, determination of gradient(s) or measure(s) and an assessment of a condition based thereon for making the luma filter type decision in HEVC (e.g. a strong 316 or a weak 315 filtering from steps 313-317 in Figure 3) are performed at step 607. As shown in the filter of Figure 7 or Figure 8, a choice between the strong and weak filters at step 608 depends on the gradient(s)/measure(s). For example, a filtering parameter decision is taken, whether it concerns a decision to filter just one side of the boundary or to select a filter type. Referring back to Figure 6, if it is to be applied/used, the weak filtering is performed at step 610 according to its control parameters, and otherwise the strong filtering is performed at step 609. Then, the processing ends at step 611.

[0128] Figure 7 shows a block diagram illustrating a deblocking filter with a luma sample based measure for a chroma filter type control according to an embodiment of the present invention. Figure 7 is very similar to Figure 6. All steps 600 to 610 have counterparts 700 to 710, except that step 703 has been modified compared to step 603, and step 604 has been removed as steps 720 to 723 have been added to use different types of Chroma filters 1 & 2 instead. The description of Figure 7 is therefore similar to that of Figure 6, so repetition thereof has been omitted and only the new steps/parts are described in the following.

[0129] At step 703, a decision to filter chroma, according to the steps already described for Figure 6, is made for later use: in some embodiments, this may comprise storing the result of this decision/determination (e.g. data indicative of the decision made) in a buffer (memory, array or cache). Processing as per Figure 6 continues until a (luma) filter type (e.g. weak or strong) decision/determination is to be made at step 708 or 709 (608 or 609). In the case of the weak filter, as it is known that a particular type of luma filter is applied to the luma boundary, a particular chroma filter (e.g. Chroma filter 2) may be applied at step 723 to the corresponding chroma boundary if chroma needs to be filtered, as determined/checked at step 722.

[0130] Indeed, there may be no chroma boundary to filter (e.g. due to use of different QTBTs as already mentioned), or there may be more than one corresponding chroma boundary if the luma boundary lies in a position overlapping two chroma boundaries. In such a case, the determination made at step 720 or 722 may be modified accordingly, e.g. by determining not to perform chroma filtering if there is no corresponding chroma boundary.

[0131] Use of a strong filter for luma may imply that another type of chroma filtering (chroma filter 1) is to be applied at step 721, if the check/determination at step 720 allows it. The difference between chroma filters 1 & 2 according to an embodiment will be described later. For example, either chroma filter 1 or chroma filter 2 may be any one of the chroma deblocking filter or even the luma deblocking filters (either weak or strong) as defined in HEVC. The difference between the two filters may also be in the way a filtering parameter is determined/evaluated, e.g. the parameter Δ in the chroma filtering defined in HEVC is determined/evaluated differently using a different expression/formula as described herein. In any case, once determinations for use of both luma and chroma boundary filtering have been made, and the filtering thereof has been performed where it has been determined to be used, the processing ends at step 711.

[0132] Figure 8 shows a block diagram illustrating an alternative deblocking filter with a luma sample based measure for controlling a chroma deblocking filter according to an embodiment of the present invention. Figure 8 illustrates, in more detail, how luma sample based information (i.e. luma information) might be used during a chroma deblocking process. Differences between the embodiments shown in Figure 7 & Figure 6 underline how luma information can be used to control the chroma deblocking in various ways. However, in these embodiments of Figure 7 & Figure 6, all such operations (e.g. use of the luma information for controlling the chroma deblocking) are nested within (added to) an existing luma filtering process. Figure 8 illustrates another embodiment, which may be functionally similar to some of the embodiments based on Figure 7 & Figure 6, albeit implemented in a different way.

[0133] In this embodiment, filtering of the chroma samples uses filtering parameters determined based on the luma samples. In the case where QTBT trees for chroma and luma are unaligned (e.g. when they are different), these parameters from luma may be available but unused, or unavailable because no corresponding luma boundary exists for a given chroma boundary. In addition, obtaining/determining chroma filtering parameter data (e.g. on previously processed blocks/units) may require accumulation of these data over time, so an initial value for starting this accumulation, and use thereof with the first filtering process, is needed. For example, if the luma sample based parameters are not available or not suitable for use due to timing or alignment issues, then the initial/default value set during the initialization process of step 800 can be used. Therefore, step 800 initializes the process by setting default filtering parameters (i.e. provides a default/initial value for filtering parameters that are to be retrieved in step 810 and used in step 811). For example, such default parameters may be: "no filtering"; depending on the slice type, e.g. if an intra slice, filter normally, and otherwise no filtering; and/or initialization of information if it has been accumulated in the previous run (e.g. if step 804 comprises evaluating information such as a parameter dTotal = dTotal + dLuma_for_this_unit, then dTotal must have an initial/default value for this evaluation to take place).
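A minimal C sketch of such a step 800-style initialization is given below; the buffer size, the structure fields and the default value are assumptions chosen purely for illustration.

#include <stdio.h>
#include <string.h>

#define MAX_POSITIONS 16   /* assumed buffer size for one CTB */

/* Per-position chroma control data accumulated from luma decisions. */
typedef struct {
    int dChroma;    /* accumulated luma activity measure                */
    int fChroma;    /* accumulated count of filtered luma boundaries    */
    int defaultOn;  /* fallback decision when no luma data is available */
} ChromaCtrl;

/* Step 800-style initialization: give every storage position a default
 * value before any luma boundary has been processed.                   */
static void init_chroma_ctrl(ChromaCtrl buf[], int n, int defaultOn)
{
    memset(buf, 0, (size_t)n * sizeof buf[0]);
    for (int i = 0; i < n; i++)
        buf[i].defaultOn = defaultOn;   /* e.g. 0 = "no filtering" */
}

int main(void)
{
    ChromaCtrl buf[MAX_POSITIONS];
    init_chroma_ctrl(buf, MAX_POSITIONS, 0);
    printf("position 0: d=%d f=%d default=%d\n",
           buf[0].dChroma, buf[0].fChroma, buf[0].defaultOn);
    return 0;
}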

[0134] According to an embodiment, as a plurality of (i.e. multiple) sets of filtering parameters (depending on the number of chroma boundaries affected) may need to be stored in a buffer, this initialization comprises setting default parameters for all of them and storing them in the buffer.

[0135] Step 801 then obtains/receives/identifies/selects a first luma filtering unit/block to filter. This filtering unit/block may correspond to a fixed (predetermined) number of samples, or a complete boundary of a partition (i.e. a boundary of a partitioning unit such as a coding/processing/prediction unit). It is understood that this process may be performed for all boundaries identified in a filtering pass for the image, or for a current CTB (the CTB currently being processed), which could be advantageous as it puts an upper limit on the buffer size requirements. A boundary is thus determined/identified, as well as the relevant samples and coding units/blocks on each side of the boundary. This enables determining the luma filtering parameters at step 802, as well as determining luma filtering activities such as: whether the luma boundary has been filtered; which side has to be/has been filtered; the luma filter type (to be) used; and which samples or lines (rows) or columns have been filtered, and/or their count.

[0136] Step 803 then tries to find a position. This position relates to a selection of a storage location for (e.g. an access to) the filtering parameters, e.g. a position of a filtering parameter in the aforementioned buffer. According to an embodiment, the storage thereof may be "compressed", i.e. of a smaller size than the number of luma units/blocks, so as to save memory needed for caching them. In such an embodiment, a filtering parameter stored in the same position is used for more than one luma unit/block. According to an embodiment, such a filtering parameter is accessed/retrieved/obtained by dividing an index or coordinate of the current luma unit/block by a number N, ideally a power of 2, and if needed, rounding (i.e. modifying or correcting) the divided value to an integer value representing a position of each stored filtering parameter. This number N can also be linked to the sampling of the luma and chroma channels: in the 4:2:0 chroma format, for instance, each chroma channel has a quarter of the number of luma samples, in which case N=4 may be more suitable.
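By way of example, the index-to-position mapping just described might look like the C sketch below; the rounding choice (to the nearest position) is an assumption, as the description leaves the exact rounding open.

#include <stdio.h>

/* Map a luma unit index to a "compressed" storage position by dividing
 * by N (a power of two here).  Adding N/2 before the division gives
 * rounding to the nearest position; plain truncation would be an
 * equally valid choice.                                                */
static int storage_position(int lumaIdx, int N)
{
    return (lumaIdx + (N >> 1)) / N;
}

int main(void)
{
    for (int idx = 0; idx < 8; idx++)
        printf("luma unit %d -> storage position %d\n",
               idx, storage_position(idx, 4));   /* N=4, e.g. 4:2:0 */
    return 0;
}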

[0137] According to an embodiment, a similar approach can be taken when deblocking units for luma and chroma are not aligned. For example, if a 4x4 (filtering) unit/block (of luma samples) is used for luma (which corresponds to a 2x2 filtering unit/block for chroma in 4:2:0) but a 4x4 filtering unit/block (of chroma samples) is also used for chroma, in that case N can be 2 (for dividing a coordinate of a chroma filtering parameter to obtain the relevant luma based filtering parameter) or 4 (for dividing an index of the chroma filtering parameter to obtain an index/position of the relevant luma based filtering parameter), as each minimum length chroma boundary corresponds to two minimum length luma boundaries. That is, when a 4x4 chroma filtering unit corresponds to a luma block of 8x8 samples, then information/parameters related to only one 4x4 luma filtering unit (i.e. 1 in 4) may be used. According to another embodiment, a 1-to-1 mapping is used: for each luma (or chroma) unit/block, there is a corresponding storage unit in the buffer. It is understood that the suitable storage position (and the suitable embodiment to use therewith) depends on step 804.

[0138] Step 804 performs the determination of a control parameter for the chroma (filter). As already mentioned, one chroma boundary may correspond to more than one (e.g. two) luma boundaries. Therefore, for such a case, according to an embodiment step 804 may include updating an accumulator/updater for any given number of luma boundaries (based on a C-to-L mapping between C chroma boundaries/samples/blocks and L luma boundaries/samples/blocks). In an embodiment, a parameter dLuma's values are accumulated/combined/updated (and stored) to generate a chroma parameter dchroma's value. In another embodiment, to obtain dchroma, a plurality of dLuma values are weighted before they are accumulated/combined as follows: for a luma index/position that is one luma unit/block less than the storage adjusted chroma index/position, its weight is 1; for a luma index/position corresponding to the current storage adjusted chroma index/position, its weight is 2; and for a luma index/position that is one luma unit/block more than the storage adjusted chroma index/position, its weight is 1. For example, if idx is used to represent said position/index for an allocation in the storage, such a weighting corresponds to the following formula: "dchroma[idx/N] = dLuma[idx-1] + 2*dLuma[idx] + dLuma[idx+1]". It is understood that there are various ways to perform this accumulation (i.e. the dLuma based determination of dchroma). According to an embodiment, there is a 1-to-1 mapping so that such an accumulation/determination is based on: "dchroma[idx] = dLuma[idx]".
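The following C sketch illustrates, only as an example, how the 1-2-1 weighted combination above could be written; the array sizes, the choice N=2, and the clamping of the neighbour indices at the ends are assumptions, since the description leaves the boundary handling open.

#include <stdio.h>

/* 1-2-1 weighted combination of luma activity measures into the chroma
 * control buffer, following
 * "dchroma[idx/N] = dLuma[idx-1] + 2*dLuma[idx] + dLuma[idx+1]".       */
static void accumulate_dchroma(const int *dLuma, int numLuma,
                               int *dChroma, int N)
{
    for (int idx = 0; idx < numLuma; idx += N) {
        int left  = (idx > 0)           ? dLuma[idx - 1] : dLuma[idx];
        int right = (idx + 1 < numLuma) ? dLuma[idx + 1] : dLuma[idx];
        dChroma[idx / N] = left + 2 * dLuma[idx] + right;
    }
}

int main(void)
{
    int dLuma[8]   = { 3, 5, 2, 7, 1, 4, 6, 2 };
    int dChroma[4] = { 0 };
    accumulate_dchroma(dLuma, 8, dChroma, 2);
    for (int i = 0; i < 4; i++)
        printf("dChroma[%d] = %d\n", i, dChroma[i]);
    return 0;
}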

[0139] In another embodiment, these "d" values can be divided or clipped to stay inside a certain range so that the storage for dchroma, and thus the buffer, has a (pre-)defined/definite limit on its size. For instance, a saturating 8-bit accumulator might be sufficient, and an embodiment with such an accumulation might therefore use: dchroma[idx] = Clip3(0, 255, dLuma[idx-1]); dchroma[idx] = Clip3(0, 255, dchroma[idx] + 2*dLuma[idx]); and dchroma[idx] = Clip3(0, 255, dchroma[idx] + dLuma[idx+1]).

[0140] According to an embodiment, it is also possible to count/determine the number of luma boundaries to be/being/having been filtered, in a similar fashion: assuming that a value fLuma[idx] is set to 0 if dLuma[idx] < (bS[idx]*β[idx])/2, and otherwise fLuma[idx] is set to 1, one gets the following filtering parameter: "fchroma[idx/N] = fLuma[idx-1] + 2*fLuma[idx] + fLuma[idx+1]". So fLuma is defined as a binary result of a comparison (i.e. 0 or 1), and fchroma is a parameter that is accumulated as the process is performed on the luma boundaries/samples/blocks. The eventual/final value (i.e. the value to be used during the encoding/decoding process) for fchroma can be retrieved at step 810. For example, if the accumulated/eventual/final value is 2 or above, then the chroma boundary/edge at idx (thus fchroma[idx]) is filtered, otherwise it is not (i.e. if the luma boundary/edge at idx, or both luma boundaries/edges at idx-1 and idx+1, are filtered, then the corresponding chroma boundary/edge is also filtered).
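A possible C rendering of this counting rule is sketched below; the function names and the example values are assumptions, while the decision threshold of 2 follows the example above.

#include <stdio.h>

/* Binary per-edge result: 0 if the luma bypass condition holds,
 * 1 if the luma boundary is filtered, following
 * "fLuma[idx] = 0 if dLuma[idx] < (bS[idx]*beta[idx])/2, else 1".   */
static int f_luma(int dLuma, int bS, int beta)
{
    return (dLuma < (bS * beta) / 2) ? 0 : 1;
}

/* 1-2-1 weighted count of filtered luma edges; the chroma edge is
 * filtered when the accumulated value reaches 2 or more.            */
static int chroma_edge_filtered(int fLeft, int fCur, int fRight)
{
    int fChroma = fLeft + 2 * fCur + fRight;
    return fChroma >= 2;
}

int main(void)
{
    int fl = f_luma(10, 2, 8), fc = f_luma(3, 2, 8), fr = f_luma(12, 2, 8);
    printf("filter chroma edge: %d\n", chroma_edge_filtered(fl, fc, fr));
    return 0;
}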

[0141] In another embodiment, a determination equivalent to step 603 may also be performed. In such an embodiment, step 804 is therefore very similar to step 703. In yet another embodiment, more control parameters of the luma filtering are stored/used for the chroma filtering parameter determination/derivation. For example, all or some of the luma filtering parameters determined at step 802 might be stored/used with the chroma filtering.

[0142] Now that the luma (and the chroma, if already determined/derived) filtering parameters have been updated or stored, the actual filtering of the current luma unit/block occurs at step 805. Then, whether all the luma units/blocks have been considered for filtering is checked/determined at step 806. This may correspond to determining/identifying whether the current unit/block is the last one in the CTB, or in the image. If it is not, then step 807 obtains/receives/identifies/selects the next luma unit/block before looping back to step 802. Otherwise, the chroma filtering starts at step 808 by obtaining/receiving/identifying/selecting the first chroma unit/block to process, with conditions/parameters assessed/considered similar to those used in step 801, e.g. whether the set of units/blocks to filter is a CTB or an image, whether it is a fixed amount/number of samples, or whether all samples around/related to a boundary are to be processed.

[0143] Similar to step 803, step 809 determines a position of the current chroma unit/block, said position being for use in accessing/selecting/obtaining the relevant information stored at step 804. This may simply use an index or a coordinate of said current chroma unit/block. However, if compression of the filtering parameter storage has been performed (in which case a position in the storage may not be the same as the index/coordinate of said current chroma unit/block), this may again require some processing to derive/determine/obtain the "compressed" position in the storage, e.g. by dividing said index/coordinate by a number potentially differing from the one used for the luma position based storage position derivation at step 803 discussed above. Filtering parameters can then be obtained/retrieved at step 810. Based on the obtained/retrieved filtering parameters, filtering of the chroma unit/block can then be performed.

[0144] For example, according to an embodiment such filtering is performed by using one or more of the following determinations/assessments: if the corresponding luma unit(s)/block(s) have been filtered, then the chroma unit/block is also filtered; if the count of luma units/blocks, or the count of filtered columns or lines (rows), or the count of luma samples filtered, or a combination thereof such as fchroma[idx], is strictly above a threshold (e.g. 1 for fchroma[idx]), then the chroma unit/block is filtered; and/or if one side of a luma boundary has been filtered, then the corresponding side of the chroma boundary is also filtered.

[0145] According to an embodiment, as determining the other chroma control parameters (e.g. tC) in the same way as the HEVC chroma deblocking filter parameters provides sufficient information for determining/computing a chroma equivalent of the luma β parameter, the determination/check of step 603/703 (e.g. a second level filtering (control) decision on whether to bypass the chroma filtering, similar to step 605, 705 or 312 for the luma filtering) may be performed at this stage (i.e. at the time of performing step 810 or right thereafter).

[0146] The stored information is the accumulated dLuma value for use with the current position and is then compared to an appropriate threshold. According to an embodiment, for a 1-to-1 mapping without accumulation (e.g. dchroma[idx] = dLuma[idx]), such a comparison is based on the condition: "dchroma[idx] < (bSchroma[idx]*βchroma[idx])/2". According to an embodiment, if there is an accumulation (i.e. dchroma[idx/N] = dLuma[idx-1] + 2*dLuma[idx] + dLuma[idx+1]), then this comparison is based on the condition: "dchroma[idx] < 2*bSchroma[idx]*βchroma[idx]". It is understood that by accumulation, we mean a case where determining dchroma requires access to data/information about more than one dLuma value, i.e. data/information relating to more than one luma boundary/block.
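As a purely illustrative C sketch, the two variants of this second-level check could be written as below; treating the condition being true as "bypass" follows the convention of the earlier bypass conditions, and the function names and values are assumptions.

#include <stdio.h>

/* 1-to-1 mapping (no accumulation): bypass the chroma filtering when
 * "dChroma[idx] < (bSChroma[idx]*betaChroma[idx])/2".                */
static int bypass_1to1(int dChroma, int bSChroma, int betaChroma)
{
    return dChroma < (bSChroma * betaChroma) / 2;
}

/* Accumulated case (dChroma holds a 1-2-1 weighted sum of dLuma
 * contributions), with the threshold scaled accordingly:
 * "dChroma[idx] < 2*bSChroma[idx]*betaChroma[idx]".                  */
static int bypass_accumulated(int dChroma, int bSChroma, int betaChroma)
{
    return dChroma < 2 * bSChroma * betaChroma;
}

int main(void)
{
    printf("1-to-1: %d  accumulated: %d\n",
           bypass_1to1(5, 2, 8), bypass_accumulated(20, 2, 8));
    return 0;
}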

[0147] In another embodiment, wherein a default initialization is performed for the stored filtering parameters, such an initialization enables identifying/distinguishing, and then processing accordingly, the different cases where: there is no corresponding luma boundary; the corresponding luma boundary has been filtered; and the corresponding luma boundary has not been filtered. If there is no corresponding luma boundary, a default filtering parameter such as a default dchroma value can be used. This can be set so as to force the filtering or non-filtering of the corresponding chroma boundary. For instance, for a boundary having at least one INTRA block, this filtering may be performed/enabled, but disabled otherwise, or the final decision may be deferred until conformance with at least one other condition has been assessed.

[0148] According to some of the embodiments described herein, the determination of dchroma[idx] values does not use any chroma samples, and only luma samples are used to determine these values. Other embodiments also show how (e.g. in addition to the features of the previous embodiment) some syntax related to chroma (i.e. values having a dependency on the chroma samples) might be used to enable derivation of bSchroma[idx] and βchroma[idx] for use with a chroma deblocking filter. This is advantageous because such independence from chroma samples, together with separate chroma filtering parameters, avoids potential availability/dependency issues whilst enabling finer control of the chroma deblocking filter by using the higher resolution luma samples and chroma filtering parameters that take into account differences between the chroma and luma samples.

[0149] According to another embodiment of the present invention, more chroma samples (more than with the HEVC chroma deblocking filter) are filtered by the chroma deblocking filter as described herein. In the foregoing description of the embodiments shown in Figure 6, Figure 7 & Figure 8, it was mentioned that the luma control may consist in selecting chroma filters with an additional/extra length/range (e.g. in relation to chroma filter 1 at step 721 or chroma filter 2 at step 723 in Figure 7) compared to the length/range in HEVC. To achieve this, according to an embodiment, the weak filtering of the p1i and q1i samples from the luma weak filtering in HEVC is reused. However, a drawback of this approach is that it requires access to more samples (i.e. the luma samples), thereby incurring a memory bandwidth cost, and possibly preventing it from being used in partitions smaller than 4 in the filtered direction, as there may be an overlap in chroma samples associated with the filtering. Furthermore, in the case of QTBT, there may be two partitions of two samples side by side. This leads to a problem of incompatibility with parallel processing, which, according to an embodiment, can be alleviated/solved by forbidding the filtering of a boundary between such units/blocks. However, this then means blocking artefacts around such a boundary remain unfiltered.

[0150] According to an embodiment, as an alternative to simply forbidding the filtering, the following filtering parameters (e.g. based on chroma samples) are used to enable filtering of a larger number of samples without requiring more samples as input: Δp = Δ>>1; Δq = Δ>>1; p1i' = Clip1C( p1i + Δp ); and q1i' = Clip1C( q1i - Δq ). The determination/computation of Δp and Δq is therefore very simple, and does not rely on (i.e. is independent of) the samples pJi and qJi with J > 1 (i.e. samples that are more than 2 samples away from the boundary). This means it is possible to perform parallel processing as there is no dependency on the filtering order. According to yet another embodiment, the formula for the parameters Δp and Δq can be modified, for instance to take into account a unit/block type on the corresponding side, e.g. if the Q block is an inter coded block, Δq = Δ>>2, otherwise Δq = Δ>>1. Samples p1i & q1i can also be filtered independently, i.e. one is filtered while the other is not, or they use a different filtering parameter/method: e.g. compared to what happens in the luma weak filtering, a reason for doing this could be different sizes of the chroma coding units/blocks on the relevant sides of the boundary.
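A C sketch of this extended filtering of the second samples is given below, assuming 8-bit samples; Clip1C follows the HEVC-style definition, while the function name and the sample values are assumptions.

#include <stdio.h>

static int Clip3(int lo, int hi, int x)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

static int Clip1C(int x) { return Clip3(0, 255, x); }  /* 8-bit assumed */

/* Extend the chroma filtering to the p1/q1 samples using only delta,
 * as in "Δp = Δ>>1; Δq = Δ>>1; p1i' = Clip1C(p1i + Δp);
 * q1i' = Clip1C(q1i - Δq)".  No sample further than two positions
 * from the boundary is read, which keeps the operation parallelizable. */
static void filter_second_samples(int *p1, int *q1, int delta)
{
    int dp = delta >> 1;
    int dq = delta >> 1;
    *p1 = Clip1C(*p1 + dp);
    *q1 = Clip1C(*q1 - dq);
}

int main(void)
{
    int p1 = 96, q1 = 140;
    filter_second_samples(&p1, &q1, 6);
    printf("p1'=%d q1'=%d\n", p1, q1);
    return 0;
}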

[0151] In yet another embodiment, the values of the p1i & p0i samples for p1i', and of the q1i & q0i samples for q1i', are considered when determining Δp & Δq, e.g.: Δp = Clip3( -abs(p0i - p1i)>>5, abs(p0i - p1i)>>5, Δ>>1 ); and Δq = Clip3( -abs(q0i - q1i)>>5, abs(q0i - q1i)>>5, Δ>>1 ). It is understood that these are provided herein as an illustration, and different embodiments of the present invention might use these sample values in different ways to obtain Δp & Δq, and eventually p1i' & q1i'. In another embodiment, if Δ is not zero, then Δp and Δq are set to -1 or 1 depending on the signs of (p0i - p1i) and (q0i - q1i).
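For illustration, the gradient-limited variant could look like the following C sketch; the helper names are assumptions, and the shift ordering reflects one reading of the expression above (the limit being abs(p0i - p1i) >> 5).

#include <stdio.h>
#include <stdlib.h>

static int Clip3(int lo, int hi, int x)
{
    return x < lo ? lo : (x > hi ? hi : x);
}

/* Variant in which the magnitude of the p1/q1 offsets is limited by
 * the local gradient, as in
 * "Δp = Clip3( -abs(p0i - p1i)>>5, abs(p0i - p1i)>>5, Δ>>1 )".       */
static int delta_p(int p0, int p1, int delta)
{
    int lim = abs(p0 - p1) >> 5;
    return Clip3(-lim, lim, delta >> 1);
}

static int delta_q(int q0, int q1, int delta)
{
    int lim = abs(q0 - q1) >> 5;
    return Clip3(-lim, lim, delta >> 1);
}

int main(void)
{
    printf("dp=%d dq=%d\n", delta_p(100, 96, 6), delta_q(130, 140, 6));
    return 0;
}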

[0152] A reason for filtering more samples is to enable use of different types of filters comprising a relatively/comparatively stronger filter (implemented by filtering more samples), e.g. step 721.

[0153] Figure 9A shows a combination of boundary filtering disabling/enabling according to an embodiment of the present invention, illustrating use of the aforementioned determinations/computations. Thick black boundaries are processed normally. However, the dashed and dotted ones are more problematic, as they cannot be processed in parallel. It is still desirable to process as many boundaries at least partially in parallel as possible. Therefore, an embodiment comprises disabling filtering of the dotted boundaries 901, 902 & 903. It is however desirable to filter the corresponding dashed boundaries 910 to 914, potentially even more so than other filtered boundaries, as 901 to 903 are not being filtered. So according to an embodiment, the following is performed: filter at least the q1i samples (p1i optional) of 910 according to the above formula; filter at least the p1i samples (q1i optional) of 911; filter at least the q1i samples (p1i optional) of 912; filter at least the p1i and q1i samples of 913; and filter at least the p1i samples (q1i optional) of 914.

[0154] It is understood that according to another embodiment, a deblocking filter according to an embodiment described above (say the "new filter") may be used in combination with a known filter, for example only using the new filter when certain conditions are met and using the known filter otherwise, or vice versa. Figure 9B shows different types of boundaries and how they are processed according to an embodiment of the present invention. For example, according to an embodiment, a rule/condition for deciding/determining a filter size (i.e. whether to use the known filter or the new filter capable of filtering more samples) is as follows: if the right part/side of boundary 910 is known to have 2 partitions of size 2, the right part of 910 will filter at most samples q0o and q1o, and only them, based on the values of samples q0o, q1o and q2o; no filter is applied to 901 (e.g. it is skipped); and the left part/side of boundary 911 can only filter sample q3o using at most samples q2o & q3o, which are on its left.

[0155] Also, if there is a symmetry about the boundary 910, e.g. the left part/side of 910 has a similar distribution of samples/boundaries, then the same rule is applied symmetrically. For example, the same rule is applied to the boundary 921 (as described for the boundary 901) and to 922 (as described for the boundary 911), replacing the p samples with their counterpart q samples in the above rules.

[0156] [Other Embodiments]

[0157] Any step of the method/process according to the invention or any function described herein may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the steps/functions may be stored on or transmitted over, as one or more instructions or code or a program, a computer-readable medium, and executed by one or more hardware-based processing units such as a programmable computing machine, which may be a PC ("Personal Computer"), a DSP ("Digital Signal Processor"), a circuit, a circuitry, a processor and a memory, a general purpose microprocessor or a central processing unit, a microcontroller, an ASIC ("Application-Specific Integrated Circuit"), a field programmable logic array (FPGA), or other equivalent integrated or discrete logic circuitry. Accordingly, the term "processor" as used herein may refer to any of the foregoing structures or any other structure suitable for implementation of the techniques described herein.

[0158] Embodiments of the present invention can also be realized by a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g. a chip set). Various components, modules, or units are described herein to illustrate functional aspects of devices/apparatuses configured to perform those embodiments, but they do not necessarily require realization by different hardware units. Rather, various modules/units may be combined in a codec hardware unit or provided by a collection of interoperative hardware units, including one or more processors in conjunction with suitable software/firmware.

[0159] Embodiments of the present invention can be realized by a computer of a system or apparatus that reads out and executes computer executable instructions (e.g. one or more programs) recorded on a storage medium to perform the modules/units/functions of one or more of the above-described embodiments and/or that includes one or more processing units or circuits for performing the functions of one or more of the above-described embodiments, and by a method performed by the computer of the system or apparatus by, for example, reading out and executing the computer executable instructions from the storage medium to perform the functions of one or more of the above-described embodiments and/or controlling the one or more processing units or circuits to perform the functions of one or more of the above-described embodiments. The computer may include a network of separate computers or separate processing units to read out and execute the computer executable instructions. The computer executable instructions may be provided to the computer, for example, from a computer-readable medium such as a communication medium via a network or a tangible storage medium. The communication medium may be a signal/bitstream/carrier wave. The tangible storage medium is a "non-transitory computer-readable storage medium" which may include, for example, one or more of a hard disk, a random-access memory (RAM), a read only memory (ROM), a storage of distributed computing systems, an optical disk (such as a compact disc (CD), digital versatile disc (DVD), or Blu-ray Disc (BD)™), a flash memory device, a memory card, and the like. At least some of the steps/functions may also be implemented in hardware by a machine or a dedicated component, such as an FPGA ("Field-Programmable Gate Array") or an ASIC ("Application-Specific Integrated Circuit").

[0160] Figure 10 is a schematic block diagram of a computing device 1300 for implementation of one or more embodiments of the invention. The computing device 1300 may be a device such as a micro-computer, a workstation or a light portable device. The computing device 1300 comprises a communication bus connected to:
- a central processing unit (CPU) 1301, such as a microprocessor;
- a random access memory (RAM) 1302 for storing the executable code of the method of embodiments of the invention as well as the registers adapted to record variables and parameters necessary for implementing the method for encoding or decoding at least part of an image according to embodiments of the invention, the memory capacity thereof can be expanded by an optional RAM connected to an expansion port for example;
- a read only memory (ROM) 1303 for storing computer programs for implementing embodiments of the invention;
- a network interface (NET) 1304, typically connected to a communication network over which digital data to be processed are transmitted or received. The network interface (NET) 1304 can be a single network interface, or composed of a set of different network interfaces (for instance wired and wireless interfaces, or different kinds of wired or wireless interfaces). Data packets are written to the network interface for transmission or are read from the network interface for reception under the control of the software application running in the CPU 1301;
- a user interface (UI) 1305, which may be used for receiving inputs from a user or to display information to a user;
- a hard disk (HD) 1306, which may be provided as a mass storage device;
- an Input/Output module (IO) 1307, which may be used for receiving/sending data from/to external devices such as a video source or display.
The executable code may be stored either in the ROM 1303, on the HD 1306 or on a removable digital medium such as, for example, a disk. According to a variant, the executable code of the programs can be received by means of a communication network, via the NET 1304, in order to be stored in one of the storage means of the communication device 1300, such as the HD 1306, before being executed. The CPU 1301 is adapted to control and direct the execution of the instructions or portions of software code of the program or programs according to embodiments of the invention, which instructions are stored in one of the aforementioned storage means. After powering on, the CPU 1301 is capable of executing instructions from main RAM memory 1302 relating to a software application after those instructions have been loaded from the program ROM 1303 or the HD 1306, for example. Such a software application, when executed by the CPU 1301, causes the steps of the method according to the invention to be performed.

[0161] It is also understood that any result of comparison, determination, assessment, selection, execution, performing, or consideration described above, for example a selection made during an encoding or filtering process, may be indicated in or determinable/inferable from data in a bitstream, for example a flag or data indicative of the result, so that the indicated or determined/inferred result can be used in the processing instead of actually performing the comparison, determination, assessment, selection, execution, performing, or consideration, for example during a decoding process.

[0162] While the present invention has been described with reference to embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. It will be appreciated by those skilled in the art that various changes and modification might be made without departing from the scope of the invention, as defined in the appended claims. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined in any combination, except combinations where at least some of such features and/or steps are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise. Thus, unless expressly stated otherwise, each feature disclosed is one example only of a generic series of equivalent or similar features.

[0163] In the claims, the word“comprising” does not exclude other elements or steps, and the indefinite article“a” or“an” does not exclude a plurality. The mere fact that different features are recited in mutually different dependent claims does not indicate that a combination of these features cannot be advantageously used.

[0164] It is understood that where a HEVC compliant method/process or device (e.g. an encoder, a decoder, a luma deblocking filter, or a chroma deblocking filter of HEVC) is described in relation to an embodiment of the present invention, not all features of the HEVC compliant method/process or device need to be included in the embodiment. As long as those features that interact with other parts of the embodiment (e.g. depending on the embodiment, receiving/obtaining luma samples or a HEVC luma deblocking filter’s filtering parameter setting step) are included, the embodiment of the invention can be put into effect.

[0165] It is also understood that according to another embodiment of the present invention, a decoder according to an aforementioned embodiment is provided in a user terminal such as a computer, a mobile phone (a cellular phone), a tablet or any other type of a device (e.g. a display apparatus) capable of providing/displaying a content to a user. According to yet another embodiment, an encoder according to an aforementioned embodiment is provided in an image capturing apparatus which also comprises a camera, a video camera or a network camera (e.g. a closed-circuit television or video surveillance camera) which captures and provides the content for the encoder to encode.




 