

Title:
SIMPLIFIED CROSS COMPONENT PREDICTION
Document Type and Number:
WIPO Patent Application WO/2020/035837
Kind Code:
A1
Abstract:
Devices, systems and methods for digital video coding, which includes simplified cross-component prediction, are described. In a representative aspect, a method for video coding includes receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component, predicting, using a linear model, a first set of samples of the at least one chroma component based on a second set of samples that is selected by sub-sampling samples of the at least one luma component, and processing, based on the first and second sets of samples, the bitstream representation to generate the current block. In another representative aspect, the second set of samples are neighboring samples of the current block and are used for an intra prediction mode of the at least one luma component.

Inventors:
ZHANG KAI (US)
ZHANG LI (US)
LIU HONGBIN (CN)
WANG YUE (CN)
Application Number:
PCT/IB2019/056967
Publication Date:
February 20, 2020
Filing Date:
August 19, 2019
Assignee:
BEIJING BYTEDANCE NETWORK TECH CO LTD (CN)
BYTEDANCE INC (US)
International Classes:
H04N19/117; H04N19/167; H04N19/176; H04N19/186; H04N19/59; H04N19/593; H04N19/80
Domestic Patent References:
WO2017139937A12017-08-24
Foreign References:
US20160277762A12016-09-22
US20170244975A12017-08-24
US20180077426A12018-03-15
Other References:
MISRA K ET AL: "Description of SDR and HDR video coding technology proposal by Sharp and Foxconn", 10. JVET MEETING; 10-4-2018 - 20-4-2018; SAN DIEGO; (THE JOINT VIDEO EXPLORATION TEAM OF ISO/IEC JTC1/SC29/WG11 AND ITU-T SG.16 ); URL: HTTP://PHENIX.INT-EVRY.FR/JVET/,, no. JVET-J0026-v9, 13 April 2018 (2018-04-13), XP030151194
Attorney, Agent or Firm:
LIU, SHEN & ASSOCIATES (CN)
Claims:
CLAIMS

What is claimed is:

1. A method for video processing, comprising:

determining, using training samples of a luma block, a linear model used for predicting a first chroma block; and

determining, using a set of samples of the luma block and the linear model, samples of the first chroma block;

wherein the training samples or the set of samples are determined without using a multi-tap downsampling filter.

2. The method of claim 1, wherein the linear model comprises a first parameter, alpha, and a second parameter, beta, and wherein the alpha and beta are derived from the training samples by sub-sampling training samples.

3. The method of claim 2, wherein the sub-sampling uses non-consecutive luma samples.

4. The method of claim 1, wherein the set of samples corresponds to luma samples co-located with the samples of the first chroma block.

5. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block.

6. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block.

7. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

8. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

9. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block.

10. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block.

11. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

12. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

13. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above and left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

14. The method of any of claims 1-4, wherein the training samples correspond to neighboring samples above, left, above-right and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

15. A method for video processing, comprising:

determining, using training samples of a luma block, a linear model used for predicting a first chroma block; and

determining, using a set of samples of the luma block and the linear model, samples of the first chroma block;

wherein the training luma samples are limited to luma samples neighboring the luma block that are at positions used for an intra prediction process.

16. The method of claim 15, wherein a chroma sample in the first chroma block is determined by applying a 2-tap filter to a first sample and a second sample in the set of samples of the luma block at a position corresponding to a position of the chroma block.

17. The method of claim 16, wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above neighboring samples.

18. The method of claim 16, wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above and above right neighboring samples.

19. The method of claim 16, wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left neighboring samples.

20. The method of claim 16, wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left and left-bottom neighboring samples.

21. The method of any of claims 16 to 20, wherein the two-tap filter is an averaging filter.

22. The method of any of claims 16 to 21, wherein the two-tap filter averages a sum of one plus the first sample plus the second sample.

23. A method of video processing, comprising:

determining, for a conversion between a current video block and a bitstream representation of the current video block, selectively based on a rule related to the current video block comprising a luma block and a first chroma block, a cross-component prediction scheme used for generating samples of the first chroma block from a set of samples of the luma block; and

generating the samples of the first chroma block from the cross-component prediction scheme,

wherein the cross-component prediction scheme is one of:

a first cross-component prediction scheme that uses a linear model generated from training samples of the luma block such that the training samples and the set of samples are determined without using a multi-tap downsampling filter; or

a second cross-component prediction scheme in which the training luma samples are limited to luma samples neighboring the current video block at positions used for an intra prediction process;

wherein the rule specifies to select the cross-component prediction scheme depending on a video region to which the current video block belongs.

24. The method of claim 23, wherein the cross-component prediction scheme selected for the current video block is different from another cross-component prediction scheme selected for another video block in the video region.

25. The method of claim 24, wherein the video region comprises a video slice.

26. The method of claim 24, wherein the video region comprises a video picture.

27. The method of claim 24, wherein the video region comprises a sequence of video pictures.

28. The method of claim 23, wherein the coding condition is signaled in the coded representation for the current video block.

29. The method of claim 23, wherein the coding condition is not explicitly signaled in the coded representation for the current video block.

30. The method of claim 28 or 29, wherein the rule specifies to select the cross-component prediction scheme based on a shape or a size of the current video block.

31. The method of any of claims 28 or 29, wherein:

the training samples are selected from:

neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block;

neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block;

neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block;

neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block;

neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

neighboring samples above and left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block; or

neighboring samples above, left, above-right and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block; and/or wherein the first chroma block is determined by applying a 2-tap filter to a first sample and a second sample in the set of samples of the luma block at a position corresponding to a position of the first chroma block;

wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above neighboring samples;

wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above and above-right neighboring samples;

wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left neighboring samples; or

wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left and left-bottom neighboring samples.

32. The method of claim 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies disabling the cross-component prediction scheme due to W<=T1 and H<=T2, where T1 and T2 are integers.

33. The method of claim 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies disabling the cross-component prediction scheme due to W*H<=T1.

34. The method of claim 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W<=T1 and H<=T2, where T1 and T2 are integers, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

35. The method of claim 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W<=T1 and H<=T2, where T1 and T2 are integers, and wherein the specific cross-component prediction scheme is not signaled in the coded representation.

36. The method of claim 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W*H<=T1, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

37. The method of claim 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W>N*H, and wherein the specific cross-component prediction scheme is not signaled in the coded representation, where N is an integer.

38. The method of claim 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W>N*H, and wherein the specific cross-component prediction scheme uses only pixels from above or above-right neighboring samples, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

39. The method of claim 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to N*W<H, and wherein the specific cross-component prediction scheme is signaled in the coded representation, where N is an integer.

40. The method of claim 32, wherein the current video block has a width W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W> N*H, and wherein the specific cross-component prediction scheme uses only pixels from left or left-bottom neighboring samples.

41. The method of any of claims 34 to 40, wherein T1 = 2.

42. The method of any of claims 10 to 17, wherein T2 = 2.

43. The method of any of claims 10 to 17, wherein T1 = 4.

44. The method of any of claims 10 to 17, wherein T2 = 4.

45. The method of any of claims 10 to 17, wherein T2 = 16.

46. A method of video processing, comprising:

determining, for a conversion between a video block of a video and a coded representation of the video block, samples of a first chroma block of the video block from a luma block of the video block;

wherein the samples of the first chroma block correspond to a weighted combination of a first intermediate chroma block and N second intermediate chroma blocks,

wherein the first intermediate chroma block is generated from a first set of samples of the luma block using a first cross-component linear model, wherein the first cross-component linear model is generated using a first training sequence of luma samples; and

wherein the N second intermediate chroma blocks are generated from N second sets of samples of the luma block using N second cross-component linear models, wherein the N second cross-component linear models are generated using N second training sequences of luma samples, where N is an integer.

47. The method of claim 46, wherein at least some of the first cross-component linear model and N second cross-component linear models are different from each other within a region of the video, where the region of video corresponds to a slice or a picture or a video sequence.

48. The method of any of claims 46-47, wherein a coded representation includes an identification of the first cross-component linear model and the N second cross-component linear models.

49. The method of claim 47, wherein the identification is included in a sequence parameter set level or a picture parameter set level or a slice header or a group of coding tree units or a coding tree unit or a coding unit level.

50. The method of any of claims 46-49, wherein the coded representation includes an indication of the first cross-component linear model and/or the N second cross-component linear models based on a rule that depends on a width W of the video block or a height H of the video block.

51. The method of claim 50, wherein the indication is included at a sequence parameter set level or a picture parameter set level or a slice header level or a group of coding tree units level or a coding tree unit level or a coding block level.

52. The method of any of the above claims, wherein the method is selectively applied to the current video block based on an applicability rule related to the current video block satisfying a position condition within a current picture.

53. The method of claim 52, wherein the applicability rule specifies to apply a method of the above claims due to the video block being located at a top boundary of a coding tree unit.

54. The method of claim 52 or 53, wherein the applicability rule specifies to apply a method of the above claims due to the current video block being located at a left boundary of a coding tree unit.

55. The method of any of claims 1 to 54, wherein the set of samples of the luma block is generated using a downsampling process wherein a first downsampling filter is used for downsampling luma samples inside the video block and a second downsampling filter is applied to luma samples outside the video block to generate the set of samples of the luma block.

56. The method of claim 55, wherein the first downsampling filter corresponds to a legacy filter used by the Joint Exploration Model (JEM).

57. The method of any of claims 55-56, wherein the second downsampling filter downsamples luma samples above the video block to lower left and lower right positions.

58. The method of claim 55, wherein luma samples above adjacent to the video block are denoted as a[i], then d[i]=(a[2i-1]+2*a[2i]+a[2i+1]+2)>>2, where d[i] represents the down-sampled luma samples, and where i is a variable representing horizontal sample offset.

59. The method of claim 58, wherein, if the sample a[2i-1] is unavailable, d[i] is determined as d[i]=(3*a[2i]+a[2i+1]+2)>>2.

60. The method of claim 58, wherein, if the sample a[2i+1] is unavailable, then d[i]=(a[2i-1]+3*a[2i]+2)>>2.

61. The method of any of claims 55-56, wherein the second downsampling filter downsamples luma samples outside and to the left of the video block to top-right and bottom-right positions.

62. The method of any of claims 55-56, wherein the second downsampling filter downsamples luma samples outside and to the left of the video block to a position midway between the top-right and bottom-right positions, such that if left adjacent samples to the current block are denoted as a[j], then d[j]=(a[2j]+a[2j+1]+1)>>1, where d[j] represents down-sampled luma samples.

63. A method of video processing, comprising:

downsampling a set of samples of a luma block using a downsampling process, wherein a downsampling filter used in the downsampling process depends on positions of luma samples being downsampled to generate a set of downsampled samples of the luma block;

determining, from the set of downsampled samples of the luma block, samples of a first chroma block using a linear model.

64. The method of claim 63, wherein the linear model is defined by a first parameter alpha and a second parameter beta and wherein a chroma sample predc(i, j) at a position (i, j) is determined from a luma sample recL(i, j) as

predc(i, j) = alpha*recL(i, j) + beta.

65. The method of any of claims 63-64, further including applying a boundary filter to the samples of the first chroma block that are in a region of the current video block.

66. The method of claim 65, wherein samples above adjacent to the video block are denoted as a[-1][j], a sample in the ith row and jth column of the first chroma block is a[i][j], and the applying the boundary filter is performed selectively based on a value of i, and comprises calculating a'[i][j]=(w1*a[i][j]+w2*a[-1][j]+2^(N-1))>>N, where w1+w2=2^N represent weights.

67. The method of claim 66, wherein the applying the boundary filtering is performed only for i<=K.

68. The method of claim 66, wherein w1 and w2 are a function of row index i.

69. The method of claim 65, wherein left-adjacent samples to the video block are denoted as a[i][-1], a sample in the ith row and jth column of the first chroma block is a[i][j], and the applying the boundary filter is performed selectively based on a value of j, and comprises calculating a'[i][j]=(w1*a[i][j]+w2*a[i][-1]+2^(N-1))>>N, where w1+w2=2^N.

70. The method of claim 69, wherein the applying the boundary filtering is performed only for j<=K.

71. The method of claim 70, wherein w1 and w2 are a function of column index j.

72. The method of claim 67 or 70, wherein K = 0, w1 = w2 = 1.

73. The method of claim 67 or 70, wherein K = 0, w1 = 3, w2 = 1.

74. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of claims 1 to 73.

75. A computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out the method in any one of claims 1 to 73.

76. A video decoding apparatus comprising a processor configured to implement a method recited in one or more of claims 1 to 73.

77. A video encoding apparatus comprising a processor configured to implement a method recited in one or more of claims 1 to 73.

78. A method, apparatus or system described in the present document.

Description:
SIMPLIFIED CROSS COMPONENT PREDICTION

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] Under the applicable patent law and/or rules pursuant to the Paris Convention, this application is made to timely claim the priority to and benefits of International Patent Application No. PCT/CN2019/071382, filed on January 11, 2019, and International Patent Application No. PCT/CN2018/100965, filed on August 17, 2018. For all purposes under U.S. law, the entire disclosures of International Patent Application Nos. PCT/CN2019/071382 and PCT/CN2018/100965 are incorporated by reference as part of the disclosure of this application.

TECHNICAL FIELD

[0002] This patent document relates to video coding techniques, devices and systems.

BACKGROUND

[0003] In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.

SUMMARY

[0004] Devices, systems and methods related to digital video coding, and specifically, low complexity implementations for the cross-component linear model (CCLM) prediction mode in video coding, are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.

[0005] In one example aspect, a method of video processing is disclosed. The method includes determining, using training samples of a luma block, a linear model used for predicting a first chroma block; and determining, using a set of samples of the luma block and the linear model, samples of the first chroma block; wherein the training samples or the set of samples are determined without using a multi-tap downsampling filter.

[0006] In another example aspect, another video processing method is disclosed. The method includes determining, using training samples of a luma block, a linear model used for predicting a first chroma block; and determining, using a set of samples of the luma block and the linear model, samples of the first chroma block; wherein the training luma samples are limited to luma samples neighboring the luma block that are at positions used for an intra prediction process.

[0007] In another example aspect, another video processing method is disclosed. The method includes determining, for a conversion between a current video block and a bitstream representation of the current video block, selectively based on a rule related to the current video block comprising a luma block and a first chroma block, a cross-component prediction scheme used for generating samples of the first chroma block from a set of samples of the luma block; and generating the samples of the first chroma block from the cross-component prediction scheme, wherein the cross-component prediction scheme is one of: a first cross-component prediction scheme that uses a linear model generated from training samples of the luma block such that the training samples and the set of samples are determined without using a multi-tap downsampling filter; or a second cross-component prediction scheme in which the training luma samples are limited to luma samples neighboring the current video block at positions used for an intra prediction process; wherein the rule specifies to select the cross-component prediction scheme depending on a video region to which the current video block belongs.

[0008] In another example aspect, another video processing method is disclosed. The method includes determining, for a conversion between a video block of a video and a coded representation of the video block, samples of a first chroma block of the video block from a luma block of the video block; wherein the samples of the first chroma block correspond to a weighted combination of a first intermediate chroma block and N second intermediate chroma blocks, wherein the first intermediate chroma block is generated from a first set of samples of the luma block using a first cross-component linear model, wherein the first cross-component linear model is generated using a first training sequence of luma samples; and wherein the N second intermediate chroma blocks are generated from N second sets of samples of the luma block using N second cross-component linear models, wherein the N second cross-component linear models are generated using N second training sequences of luma samples, where N is an integer.

[0009] In another example aspect, another video processing method is disclosed. The method includes downsampling a set of samples of a luma block using a downsampling process, wherein a downsampling filter used in the downsampling process depends on positions of luma samples being downsampled to generate a set of downsampled samples of the luma block; determining, from the set of downsampled samples of the luma block, samples of a first chroma block using a linear model.

[0010] In another representative aspect, the disclosed technology may be used to provide a method for simplified cross-component prediction. This method includes receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component, predicting, using a linear model, a first set of samples of the at least one chroma component based on a second set of samples that is selected by sub-sampling samples of the at least one luma component, and processing, based on the first and second sets of samples, the bitstream representation to generate the current block.

[0011] In another representative aspect, the disclosed technology may be used to provide a method for simplified cross-component prediction. This method includes receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component, predicting, using a linear model, a first set of samples of the at least one chroma component based on a second set of samples that are neighboring samples and are used for an intra prediction mode of the at least one luma component, and processing, based on the first and second sets of samples, the bitstream representation to generate the current block.

[0012] In yet another representative aspect, the disclosed technology may be used to provide a method for simplified cross-component prediction. This method includes receiving a bitstream representation of a picture segment comprising a plurality of blocks, wherein the plurality of blocks comprises a current block, and wherein each of the plurality of blocks comprises a chroma component and a luma component, performing a predicting step on each of the plurality of blocks, and processing, based on the respective first and second sets of samples, the bitstream representation to generate the respective block of the plurality of blocks.

[0013] In yet another representative aspect, the disclosed technology may be used to provide a method for simplified cross-component prediction. This method includes receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component, performing, a predetermined number of times, a predicting step on the current block, generating a final first set of samples based on each of the predetermined number of first sets of samples, and processing, based on at least the final first set of samples, the bitstream representation to generate the current block.

[0014] In yet another representative aspect, the disclosed technology may be used to provide a method for simplified cross-component prediction. This method includes determining a dimension of a first video block; determining parameters regarding application of a cross-component linear model (CCLM) prediction mode based on the determination of the dimension of the first video block; and performing further processing of the first video block using the CCLM prediction mode in accordance with the parameters.

[0015] In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.

[0016] In yet another representative aspect, a device that is configured or operable to perform the above-described method is disclosed. The device may include a processor that is programmed to implement this method.

[0017] In yet another representative aspect, a video decoder apparatus may implement a method as described herein.

[0018] The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0019] FIG. 1 shows an example of locations of samples used for the derivation of the weights of the linear model used for cross-component prediction.

[0020] FIG. 2 shows an example of classifying neighboring samples into two groups.

[0021] FIG. 3A shows an example of a chroma sample and its corresponding luma samples.

[0022] FIG. 3B shows an example of down filtering for the cross-component linear model (CCLM) in the Joint Exploration Model (JEM).

[0023] FIG. 4 shows an exemplary arrangement of the four luma samples corresponding to a single chroma sample.

[0024] FIGS. 5A and 5B show an example of samples of a 4x4 chroma block with the neighboring samples, and the corresponding luma samples.

[0025] FIGS. 6A-6J show examples of CCLM without luma sample down filtering.

[0026] FIGS. 7A-7D show examples of CCLM only requiring neighboring luma samples used in normal intra-prediction.

[0027] FIGS. 8A and 8B show examples of a coding unit (CU) at a boundary of a coding tree unit (CTU).

[0028] FIG. 9 shows a flowchart of an example method for cross-component prediction in accordance with the disclosed technology.

[0029] FIG. 10 shows a flowchart of another example method for cross-component prediction in accordance with the disclosed technology.

[0030] FIG. 11 shows a flowchart of yet another example method for cross-component prediction in accordance with the disclosed technology.

[0031] FIG. 12 shows a flowchart of yet another example method for cross-component prediction in accordance with the disclosed technology.

[0032] FIG. 13 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.

[0033] FIGS. 14-18 are flowcharts of various methods of video processing.

[0034] FIGS. 19A-19B depict down-sampled positions for luma samples inside and outside the current block. Suppose the current block's size is WxH. FIG. 19A demonstrates the case when W above neighbouring samples and H left neighbouring samples are involved. FIG. 19B demonstrates the case when 2W above neighbouring samples and 2H left neighbouring samples are involved.

[0035] FIG. 20 is a block diagram of an example video processing system in which disclosed techniques may be implemented.

DETAILED DESCRIPTION

[0036] Due to the increasing demand for higher resolution video, video coding methods and techniques are ubiquitous in modern technology. Video codecs typically include an electronic circuit or software that compresses or decompresses digital video, and are continually being improved to provide higher coding efficiency. A video codec converts uncompressed video to a compressed format or vice versa. There are complex relationships between the video quality, the amount of data used to represent the video (determined by the bit rate), the complexity of the encoding and decoding algorithms, sensitivity to data losses and errors, ease of editing, random access, and end-to-end delay (latency). The compressed format usually conforms to a standard video compression specification, e.g., the High Efficiency Video Coding (HEVC) standard (also known as H.265 or MPEG-H Part 2), the Versatile Video Coding (VVC) standard to be finalized, or other current and/or future video coding standards.

[0037] Embodiments of the disclosed technology may be applied to existing video coding standards (e.g., HEVC, H.265) and future standards to improve runtime performance. Section headings are used in the present document to improve readability of the description and do not in any way limit the discussion or the embodiments (and/or implementations) to the respective sections only.

1 Embodiments of cross-component prediction

[0038] Cross-component prediction is a form of the chroma-to-luma prediction approach that has a well-balanced trade-off between complexity and compression efficiency improvement.

1.1 Examples of the cross-component linear model (CCLM)

[0039] In some embodiments, and to reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode (also referred to as LM) is used in the JEM, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:

[0040] predC(i,j) = a · recL'(i,j) + b    (1)

[0041] Here, predC(i,j) represents the predicted chroma samples in a CU and recL'(i,j) represents the downsampled reconstructed luma samples of the same CU for color formats 4:2:0 or 4:2:2, while recL'(i,j) represents the reconstructed luma samples of the same CU for color format 4:4:4. CCLM parameters a and b are derived by minimizing the regression error between the neighboring reconstructed luma and chroma samples around the current block as follows:

[0044] Here, L(n) represents the down-sampled (for color formats 4:2:0 or 4:2:2) or original (for color format 4:4:4) top and left neighboring reconstructed luma samples, C(n) represents the top and left neighboring reconstructed chroma samples, and the value of N is equal to twice the minimum of the width and height of the current chroma coding block.
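The parameter derivation just described can be illustrated with a short sketch. The Python fragment below is an illustrative floating-point rendering only (the codec itself uses fixed-point arithmetic, and the closed-form least-squares expressions are not reproduced in the text above); the function and variable names are ours, not the specification's.

```python
# Illustrative sketch of CCLM parameter derivation and prediction (equation (1)).
# L and C hold the N neighboring reconstructed luma (down- or sub-sampled) and
# chroma training samples; rec_l is the (down- or sub-sampled) co-located luma block.

def derive_cclm_params(L, C):
    """Least-squares fit of C(n) ~ a * L(n) + b over the training samples."""
    N = len(L)
    sum_l, sum_c = sum(L), sum(C)
    sum_ll = sum(l * l for l in L)
    sum_lc = sum(l * c for l, c in zip(L, C))
    denom = N * sum_ll - sum_l * sum_l
    if denom == 0:                      # flat luma neighborhood: fall back to the mean
        return 0.0, sum_c / N
    a = (N * sum_lc - sum_l * sum_c) / denom
    b = (sum_c - a * sum_l) / N
    return a, b

def predict_chroma(rec_l, a, b):
    """Apply equation (1): pred_C(i,j) = a * rec_L'(i,j) + b."""
    return [[a * y + b for y in row] for row in rec_l]
```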

[0045] In some embodiments, and for a coding block with a square shape, the above two equations are applied directly. In other embodiments, and for a non-square coding block, the neighboring samples of the longer boundary are first subsampled to have the same number of samples as for the shorter boundary. FIG. 1 shows the location of the left and above reconstructed samples and the sample of the current block involved in the CCLM mode.

[0046] In some embodiments, this regression error minimization computation is performed as part of the decoding process, not just as an encoder search operation, so no syntax is used to convey the a and b values.

[0047] In some embodiments, the CCLM prediction mode also includes prediction between the two chroma components, e.g., the Cr (red-difference) component is predicted from the Cb (blue-difference) component. Instead of using the reconstructed sample signal, the CCLM Cb-to-Cr prediction is applied in the residual domain. This is implemented by adding a weighted reconstructed Cb residual to the original Cr intra prediction to form the final Cr prediction:

[0048] pred*Cr(i,j) = predCr(i,j) + a · resiCb'(i,j)    (4)

[0049] Here, resiCb'(i,j) represents the reconstructed Cb residue sample at position (i,j).

[0050] In some embodiments, the scaling factor a may be derived in a similar way as in the CCLM luma-to-chroma prediction. The only difference is an addition of a regression cost relative to a default a value in the error function so that the derived scaling factor is biased towards a default value of -0.5 as follows:

[0052] Here, Cb(n) represents the neighboring reconstructed Cb samples, Cr(n) represents the neighboring reconstructed Cr samples, and λ is equal to Σ(Cb(n)·Cb(n)) >> 9.

[0053] In some embodiments, the CCLM luma-to-chroma prediction mode is added as one additional chroma intra prediction mode. At the encoder side, one more RD cost check for the chroma components is added for selecting the chroma intra prediction mode. When an intra prediction mode other than the CCLM luma-to-chroma prediction mode is used for the chroma components of a CU, CCLM Cb-to-Cr prediction is used for Cr component prediction.

1.2 Examples of multiple model CCLM

[0054] In the JEM, there are two CCLM modes: the single model CCLM mode and the multiple model CCLM mode (MMLM). As indicated by the name, the single model CCLM mode employs one linear model for predicting the chroma samples from the luma samples for the whole CU, while in MMLM, there can be two models.

[0055] In MMLM, neighboring luma samples and neighboring chroma samples of the current block are classified into two groups, and each group is used as a training set to derive a linear model (i.e., a particular a and b are derived for a particular group). Furthermore, the samples of the current luma block are also classified based on the same rule for the classification of neighboring luma samples.

[0056] FIG. 2 shows an example of classifying the neighboring samples into two groups. The threshold is calculated as the average value of the neighboring reconstructed luma samples. A neighboring sample with Rec'L[x,y] <= Threshold is classified into group 1, while a neighboring sample with Rec'L[x,y] > Threshold is classified into group 2.

[0057] PredC[x,y] = a1 × Rec'L[x,y] + b1, if Rec'L[x,y] <= Threshold;
PredC[x,y] = a2 × Rec'L[x,y] + b2, if Rec'L[x,y] > Threshold.
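The two-model classification can be sketched as follows, reusing derive_cclm_params() from the earlier sketch. This is an illustration only: the threshold and grouping follow the description above, while empty-group handling and fixed-point details are omitted.

```python
# Illustrative MMLM sketch: split neighbors at the mean luma value and fit one
# linear model per group (assumes both groups are non-empty).

def derive_mmlm_params(neigh_luma, neigh_chroma):
    threshold = sum(neigh_luma) / len(neigh_luma)
    g1 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l <= threshold]
    g2 = [(l, c) for l, c in zip(neigh_luma, neigh_chroma) if l > threshold]
    m1 = derive_cclm_params([l for l, _ in g1], [c for _, c in g1])
    m2 = derive_cclm_params([l for l, _ in g2], [c for _, c in g2])
    return threshold, m1, m2

def predict_chroma_mmlm(rec_l, threshold, m1, m2):
    (a1, b1), (a2, b2) = m1, m2
    return [[(a1 * y + b1) if y <= threshold else (a2 * y + b2) for y in row]
            for row in rec_l]
```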

1.3 Examples of downsampling filters in CCLM

[0058] In some embodiments, to perform cross-component prediction, for the 4:2:0 chroma format, where four luma samples correspond to one chroma sample, the reconstructed luma block needs to be downsampled to match the size of the chroma signal. The default downsampling filter used in CCLM mode is as follows:

[0060] Here, the downsampling assumes the "type 0" phase relationship as shown in FIG. 3A for the positions of the chroma samples relative to the positions of the luma samples, e.g., collocated sampling horizontally and interstitial sampling vertically.

[0061] The exemplary 6-tap downsampling filter defined in (6) is used as the default filter for both the single model CCLM mode and the multiple model CCLM mode.

[0062] In some embodiments, and for the MMLM mode, the encoder can alternatively select one of four additional luma downsampling filters to be applied for prediction in a CU, and send a filter index to indicate which of these is used. The four selectable luma downsampling filters for the MMLM mode, as shown in FIG. 3B, are as follows:

[0066] Rec'L[x,y] = (RecL[2x,2y] + RecL[2x,2y+1] + RecL[2x+1,2y] + RecL[2x+1,2y+1] + 2) >> 2    (11)
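For illustration, the selectable filter of equation (11) is a plain four-sample average with rounding. A minimal sketch (the array indexing is chosen to mirror the equation and is not taken from the specification) is:

```python
def downsample_eq11(rec_l, x, y):
    # Equation (11): rounded average of the four luma samples covering chroma position (x, y).
    return (rec_l[2*x][2*y] + rec_l[2*x][2*y + 1]
            + rec_l[2*x + 1][2*y] + rec_l[2*x + 1][2*y + 1] + 2) >> 2
```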

[0067] Extension of Multiple model CCLM in JVET-M0098

[0068] In JVET-M0098, MM-CCLM is extended. Two additional modes, named CCLM-L and CCLM-T, are added to the original MM-CCLM mode, which is named CCLM-TL. With CCLM-L, the linear parameters of the two models are derived only with the left neighbouring samples. With CCLM-T, the linear parameters of the two models are derived only with the top neighbouring samples.

2 Examples of drawbacks in existing implementations

[0069] The current CCLM implementations do not readily lend themselves to efficient hardware implementations, due to at least the following issues:

[0070] • More neighboring luma samples than what are used in normal intra-prediction are required. CCLM requires two above neighboring rows of luma samples and three left neighboring columns of luma samples. MM-CCLM requires four above neighboring rows of luma samples and four left neighboring columns of luma samples.

[0071] • Luma samples (for both neighboring luma samples used for parameter derivation and co-located luma reconstructed samples) need to be downsampled with a 6-tap filter which increases computational complexity.

3 Exemplary methods for simplified cross-component prediction in video coding

[0072] Embodiments of the presently disclosed technology overcome drawbacks of existing implementations, thereby providing video coding with higher coding efficiencies but lower computational complexity. Simplified cross-component prediction, based on the disclosed technology, may enhance both existing and future video coding standards, and is elucidated in the following examples described for various implementations. The examples of the disclosed technology provided below explain general concepts, and are not meant to be interpreted as limiting. In an example, unless explicitly indicated to the contrary, the various features described in these examples may be combined.

[0073] The proposed simplified CCLM methods include, but are not limited to:

[0074] • Only requiring neighboring luma samples used in normal intra-prediction; and

[0075] • Not needing to downsample the luma samples, or downsampling is performed by a simple two-sample averaging.

[0076] The examples described below assume that the color format is 4:2:0. As shown in FIG. 3A, one chroma (Cb or Cr) sample (represented by a triangle) corresponds to four luma (Y) samples (represented by circles): A, B, C and D, as shown in FIG. 4. FIGS. 5A and 5B show an example of samples of a 4x4 chroma block with neighboring samples, and the corresponding luma samples.

[0077] Example 1. In one example, it is proposed that CCLM is done without down-sampling filtering on luma samples.

[0078] (a) In one example, the down-sampling process of neighboring luma samples is removed in the CCLM parameter (e.g., a and b) derivation process. Instead, the down-sampling process is replaced by a sub-sampling process wherein non-consecutive luma samples are utilized.

[0079] (b) In one example, the down-sampling process of samples in the co-located luma block is removed in the CCLM chroma prediction process. Instead, only partial luma samples in the co-located luma block are used to derive the prediction block of chroma samples.

[0080] (c) FIGS. 6A-6J show examples on an 8x8 luma block corresponding to a 4x4 chroma block.

[0081] (d) In one example, as shown in FIG. 6A, the luma sample at position "C" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples are used in the training process to derive the linear model.

[0082] (e) In one example, as shown in FIG. 6B, the luma sample at position "C" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples and above-right neighboring samples are used in the training process to derive the linear model.

[0083] (f) In one example, as shown in FIG. 6C, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples are used in the training process to derive the linear model.

[0084] (g) In one example, as shown in FIG. 6D, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples and above-right neighboring samples are used in the training process to derive the linear model.

[0085] (h) In one example, as shown in FIG. 6E, the luma sample at position "B" in FIG. 4 is used to correspond to the chroma sample. The left neighboring samples are used in the training process to derive the linear model.

[0086] (i) In one example, as shown in FIG. 6F, the luma sample at position "B" in FIG. 4 is used to correspond to the chroma sample. The left neighboring samples and left-bottom neighboring samples are used in the training process to derive the linear model.

[0087] (j) In one example, as shown in FIG. 6G, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The left neighboring samples are used in the training process to derive the linear model.

[0088] (k) In one example, as shown in FIG. 6H, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The left neighboring samples and left-bottom neighboring samples are used in the training process to derive the linear model.

[0089] (l) In one example, as shown in FIG. 6I, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples and left neighboring samples are used in the training process to derive the linear model.

[0090] (m) In one example, as shown in FIG. 6J, the luma sample at position "D" in FIG. 4 is used to correspond to the chroma sample. The above neighboring samples, left neighboring samples, above-right neighboring samples and left-bottom neighboring samples are used in the training process to derive the linear model.
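To make the sub-sampling idea of Example 1 concrete, the hypothetical sketch below implements case (d)/FIG. 6A: position "C" of each 2x2 luma group stands in for the chroma sample, and the above neighboring row (sub-sampled, i.e., non-consecutive luma samples) provides the training set. It reuses derive_cclm_params() and predict_chroma() from the earlier sketch; the position mapping and helper names are assumptions, not the specification.

```python
def subsample_luma(rec_l, w_c, h_c, pos="C"):
    # A/B/C/D of FIG. 4 assumed to be top-left, top-right, bottom-left, bottom-right.
    dx, dy = {"A": (0, 0), "B": (1, 0), "C": (0, 1), "D": (1, 1)}[pos]
    return [[rec_l[2*i + dy][2*j + dx] for j in range(w_c)] for i in range(h_c)]

def cclm_no_downsampling(rec_l, above_luma, above_chroma, w_c, h_c):
    # Training on sub-sampled (non-consecutive) above neighboring luma samples.
    train_luma = above_luma[::2][:len(above_chroma)]
    a, b = derive_cclm_params(train_luma, above_chroma)
    return predict_chroma(subsample_luma(rec_l, w_c, h_c, pos="C"), a, b)
```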

[0091] Example 2. In one example, it is proposed that CCLM only requires neighbouring luma samples which are used in the normal intra-prediction process; that is, other neighboring luma samples are disallowed to be used in the CCLM process. In one example, CCLM is done with 2-tap filtering on luma samples. FIGS. 7A-7D show examples on an 8x8 luma block corresponding to a 4x4 chroma block.

[0092] (a) In one example, as shown in FIG. 7A, the luma samples at position "C" and position "D" in FIG. 4 are filtered as F(C, D) to be used to correspond to the chroma sample. The above neighboring samples are used in the training process to derive the linear model.

[0093] (b) In one example, as shown in FIG. 7B, the luma samples at position "C" and position "D" in FIG. 4 are filtered as F(C, D) to be used to correspond to the chroma sample. The above neighboring samples and above-right neighboring samples are used in the training process to derive the linear model.

[0094] (c) In one example, as shown in FIG. 7C, the luma samples at position "B" and position "D" in FIG. 4 are filtered as F(B, D) to be used to correspond to the chroma sample. The left neighboring samples are used in the training process to derive the linear model.

[0095] (d) In one example, as shown in FIG. 7D, the luma samples at position "B" and position "D" in FIG. 4 are filtered as F(B, D) to be used to correspond to the chroma sample. The left neighboring samples and left-bottom neighboring samples are used in the training process to derive the linear model.

[0096] (e) In one example, F is defined as F(X,Y) = (X+Y) >> 1. Alternatively, F(X,Y) = (X+Y+1) >> 1.
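A one-line sketch of the two-tap filter F from item (e); both variants given above are shown, and the rounded variant matches the averaging recited in claims 16-22. The names are illustrative.

```python
def two_tap(x, y, rounded=True):
    # F(X, Y) = (X + Y + 1) >> 1 with rounding, or (X + Y) >> 1 without.
    return (x + y + 1) >> 1 if rounded else (x + y) >> 1

# e.g., in case (a)/FIG. 7A the luma value matched to a chroma sample would be
# two_tap(C_sample, D_sample) for the samples at positions "C" and "D" of FIG. 4.
```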

[0097] Example 3. In one example, it is proposed that the proposed simplified CCLM methods (e.g., Examples 1 and 2) can be applied in a selective way. That is, different blocks within a region/slice/picture/sequence may choose different kinds of simplified CCLM methods.

[0098] (a) In one embodiment, the encoder selects one kind of simplified CCLM method from a predefined candidate set and signals it to the decoder.

[0099] (i) For example, the encoder can select between Example 1(a) and Example 1(e). Alternatively, it can select between Example 1(b) and Example 1(f). Alternatively, it can select between Example 1(c) and Example 1(g). Alternatively, it can select between Example 1(d) and Example 1(h). Alternatively, it can select between Example 2(a) and Example 2(c). Alternatively, it can select between Example 2(b) and Example 2(d).

[00100] (ii) The candidate set to be selected from and the signaling may depend on the shape or size of the block. Suppose W and H represent the width and height of the chroma block, and T1 and T2 are integers.

[00101] (1) In one example, if W<=T1 and H<=T2, there is no candidate, e.g., CCLM is disabled. For example, T1=T2=2.

[00102] (2) In one example, if W<=T1 or H<=T2, there is no candidate, e.g., CCLM is disabled. For example, T1=T2=2.

[00103] (3) In one example, if WxH<=T1, there is no candidate, e.g., CCLM is disabled. For example, T1=4.

[00104] (4) In one example, if W<=T1 and H<=T2, there is only one candidate such as Example 1(i). No CCLM method selection information is signaled. For example, T1=T2=4.

[00105] (5) In one example, if W<=T1 or H<=T2, there is only one candidate such as Example 1(i). No CCLM method selection information is signaled. For example, T1=T2=4.

[00106] (6) In one example, if WxH<=T1, there is only one candidate such as Example 1(i). No CCLM method selection information is signaled. For example, T1=16.

[00107] (7) In one example, if W > H, there is only one candidate such as Example 1(a). No CCLM method selection information is signaled. Alternatively, if W > H (or W > N * H, wherein N is a positive integer), only candidates (or some candidates) using above or/and above-right neighboring reconstructed samples in deriving CCLM parameters are included in the candidate set.

[00108] (8) In one example, if W < H, there is only one candidate such as Example 1(e). No CCLM method selection information is signaled. Alternatively, if W < H (or N * W < H), only candidates (or some candidates) using left or/and left-bottom neighboring reconstructed samples in deriving CCLM parameters are included in the candidate set.

[00109] (b) In one embodiment, both the encoder and decoder select a simplified CCLM method based on the same rule. The encoder does not signal it to the decoder. For example, the selection may depend on the shape or size of the block. In one example, if the width is larger than the height, Example 1(a) is selected; otherwise, Example 1(e) is selected.

[00110] (c) One or multiple sets of simplified CCLM candidates may be signaled in sequence parameter set/picture parameter set/slice header/CTUs/CTBs/groups of CTUs.
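As a rough illustration of how the shape- and size-dependent rules of Example 3 could be combined, the following hypothetical sketch mixes rules (1), (3), (7) and (8) from item (a)(ii); the thresholds and returned labels are illustrative only and nothing here is normative.

```python
def select_cclm_candidates(w, h, t1=2, t2=2, t_area=4):
    if (w <= t1 and h <= t2) or (w * h <= t_area):
        return []                           # CCLM disabled, nothing to signal
    if w > h:
        return ["above_only"]               # e.g., Example 1(a): above neighbors only
    if w < h:
        return ["left_only"]                # e.g., Example 1(e): left neighbors only
    return ["above_only", "left_only"]      # square block: choice is signaled
```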

[00111] Example 4. In one example, it is proposed that multiple CCLM methods (e.g., Examples 1 and 2) may be applied to the same chroma block. That is, one block within a region/slice/picture/sequence may choose different kinds of simplified CCLM methods to derive multiple intermediate chroma prediction blocks and the final chroma prediction block is derived from the multiple intermediate chroma prediction blocks.

[00112] (a) Alternatively, multiple sets of CCLM parameters (e.g., a and b ) may be firstly derived from multiple selected CCLM methods. One final set of CCLM parameters may be derived from the multiple sets and utilized for chroma prediction block generation process.

[00113] (b) The selection of multiple CCLM methods may be signaled (implicitly or explicitly) in a similar way as described in Example 3.

[00114] (c) Indication of the usage of the proposed method may be signaled in sequence parameter set/picture parameter set/slice header/groups of CTUs/CTUs/coding blocks.

[00115] Example 5. In one example, whether and how to apply the proposed simplified CCLM methods may depend on the position of the current block.

[00116] (a) In one example, one or more of the proposed methods is applied on CUs that are located at the top boundary of the current CTU, as shown in FIG. 8A.

[00117] (b) In one example, one or more of the proposed methods is applied on CUs that are located at the left boundary of the current CTU, as shown in FIG. 8B.

[00118] (c) In one example, one or more of the proposed methods is applied in both the above cases.

[00119] Example 6. In one example, luma samples are down-sampled to correspond to chroma samples in different ways when they are inside the current block or outside the current block. Furthermore, outside luma samples are down-sampled to correspond to chroma samples in different ways when they are to the left of the current block or above the current block.

[00120] a. In one example, luma samples are down-sampled as specified below, as shown in FIGS. 19A-19B:

[00121] i. Luma samples inside the current block are down-sampled in the same way as in JEM.

[00122] ii. Luma samples outside the current block and above the current block are down-sampled to position C or D.

[00123] 1. Alternatively, luma samples are down-sampled to position C with a filter. Suppose the luma samples above adjacent to the current block are denoted as a[i]; then d[i] = (a[2i-1] + 2*a[2i] + a[2i+1] + 2) >> 2, where d[i] represents the down-sampled luma samples.

[00124] a. If the sample a[2i-1] is unavailable, d[i] = (3*a[2i] + a[2i+1] + 2) >> 2;

[00125] b. If the sample a[2i+1] is unavailable, d[i] = (a[2i-1] + 3*a[2i] + 2) >> 2;

[00126] iii. Luma samples outside the current block and to the left of the current block are down-sampled to position B or D.

[00127] 1. Alternatively, luma samples are down-sampled to the half position between B and D. Suppose the luma samples left adjacent to the current block are denoted as a[j]; then d[j] = (a[2j] + a[2j+1] + 1) >> 1, where d[j] represents the down-sampled luma samples.
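A small sketch of the position-dependent neighbor down-sampling in bullets ii and iii above, including the edge handling for unavailable samples; the array layout and function names are illustrative assumptions.

```python
def downsample_above(a, i):
    # [1 2 1] filter toward position C, with fallbacks at the row ends.
    left = a[2*i - 1] if 2*i - 1 >= 0 else None
    right = a[2*i + 1] if 2*i + 1 < len(a) else None
    if left is None:
        return (3 * a[2*i] + right + 2) >> 2
    if right is None:
        return (left + 3 * a[2*i] + 2) >> 2
    return (left + 2 * a[2*i] + right + 2) >> 2

def downsample_left(a, j):
    # Two-sample average toward the half position between B and D.
    return (a[2*j] + a[2*j + 1] + 1) >> 1
```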

[00128] Example 7. In one example, the proposed luma down-sampling method can be applied to the LM mode in JEM or VTM, and it can also be applied to the MMLM mode in JEM. Besides, it can also be applied to the left-LM mode, which only uses left neighbouring samples to derive the linear model, or the above-LM mode, which only uses above neighbouring samples to derive the linear model.

[00129] Example 8. Boundary filtering can be applied to LM mode, MMLM mode, left-LM mode or above-LM mode, no matter what kind of down-sampling filter is applied.

[00130] a. Suppose the reconstructed chroma samples above adjacent to the current block are denoted as a[-1][j] and the LM predicted sample at the ith row and jth column is a[i][j]; then the prediction sample after boundary filtering is calculated as a'[i][j] = (w1*a[i][j] + w2*a[-1][j] + 2^(N-1)) >> N, where w1 + w2 = 2^N.

[00131] i. In one example, the boundary filtering is only applied if i<=K. K is an integer such as 0 or 1. For example, K = 0, w1=w2=1. In another example, K=0, w1=3, w2=1.

[00132] ii. In one example, w1 and w2 depend on the row index (i). For example, K=1, w1=w2=1 for samples a[0][j], but w1=3 and w2=1 for samples a[1][j].

[00133] b. Suppose the reconstructed chroma samples left adjacent to the current block are denoted as a[i][-1] and the LM predicted sample at the ith row and jth column is a[i][j]; then the prediction sample after boundary filtering is calculated as a'[i][j] = (w1*a[i][j] + w2*a[i][-1] + 2^(N-1)) >> N, where w1 + w2 = 2^N.

[00134] i. In one example, the boundary filtering is only applied if j<=K. K is an integer such as 0 or 1. For example, K = 0, w1=w2=1. In another example, K=0, w1=3, w2=1.

[00135] ii. In one example, w1 and w2 depend on the column index (j). For example, K=1, w1=w2=1 for samples a[i][0], but w1=3 and w2=1 for samples a[i][1].
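For illustration, the above-boundary variant of Example 8(a) can be sketched as follows; pred is the LM prediction block, above holds the reconstructed chroma row a[-1][·], and the argument layout is an assumption rather than the specification.

```python
def boundary_filter_above(pred, above, k=0, w1=3, w2=1):
    # Blend rows i <= k with the above-adjacent reconstructed chroma samples.
    n = (w1 + w2).bit_length() - 1           # assumes w1 + w2 == 2**n
    out = [row[:] for row in pred]
    for i in range(min(k + 1, len(pred))):
        for j in range(len(pred[i])):
            out[i][j] = (w1 * pred[i][j] + w2 * above[j] + (1 << (n - 1))) >> n
    return out
```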

[00136] FIG. 9 shows a flowchart of an exemplary method for cross-component prediction. The method 900 includes, at step 910, receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component.

[00137] The method 900 includes, at step 920, predicting, using a linear model, a first set of samples of the at least one chroma component based on a second set of samples that is selected by sub-sampling samples of the at least one luma component.

[00138] The method 900 includes, at step 930, processing, based on the first and second sets of samples, the bitstream representation to generate the current block.

[00139] In some embodiments, the method 900 further includes deriving, based on training samples, parameters of the linear model. For example, and in the context of Example 1, the training samples comprise one or more samples of a block of luma samples associated with a chroma sample that is a neighbor of the current block. In some embodiments, the block of luma samples is a 2x2 block of luma samples. In other embodiments, it may be of size 2^N x 2^N.

[00140] In some embodiments, at least one of the luma samples corresponds to the chroma sample, and the training samples may be selected as described in Examples 1(d)-1(m). For example, in some embodiments, the block of luma samples is a 2x2 block of luma samples. In some embodiments, a lower left sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise an upper neighboring sample of the lower left sample. In some embodiments, a lower left sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise an upper neighboring sample and an upper right neighboring sample of the lower left sample. In some embodiments, a lower right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise an upper neighboring sample and an upper right neighboring sample of the lower right sample. In some embodiments, an upper right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise a left neighboring sample of the upper right sample. In some embodiments, an upper right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise a left neighboring sample and a lower left neighboring sample of the upper right sample. In some embodiments, a lower right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise a left neighboring sample of the lower right sample. In some embodiments, a lower right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise a left neighboring sample and a lower left neighboring sample of the lower right sample. In some embodiments, a lower right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise a left neighboring sample and an upper neighboring sample of the lower right sample. In some embodiments, a lower right sample of the block of luma samples corresponds to the chroma sample, and the training samples comprise an upper neighboring sample, a left neighboring sample, an upper right neighboring sample and a lower left neighboring sample of the lower right sample.
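For illustration, the following sketch derives the linear-model parameters alpha and beta from such training pairs (one sub-sampled luma neighbour paired with each neighbouring chroma sample) using an ordinary least-squares fit. Real codec implementations use fixed-point arithmetic and specific normative sample positions; the floating-point form and the function name here are assumptions.

    def derive_alpha_beta(luma_train, chroma_train):
        """Least-squares fit of chroma ~ alpha * luma + beta over the
        training pairs."""
        n = len(luma_train)
        sum_l = sum(luma_train)
        sum_c = sum(chroma_train)
        sum_lc = sum(l * c for l, c in zip(luma_train, chroma_train))
        sum_ll = sum(l * l for l in luma_train)
        denom = n * sum_ll - sum_l * sum_l
        if denom == 0:                  # flat luma: fall back to a constant predictor
            return 0.0, sum_c / n
        alpha = (n * sum_lc - sum_l * sum_c) / denom
        beta = (sum_c - alpha * sum_l) / n
        return alpha, beta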

[00141] FIG. 10 shows a flowchart of another exemplary method for cross-component prediction. This example includes some features and/or steps that are similar to those shown in FIG. 9, and described above. At least some of these features and/or components may not be separately described in this section. The method 1000 includes, at step 1010, receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component.

[00142] The method 1000 includes, at step 1020, predicting, using a linear model, a first set of samples of the at least one chroma component based on a second set of samples that are neighboring samples and are used for an intra prediction mode of the at least one luma component.

[00143] The method 1000 includes, at step 1030, processing, based on the first and second sets of samples, the bitstream representation to generate the current block.

[00144] In some embodiments, the method 1000 further includes deriving, based on training samples, parameters of the linear model. For example, and in the context of Example 2, the training samples comprise one or more samples of a block of luma samples associated with a chroma sample that is a neighbor of the current block. In some embodiments, the block of luma samples is a 2x2 block of luma samples. In other embodiments, it may be of size 2^N x 2^N.

[00145] In some embodiments, a filtered sample based on two of the luma samples corresponds to the chroma sample, and the training samples may be selected as described in Examples 2(a)-2(d). For example, in some embodiments, the block of luma samples is a 2x2 block of luma samples. In some embodiments, a lower left sample and a lower right sample of the block of luma samples are filtered using a two-tap filter to generate a filtered sample that corresponds to the chroma sample, and wherein the training samples comprise an upper neighboring sample to each of the lower left sample and the lower right sample. In some embodiments, a lower left sample and a lower right sample of the block of luma samples are filtered using a two-tap filter to generate a filtered sample that corresponds to the chroma sample, and wherein the training samples comprise an upper neighboring sample and an upper right neighboring sample to each of the lower left sample and the lower right sample. In some embodiments, an upper right sample and a lower right sample of the block of luma samples are filtered using a two-tap filter to generate a filtered sample that corresponds to the chroma sample, and wherein the training samples comprise a left neighboring sample to each of the upper right sample and the lower right sample. In some embodiments, an upper right sample and a lower right sample of the block of luma samples are filtered using a two-tap filter to generate a filtered sample that corresponds to the chroma sample, and wherein the training samples comprise a left neighboring sample and a lower left neighboring sample to each of the upper right sample and the lower right sample.
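A minimal sketch of the two-tap pairing described above is given below; which two samples of the co-located 2x2 luma block are combined (lower-left and lower-right, or upper-right and lower-right) is an embodiment choice, so the function simply receives the two samples.

    def two_tap_luma(sample_a, sample_b):
        """Rounded average of two luma samples, e.g. (lower-left +
        lower-right + 1) >> 1, giving the single luma value that
        corresponds to one chroma sample."""
        return (sample_a + sample_b + 1) >> 1

The same pairing can be applied to the neighbouring training samples, so that each training luma value is formed from the same two positions as the value used for prediction.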

[00146] FIG. 11 shows a flowchart of another exemplary method for cross-component prediction. This example includes some features and/or steps that are similar to those shown in FIGS. 9 and 10, and described above. At least some of these features and/or components may not be separately described in this section. The method 1100 includes, at step 1110, receiving a bitstream representation of a picture segment comprising a plurality of blocks, the plurality of blocks comprising a current block, and each of the plurality of blocks comprising a chroma component and a luma component.

[00147] The method 1100 includes, at step 1120, performing a predicting step on each of the plurality of blocks.

[00148] The method 1100 includes, at step 1130, processing, based on the respective first and second sets of samples, the bitstream representation to generate the respective block of the plurality of blocks.

[00149] In some embodiments, and in the context of Example 3, the predicting step may be selected from the predicting step described in method 900, whereas in other embodiments, the predicting step may be selected from the predicting step described in method 1000.

[00150] FIG. 12 shows a flowchart of another exemplary method for cross-component prediction. This example includes some features and/or steps that are similar to those shown in FIGS. 9-11, and described above. At least some of these features and/or components may not be separately described in this section. The method 1200 includes, at step 1210, receiving a bitstream representation of a current block of video data comprising at least one luma component and at least one chroma component.

[00151] The method 1200 includes, at step 1220, performing, a predetermined number of times, a predicting step on the current block.

[00152] The method 1200 includes, at step 1230, generating a final first set of samples based on each of the predetermined number of first sets of samples.

[00153] The method 1200 includes, at step 1240, processing, based on at least the final first set of samples, the bitstream representation to generate the current block.

[00154] In some embodiments, and in the context of Example 4, the predicting step may be selected from the predicting step described in method 900, whereas in other embodiments, the predicting step may be selected from the predicting step described in method 1000.

[00155] In some embodiments, and in the context of Example 5, the performing the predicting step is based on a position of the current block in a current CTU. In one example, the position of the current block is at a top boundary, whereas in another example, the position of the current block is at a left boundary of the current CTU.
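One way to read method 1200 is sketched below: the predicting step is run a predetermined number of times, for instance with different training-sample sets, and the resulting chroma predictions are combined into the final first set of samples. The equal-weight average and the predict_fn callback are illustrative assumptions, not a required combination rule.

    def predict_with_multiple_models(luma_block, training_sets, predict_fn):
        """Run the LM predicting step once per training set and combine
        the resulting chroma predictions sample by sample."""
        predictions = [predict_fn(luma_block, ts) for ts in training_sets]
        n = len(predictions)
        height, width = len(predictions[0]), len(predictions[0][0])
        return [[(sum(p[y][x] for p in predictions) + n // 2) // n
                 for x in range(width)]
                for y in range(height)]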

[00156] In some embodiments, a method of video coding includes determining a dimension of a first video block, determining parameters regarding application of a CCLM prediction mode based on the determination of the dimension, and performing further processing of the first video block using the CCLM prediction mode in accordance with the parameters. In various embodiments, the CCLM mode may include one or more of CCLM-TL, CCLM-T, or CCLM-L.

[00157] In some embodiments, the CCLM prediction mode excludes CCLM-T based on a width of the dimension being less than or equal to a threshold value.

[00158] In some embodiments, the CCLM prediction mode excludes CCLM-L based on a height of the dimension being less than or equal to a threshold value.

[00159] In some embodiments, the CCLM prediction mode excludes CCLM-TL based on a height of the dimension being less than a first threshold value, and based on a width of the dimension being less than a second threshold value.

[00160] In some embodiments, the CCLM prediction mode excludes CCLM-TL based on a width of the dimension multiplied by a height of the dimension being less than or equal to a threshold value.

[00161] In some embodiments, a flag signaling that the CCLM prediction mode is CCLM-T is not signaled and is inferred to be 0 when CCLM-T cannot be applied.

[00162] In some embodiments, a flag signaling that the CCLM prediction mode is CCLM-L is not signaled and is inferred to be 0 when CCLM-L cannot be applied.

[00163] In some embodiments, a flag signaling that the CCLM prediction mode is CCLM-TL is not signaled and is inferred to be 0 when CCLM-TL cannot be applied.
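The dimension-based restrictions above can be collected into a small availability check, sketched below. The threshold names and default values are assumptions (the solution listing later uses example thresholds of 2, 4 and 16), and the CCLM-TL rule shown is only one of the two alternatives described; a variant that is unavailable has its flag omitted from the bitstream and inferred to be 0.

    def cclm_availability(width, height, t_w=2, t_h=2, t_area=16):
        """Return which CCLM variants may be signalled for a block of the
        given dimensions."""
        return {
            "CCLM-T": width > t_w,               # excluded when width <= threshold
            "CCLM-L": height > t_h,              # excluded when height <= threshold
            "CCLM-TL": width * height > t_area,  # area-based alternative for CCLM-TL
        }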

[00164] In some embodiments, the method further includes: determining a color format of the first video block, and wherein determining the parameters is based on the determination of the color format.

[00165] In some embodiments, the CCLM prediction mode is CCLM-T, and CCLM-T uses above reference samples to derive linear model parameters.

[00166] In some embodiments, the CCLM prediction mode is CCLM-L, and CCLM-L uses left reference samples to derive linear model parameters.

[00167] In some embodiments, the CCLM prediction mode is CCLM-T, and CCLM-T is used to derive multiple linear models from above reference samples.

[00168] In some embodiments, the CCLM prediction mode is CCLM-L, and CCLM-L is used to derive multiple linear models from left reference samples.
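For the multi-model variants, a simple way to obtain more than one linear model from a single row or column of reference samples is to split the training pairs by a luma threshold and fit one model per group, as sketched below. The mean-value threshold and the reuse of derive_alpha_beta from the earlier sketch are assumptions made for illustration only; they are not the normative MMLM derivation.

    def derive_two_models(luma_train, chroma_train):
        """Split the training pairs at the mean luma value and fit one
        linear model (alpha, beta) per group."""
        threshold = sum(luma_train) / len(luma_train)
        low = [(l, c) for l, c in zip(luma_train, chroma_train) if l <= threshold]
        high = [(l, c) for l, c in zip(luma_train, chroma_train) if l > threshold]
        models = []
        for group in (low, high):
            if group:
                ls, cs = zip(*group)
                models.append(derive_alpha_beta(list(ls), list(cs)))
        return threshold, models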

[00169] The following listing of technical solutions may be preferred implementations in some embodiments.

[00170] 1. A method for video processing (e.g., method 1400 shown in FIG. 14), comprising: determining (1402), using training samples of a luma block, a linear model used for predicting a first chroma block; and determining (1404), using a set of samples of the luma block and the linear model, samples of the first chroma block; wherein the training samples or the set of samples are determined without using a multi-tap downsampling filter.

[00171] 2. The method of solution 1, wherein the linear model comprises a first parameter, alpha, and a second parameter, beta, and wherein the alpha and beta are derived from the training samples by sub-sampling training samples.

[00172] 3. The method of solution 2, wherein the sub-sampling uses non-consecutive luma samples.

[00173] 4. The method of solution 1, wherein the set of samples correspond to luma samples co-located with the samples of the first chroma block.

[00174] 5. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block.

[00175] 6. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block.

[00176] 7. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00177] 8. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00178] 9. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block.

[00179] 10. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block.

[00180] 11. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00181] 12. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00182] 13. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above and left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00183] 14. The method of any of solutions 1-4, wherein the training samples correspond to neighboring samples above, left, above-right and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block.

[00184] The previous section provides additional features of the above solutions (e.g., item 1).

[00185] 15. A method for video processing (e.g., method 1500 shown in FIG. 15), comprising: determining (1502), using training samples of a luma block, a linear model used for predicting a first chroma block; and determining (1504), using a set of samples of the luma block and the linear model, samples of the first chroma block; wherein the training luma samples are limited to luma samples neighboring the luma block that are at positions used for an intra prediction process.

[00186] 16. The method of solution 15, wherein a chroma sample in the first chroma block is determined by applying a 2-tap filter to a first sample and a second sample in the set of samples of the luma block at a position corresponding to a position of the chroma block.

[00187] 17. The method of solution 16, wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above neighboring samples.

[00188] 18. The method of solution 16, wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above and above-right neighboring samples.

[00189] 19. The method of solution 16, wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left neighboring samples.

[00190] 20. The method of solution 16, wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left and left-bottom neighboring samples.

[00191] 21. The method of any of solutions 16 to 20, wherein the two-tap filter is an averaging filter.

[00192] 22. The method of any of solutions 16 to 21, wherein the two-tap filter averages the sum of one plus the first sample plus the second sample, i.e., computes (first sample + second sample + 1) >> 1.

[00193] The previous section provides additional features of the above solutions (e.g., item 1).

[00194] 23. A method of video processing (e.g., method 1600 depicted in FIG. 16), comprising: determining (1602), for a conversion between a current video block and a bitstream representation of the current video block, selectively based on a rule related to the current video block comprising a luma block and a first chroma block, a cross-component prediction scheme used for generating samples of the first chroma block from a set of samples of the luma block; and generating (1604) the samples of the first chroma block from the cross-component prediction scheme, wherein the cross-component prediction scheme is one of: a first cross-component prediction scheme that uses a linear model generated from training samples of the luma block such that the training samples and the set of samples are determined without using a multi-tap downsampling filter; or a second cross-component prediction scheme in which the training luma samples are limited to luma samples neighboring the current video block at positions used for an intra prediction process; wherein the rule specifies to select the cross-component prediction scheme depending on a video region to which the current video block belongs.

[00195] 24. The method of solution 23, wherein the cross-component prediction scheme selected for the current video block is different from another cross-component prediction scheme selected for another video block in the video region.

[00196] 25. The method of solution 24, wherein the video region comprises a video slice.

[00197] 26. The method of solution 24, wherein the video region comprises a video picture.

[00198] 27. The method of solution 24, wherein the video region comprises a sequence of video pictures.

[00199] 28. The method of solution 23, wherein the coding condition is signaled in the coded representation for the current video block.

[00200] 29. The method of solution 23, wherein the coding condition is not explicitly signaled in the coded representation for the current video block.

[00201] 30. The method of solution 28 or 29, wherein the rule specifies to select the cross-component prediction scheme based on a shape or a size of the current video block.

[00202] 31. The method of any of solutions 28 or 29, wherein: the training samples are selected from:

[00203] neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block;

[00204] neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower left pixel of the luma block;

[00205] neighboring samples above the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

[00206] neighboring samples above and above-right of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

[00207] neighboring samples to the left of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block;

[00208] neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to an upper right pixel of the luma block;

[00209] neighboring samples to the left and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block;

[00210] neighboring samples above and left of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block; or

[00211] neighboring samples above, left, above-right and left-bottom of the luma block and wherein the set of samples of the luma block corresponds to a lower right pixel of the luma block; and/or

[00212] wherein the first chroma block is determined by applying a 2-tap filter to a first sample and a second sample in the set of samples of the luma block at a position corresponding to a position of the first chroma block; wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above neighboring samples; wherein the first sample is a lower left sample and the second sample is a lower right sample and wherein the training samples correspond to above and above-right neighboring samples; wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left neighboring samples; or wherein the first sample is an upper right sample and the second sample is a lower right sample and wherein the training samples correspond to left and left-bottom neighboring samples.

[00213] 32. The method of solution 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies disabling the cross-component prediction scheme due to W <= T1 and H <= T2, where T1 and T2 are integers.

[00214] 33. The method of solution 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies disabling the cross-component prediction scheme due to W*H <= T1.

[00215] 34. The method of solution 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W <= T1 and H <= T2, where T1 and T2 are integers, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

[00216] 35. The method of solution 30, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W <= T1 and H <= T2, where T1 and T2 are integers, and wherein the specific cross-component prediction scheme is not signaled in the coded representation.

[00217] 36. The method of solution 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W*H <= T1, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

[00218] 37. The method of solution 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W > N*H, and wherein the specific cross-component prediction scheme is not signaled in the coded representation, where N is an integer.

[00219] 38. The method of solution 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W > N*H, and wherein the specific cross-component prediction scheme uses only pixels from above or above-right neighboring samples, and wherein the specific cross-component prediction scheme is signaled in the coded representation.

[00220] 39. The method of solution 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to N*W < H, and wherein the specific cross-component prediction scheme is signaled in the coded representation, where N is an integer.

[00221] 40. The method of solution 32, wherein the current video block has a width of W pixels and a height of H pixels, and wherein the rule specifies selecting a specific cross-component prediction scheme due to W > N*H, and wherein the specific cross-component prediction scheme uses only pixels from left or left-bottom neighboring samples.

[00222] 41. The method of any of solutions 34 to 40, wherein T1 = 2.

[00223] 42. The method of any of solutions 10 to 17, wherein T2 = 2.

[00224] 43. The method of any of solutions 10 to 17, wherein T1 = 4.

[00225] 44. The method of any of solutions 10 to 17, wherein T2 = 4.

[00226] 45. The method of any of solutions 10 to 17, wherein T2 = 16.

[00227] The previous section provides additional features of the above solutions (e.g., item 3).

[00228] 46. A method of video processing (e.g., method 1700 depicted in FIG. 17), comprising: determining (1702), for a conversion between a video block of a video and a coded representation of the video block, samples of a first chroma block of the video block from a luma block of the video block; wherein the samples of the first chroma block correspond to a weighted combination of a first intermediate chroma block and N second intermediate chroma blocks, wherein the first intermediate chroma block is generated from a first set of samples of the luma block using a first cross-component linear model, wherein the first cross-component linear model is generated using a first training sequence of luma samples; and wherein the N second intermediate chroma blocks are generated from N second sets of samples of the luma block using N second cross-component linear models, wherein the N second cross-component linear models are generated using N second training sequences of luma samples, where N is an integer.

[00229] 47. The method of solution 46, wherein at least some of the first cross-component linear model and N second cross-component linear models are different from each other within a region of the video, where the region of video corresponds to a slice or a picture or a video sequence.

[00230] 48. The method of any of solutions 46-47, wherein a coded representation includes an identification of the first cross-component linear model and the N second cross-component linear models.

[00231] 49. The method of solution 47, wherein the identification is included in a sequence parameter set level or a picture parameter set level or a slice header or a group of coding tree units or a coding tree unit or a coding unit level.

[00232] 50. The method of any of solutions 46-49, wherein the coded representation includes an indication of the first cross-component linear model and/or the N second cross-component linear models based on a rule that depends on a width W of the video block or a height H of the video block.

[00233] 51. The method of solution 50, wherein the indication is included at a sequence parameter set level or a picture parameter set level or a slice header level or a group of coding tree units level or a coding tree unit level or a coding block level.

[00234] The previous section provides additional features of the above solutions (e.g., item 4).

[00235] 52. The method of any of solutions 1-51, wherein the method is selectively applied to the current video block based on an applicability rule related to the current video block satisfying a position condition within a current picture.

[00236] 53. The method of solution 52, wherein the applicability rule specifies to apply a method of above solutions due to the video block being located at a top boundary of a coding tree unit.

[00237] 54. The method of solution 52 or 53, wherein the applicability rule specifies to apply a method of above solutions due to the current video block being located at a left boundary of a coding tree unit.

[00238] The previous section provides additional features of the above solutions (e.g., item 5).

[00239] 55. A method of any of solutions 1-54, wherein the set of samples of the luma block is generated using a downsampling process wherein a first downsampling filter is used for downsampling luma samples inside the video block and a second downsampling filter is applied to luma samples outside the video block to generate the set of samples of the luma block.

[00240] 56. The method of solution 55, wherein the first downsampling filter corresponds to a legacy filter used by the Joint Exploration Model (JEM).

[00241] 57. The method of any of solutions 55-56, wherein the second downsampling filter downsamples luma samples above the video block to lower left and lower right positions.

[00242] 58. The method of solution 55, wherein luma samples above and adjacent to the video block are denoted as a[i], then d[i] = (a[2i-1] + 2*a[2i] + a[2i+1] + 2) >> 2, where d[i] represents the down-sampled luma samples and i is a variable representing horizontal sample offset.

[00243] 59. The method of solution 58, wherein, if the sample a[2i-1] is unavailable, d[i] is determined as d[i] = (3*a[2i] + a[2i+1] + 2) >> 2.

[00244] 60. The method of solution 58, wherein, if the sample a[2i+1] is unavailable, then d[i] = (a[2i-1] + 3*a[2i] + 2) >> 2.

[00245] 61. The method of any of solutions 55-56, wherein the second downsampling filter downsamples luma samples outside the video block and to its left to top-right and bottom-right positions.

[00246] 62. The method of any of solutions 55-56, wherein the second downsampling filter downsamples luma samples outside the video block and to its left to a position midway between the top-right and bottom-right positions, such that if the left-adjacent samples to the current block are denoted as a[j], then d[j] = (a[2j] + a[2j+1] + 1) >> 1, where d[j] represents the down-sampled luma samples.

[00247] The previous section provides additional features of the above solutions (e.g., item 5).

[00248] 63. A method of video processing (e.g., method 1800 depicted in FIG. 18), comprising: downsampling (1802) a set of samples of a luma block using a downsampling process, wherein a downsampling filter used in the downsampling process depends on positions of luma samples being downsampled to generate a set of downsampled samples of the luma block; and determining (1804), from the set of downsampled samples of the luma block, samples of a first chroma block using a linear model.

[00249] 64. The method of solution 63, wherein the linear model is defined by a first parameter alpha and a second parameter beta, and wherein a chroma sample predC(i, j) at a position (i, j) is determined from a luma sample recL(i, j) as

[00250] predC(i, j) = alpha * recL(i, j) + beta.

[00251] The previous section provides additional features of the above solutions (e.g., item 7).

[00252] 65. The method of any of solutions 63-64, further including applying a boundary filter to the samples of the first chroma block that are in a region of the current video block.

[00253] 66. The method of solution 65, wherein samples above and adjacent to the video block are denoted as a[-1][j], the sample in the i-th row and j-th column of the first chroma block is a[i][j], and the applying of the boundary filter is performed selectively based on a value of i and comprises calculating a'[i][j] = (w1 * a[i][j] + w2 * a[-1][j] + 2^(N-1)) >> N, where w1 and w2 are weights with w1 + w2 = 2^N.

[00254] 67. The method of solution 66, wherein the applying of the boundary filtering is performed only for i <= K.

[00255] 68. The method of solution 66, wherein w1 and w2 are a function of the row index i.

[00256] 69. The method of solution 65, wherein left-adjacent samples to the video block are denoted as a[i][-1], the sample in the i-th row and j-th column of the first chroma block is a[i][j], and the applying of the boundary filter is performed selectively based on a value of j and comprises calculating a'[i][j] = (w1 * a[i][j] + w2 * a[i][-1] + 2^(N-1)) >> N, where w1 + w2 = 2^N.

[00257] 70. The method of solution 69, wherein the applying of the boundary filtering is performed only for j <= K.

[00258] 71. The method of solution 70, wherein w1 and w2 are a function of the column index j.

[00259] 72. The method of solution 67 or 70, wherein K = 0 and w1 = w2 = 1.

[00260] 73. The method of solution 67 or 70, wherein K = 0, w1 = 3, and w2 = 1.

[00261] The previous section provides additional features of the above solutions (e.g., item 8).

[00262] 74. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of solutions 1 to 73.

[00263] 75. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of solutions 1 to 73.

[00264] 76. A video decoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 73.

[00265] 77. A video encoding apparatus comprising a processor configured to implement a method recited in one or more of solutions 1 to 73.

[00266] 78. A method, apparatus or system described in the present document.

[00267] 4 Example implementations of the disclosed technology

[00268] FIG. 13 is a block diagram of a video processing apparatus 1300. The apparatus 1300 may be used to implement one or more of the methods described herein. The apparatus 1300 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 1300 may include one or more processors 1302, one or more memories 1304 and video processing hardware 1306. The processor(s) 1302 may be configured to implement one or more methods (including, but not limited to, methods 900, 1000,... to 1800) described in the present document. The memory (memories) 1304 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 1306 may be used to implement, in hardware circuitry, some techniques described in the present document.

[00269] In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to FIG. 13.

[00270] FIG. 20 is a block diagram showing an example video processing system 2000 in which various techniques disclosed herein may be implemented. Various implementations may include some or all of the components of the system 2000. The system 2000 may include input 2002 for receiving video content. The video content may be received in a raw or uncompressed format, e.g., 8 or 10 bit multi-component pixel values, or may be in a compressed or encoded format. The input 2002 may represent a network interface, a peripheral bus interface, or a storage interface. Examples of network interface include wired interfaces such as Ethernet, passive optical network (PON), etc. and wireless interfaces such as Wi-Fi or cellular interfaces.

[00271] The system 2000 may include a coding component 2004 that may implement the various coding or encoding methods described in the present document. The coding component 2004 may reduce the average bitrate of video from the input 2002 to the output of the coding component 2004 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 2004 may be either stored, or transmitted via a communication connection, as represented by the component 2006. The stored or communicated bitstream (or coded) representation of the video received at the input 2002 may be used by the component 2008 for generating pixel values or displayable video that is sent to a display interface 2010. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.

[00272] Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or DisplayPort, and so on. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.

[00273] Examples of simulation results

[00274] The various CCLM methods were implemented on VTM-2.0.1. For testing the effectiveness of the proposed methods, two solutions were tested.

[00275] The two solutions differed in the sets of neighbouring luma samples used. Suppose the width and height of a block are denoted as W and H, respectively. In solution #1, W above neighbouring samples and H left neighbouring samples were involved in the training process for deriving the alpha and beta parameters. In solution #2, 2W above neighbouring samples and 2H left neighbouring samples were used for training.
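For illustration, the difference between the two tested solutions reduces to how many neighbouring samples are gathered for the training step, as sketched below; get_above and get_left are placeholder accessors for the reconstructed neighbouring samples and are not part of the tested software.

    def collect_training_samples(w, h, get_above, get_left, extended=False):
        """Gather the neighbouring samples used to derive alpha and beta.
        Solution #1: W above + H left samples.
        Solution #2 (extended=True): 2W above + 2H left samples."""
        n_above = 2 * w if extended else w
        n_left = 2 * h if extended else h
        above = [get_above(x) for x in range(n_above)]
        left = [get_left(y) for y in range(n_left)]
        return above + left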

[00276] Test #1 and Test #2 were conducted for solution #1 and solution #2, respectively. Simulation results of the All Intra (AI) and Random Access (RA) configurations for Test 1 and Test 2 are summarized in Table 1 and Table 2, respectively.

[00277] Table 1: Results for Test 1

[00278] Table 2: Results of Test 2

[00279] From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.

[00280] Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.

[00281] A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code).

A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.

[00282] The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).

[00283] Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices.

Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.

[00284] It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise.

[00285] While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.

[00286] Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.

[00287] Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.