

Title:
METHOD AND APPARATUS FOR ENCODING AND DECODING A VIDEO
Document Type and Number:
WIPO Patent Application WO/2018/065511
Kind Code:
A1
Abstract:
Method and apparatus for encoding and decoding a video. A method and an apparatus for encoding a video are disclosed. Such a method comprises, for at least one block having a size N which is not a power of 2 along at least one dimension: - determining (40) a predicted block for said at least one block, - obtaining (41) a residual block from said at least one block and said predicted block, - performing (42) block transform of said residual block, said residual block having a size N, - encoding (43) said transformed residual block. Corresponding method and apparatus for decoding a video are also disclosed.

Inventors:
LELEANNEC FABRICE (FR)
POIRIER TANGI (FR)
VIELLARD THIERRY (FR)
Application Number:
PCT/EP2017/075326
Publication Date:
April 12, 2018
Filing Date:
October 05, 2017
Assignee:
THOMSON LICENSING (FR)
International Classes:
H04N19/60; H04N19/122; H04N19/147; H04N19/176
Foreign References:
US20100266008A12010-10-21
US20100172409A12010-07-08
Other References:
ANONYMOUS: "Butterfly diagram - Wikipedia", 7 April 2016 (2016-04-07), XP055363083, Retrieved from the Internet [retrieved on 20170407]
Attorney, Agent or Firm:
HUCHET, Anne et al. (FR)
Claims

1. A method for encoding a video comprising, for at least one block having a size N which is not a power of 2 along at least one dimension:

- determining (40) a predicted block for said at least one block,

- obtaining (41) a residual block from said at least one block and said predicted block,

- performing (42) block transform of said residual block, said residual block having a size N,

- encoding (43) said transformed residual block,

wherein butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3 are represented by:

E1 = x1 + x3,

E2 = x1 - x3,

t1 = (E1 + x2) × A3(1,1),

t2 = E2 × A3(2,1),

t3 = E1 × A3(3,1) - x2, where (x1, x2, x3) represents said sample vector of size 3 from said spatial domain, (t1, t2, t3) represents a resulting sample vector of size 3 from said transform domain, E1 and E2 represent intermediate values for butterfly design used for computing samples from said transform domain, and A3(k,j) represents corresponding values of said transform matrix.

2. The method according to claim 1, wherein N is a multiple of 3.

3. The method for encoding according to claim 2, wherein performing block transform of said residual block comprises at least performing butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3, wherein said butterfly operations implement a transform matrix of size 3x3, said sample vector comprising:

- samples of said residual block along said at least one dimension in the case where N equals 3, and

- linear combinations of samples of said residual block taken along said at least one dimension in the case where N is higher than 3.

4. A method according to any one of claims 1 to 3, wherein said block transform is based on a transform matrix AN represented by:

AN = ( √(2/N) × c(k) × cos( ((2j+1) × k × π) / 2N ) )k,j∈[0,N-1], with k an integer, k ≥ 0, and c(k) = 1/√2 if k = 0, 1 if k > 0.

5. The method according to claim 4, said method further comprising, for N > 3:

- performing butterfly operations converting from a spatial domain to a transform domain a sample vector of size N/2, wherein said butterfly operations implement a complementary matrix transform XN represented by:

XN = ( cos( ((2j+1) × (2k+1) × π) / 2N ) )k,j∈[0,N/2[.

6. The method according to claim 5, wherein butterfly operations converting from a spatial domain to a transform domain a sample vector of size 6 comprise at least the following operations:

E1 = X6(1,1) × v1 + X6(3,1) × v3,

E2 = X6(2,1) × v2,

E3 = X6(3,1) × v1 + X6(1,1) × v3,

u1 = E1 + E2,

u2 = E1 - E2 - E3,

u3 = -E2 + E3, where (v1, v2, v3) is obtained from said sample vector of size 6 from said spatial domain, E1, E2 and E3 represent intermediate values for butterfly design further used for computing transformed samples from said transformed residual block, X6(k,j) represents corresponding values of the complementary matrix transform and (u1, u2, u3) is the resulting vector of samples in the transform domain.

7. The method according to claim 4, wherein, for N > 3, a butterfly implementation of said matrix transform AN is based on a matrix Pl(AN) corresponding to a matrix wherein the N/2 first lines of Pl(AN) correspond to odd lines of AN and the N/2 last lines of Pl(AN) correspond to even lines of AN.

8. The method according to claim 5 or 7, wherein said matrix Pl(AN) is represented by:

Pl(AN) = ( AN/2   ÃN/2 )
         ( XN    -X̃N  )

where ÃN/2 represents a vertically flipped version of the matrix AN/2, and -X̃N represents the opposite of the vertically flipped version of said complementary matrix transform XN.

9. A method for decoding a video comprising, for at least one block having a size N which is not a power of 2 along at least one dimension:

- decoding (50) a transformed residual block,

- performing (51) inverse block transform of said transformed residual block, said residual block having a size N,

- determining (52) a predicted block for said at least one block,

- reconstructing (53) said at least one block from said inverse transformed residual block and said predicted block,

wherein butterfly operations converting from a transform domain to a spatial domain a sample vector of size 3 are represented by:

E1 = t1 × S3(1,1),

E2 = t2 × S3(1,2),

E3 = t3 × S3(1,3),

x1 = E1 + E2 + E3,

x2 = E1 - t3,

x3 = E1 - E2 + E3, where (x1, x2, x3) represents a resulting sample vector of size 3 from said spatial domain, (t1, t2, t3) represents said sample vector of size 3 from said transform domain, E1, E2 and E3 represent intermediate values for butterfly design used for computing samples from said spatial domain, and S3(k,j) represents corresponding values of the transform matrix.

10. The method according to claim 9, wherein N is a multiple of 3.

11. The method according to claim 10, wherein performing inverse block transform of said transformed residual block comprises at least performing butterfly operations converting from a transform domain to a spatial domain a sample vector of size 3, wherein said butterfly operations implement a transform matrix of size 3x3, said sample vector comprising:

- samples of said transformed residual block along said at least one dimension in the case where N equals 3, and

- linear combinations of samples of said transformed residual block taken along said at least one dimension in the case where N is higher than 3.

12. A method according to any one of claims 9 to 11, wherein said inverse block transform is based on a transform matrix SN represented by:

SN = ( √(2/N) × c(k) × cos( ((2j+1) × k × π) / 2N ) )j,k∈[0,N-1], with k an integer, k ≥ 0, and c(k) = 1/√2 if k = 0, 1 if k > 0.

13. The method according to claim 12, said method further comprising, for N > 3:

- performing butterfly operations converting from a transform domain to a spatial domain a sample vector of size N/2, wherein said butterfly operations implement a complementary matrix transform XN represented by:

XN = ( cos( ((2j+1) × (2k+1) × π) / 2N ) )k,j∈[0,N/2[.

14. The method according to claim 13, wherein butterfly operations converting from a transform domain to a spatial domain a sample vector of size 6 comprise at least the following operations:

E1 = X6(1,1) × u1 + X6(3,1) × u3,

E2 = X6(2,1) × u2,

E3 = X6(3,1) × u1 + X6(1,1) × u3,

v1 = E1 + E2,

v2 = E1 - E2 - E3,

v3 = -E2 + E3, where (u1, u2, u3) represents said sample vector of size 3 from said transform domain, E1, E2 and E3 represent intermediate values for butterfly design further used for computing transformed samples from said transformed residual block, X6(k,j) represents corresponding values of the complementary matrix transform and (v1, v2, v3) is the resulting vector of samples in the spatial domain.

15. The method according to any one of claims 5 and 13, wherein said butterfly operations implementing said matrix transform XN use linear combinations of columns from said matrix transform XN.

16. The method according to claim 12, wherein, for N > 3, a butterfly implementation of said matrix transform SN is based on a matrix Pc(SN) corresponding to a matrix wherein the N/2 first columns of Pc(SN) correspond to even columns of SN and the N/2 last columns of Pc(SN) correspond to odd columns of SN.

17. A method according to claim 13 or 16, wherein said matrix Pc(SN) is represented by:

Pc(SN) = ( SN/2   XN  )
         ( S̃N/2  -X̃N )

where S̃N/2 represents a horizontally flipped version of the matrix SN/2, and -X̃N represents the opposite of the horizontally flipped version of said complementary matrix transform XN.

18. An apparatus for encoding a video comprising, for at least one block having a size N which is not a power of 2 along at least one dimension:

- means for determining a predicted block for said at least one block,

- means for obtaining a residual block from said at least one block and said predicted block,

- means for performing block transform of said residual block, said residual block having a size N,

- means for encoding said transformed residual block,

wherein butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3 are represented by:

E1 = x1 + x3 ,

E2 = x±— Xj ,

t = (E1 + X2) X ^3 (1,1),

t2 = E2 x .43 (2,1),

t3 = E1 x A3 (3,l) - x2, where represents said sample vector of size 3 from said spatial domain, represents a resulting sample vector of size 3 from said transform domain, Ex and E2 represent intermediate values for butterfly design used for computing samples from said transform domain, A3 (k,j) represent corresponding values of said transform matrix.

19. An apparatus for decoding a video comprising, for at least one block having a size N which is not a power of 2 along at least one dimension:

- means for decoding a transformed residual block,

- means for performing inverse block transform of said transformed residual block, said residual block having a size N,

- means for determining a predicted block for said at least one block,

- means for reconstructing said at least one block from said inverse transformed residual block and said predicted block,

wherein butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3 are represented by:

E1 = x1 + x3,

E2 = x1 - x3,

t1 = (E1 + x2) × A3(1,1),

t2 = E2 × A3(2,1),

t3 = E1 × A3(3,1) - x2, where (x1, x2, x3) represents said sample vector of size 3 from said spatial domain, (t1, t2, t3) represents a resulting sample vector of size 3 from said transform domain, E1 and E2 represent intermediate values for butterfly design used for computing samples from said transform domain, and A3(k,j) represents corresponding values of said transform matrix.

20. A computer program comprising software code instructions for performing the method according to any one of claims 1 to 17, when the computer program is executed by a processor.

Description:
Method and apparatus for encoding and decoding a video.

1. Technical field

A method and an apparatus for encoding a video into a bitstream are disclosed. Corresponding decoding method and apparatus are further disclosed.

2. Background

For coding a picture of a video sequence, video compression methods usually divide the picture into a set of blocks of pixels. Each block is then predicted using already reconstructed information, corresponding to the blocks previously encoded/decoded in the current picture. The coding of a current block is performed using an intra or inter prediction of the current block, and a prediction residual or "residual block", corresponding to the difference between the current block and the predicted block, is computed. The resulting residual block is then converted, for example by using a transform such as a DCT (discrete cosine transform) type transform. The coefficients of the transformed residual block are then quantized, encoded by entropy coding and transmitted to a decoder.

In the HEVC video compression standard ("ITU-T H.265 Telecommunication standardization sector of ITU (10/2014), series H: audiovisual and multimedia systems, infrastructure of audiovisual services - coding of moving video, High efficiency video coding, Recommendation ITU-T H.265"), a picture is divided into Coding Tree Units (CTU), whose size may be 64x64, 128x128 or 256x256 pixels. Each CTU may be further subdivided using a quad-tree division, where each leaf of the quad-tree is called a Coding Unit (CU). Each CU is then given some intra or inter prediction parameters. To do so, a CU is spatially partitioned into one or more Prediction Units (PU); a PU may have a square or a rectangular shape. Each PU is assigned some prediction information, such as motion information or a spatial intra prediction mode. According to the HEVC video compression standard, each CU may be further subdivided into Transform Units (TU) for performing the transform of the prediction residual. However, only square transform supports are defined in the HEVC video compression standard, as illustrated in Figure 1A. In Figure 1A, solid lines indicate CU boundaries and dotted lines indicate TU boundaries.

A Quad-Tree plus Binary-Tree (QTBT) coding tool ("Algorithm Description of Joint Exploration Test Model 3", Document JVET-C1001_v3, Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11, 3rd meeting, 26th May - 1st June 2016, Geneva, CH) provides a more flexible CTU representation than the CU/PU/TU arrangement of the HEVC standard. The QTBT coding tool consists of a coding tree where coding units can be split both in a quad-tree and in a binary-tree fashion. Such a coding tree representation of a Coding Tree Unit is illustrated in Figure 1B, where solid lines indicate quad-tree partitioning and dotted lines indicate binary partitioning of a CU.

The splitting of a coding unit is decided on the encoder side through a rate-distortion optimization procedure, which consists in determining the QTBT representation of the CTU with minimal rate-distortion cost. In the QTBT representation, a CU has either a square or a rectangular shape. The size of a coding unit is always a power of 2, and typically goes from 4 to 128. The QTBT decomposition of a CTU is made of two stages: first the CTU is split in a quad-tree fashion, then each quad-tree leaf can be further divided in a binary or quad-tree fashion, as illustrated in Figure 1C, where solid lines represent the quad-tree decomposition phase and dotted lines represent the binary decomposition that is spatially embedded in the quad-tree leaves. With the QTBT representation, a CU is no longer partitioned into PUs or TUs. The transform of the prediction residual is performed on blocks whose size is a power of 2, so existing separable transforms and their fast implementations, usually used for square blocks, can be re-used. However, such a QTBT representation does not allow for asymmetric splitting of a CU.

3. Summary

According to an aspect of the disclosure, a method for encoding a video is disclosed. Such a method comprises, for at least one block having a size N which is not a power of 2 along at least one dimension:

- determining a predicted block for said at least one block,

- obtaining a residual block from said at least one block and said predicted block,

- performing block transform of said residual block, said residual block having a size N,

- encoding said transformed residual block.

According to an embodiment, N is a multiple of 3.

According to another embodiment, performing block transform of said residual block comprises at least performing butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3, wherein said butterfly operations implement a transform matrix of size 3x3, said sample vector comprising:

- samples of said residual block along said at least one dimension in the case where N equals 3, and

- linear combinations of samples of said residual block taken along said at least one dimension in the case where N is higher than 3.

According to another embodiment, said block transform is based on a transform matrix AN represented by:

AN = ( √(2/N) × c(k) × cos( ((2j+1) × k × π) / 2N ) )k,j∈[0,N-1], with k an integer, k ≥ 0, and c(k) = 1/√2 if k = 0, 1 if k > 0.

According to a variant, said method further comprises, for N > 3:

- performing butterfly operations converting from a spatial domain to a transform domain a sample vector of size N/2, wherein said butterfly operations implement a complementary matrix transform XN represented by:

XN = ( cos( ((2j+1) × (2k+1) × π) / 2N ) )k,j∈[0,N/2[.

This embodiment allows providing a fast implementation of the transform when N is higher than 3. Therefore, computational resources are saved.
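Under this reading of the two definitions, the matrices can be generated and cross-checked in a few lines of Python. This is an illustrative sketch, not part of the disclosed method: the function names A and X are ours, and the plain-cosine form of XN (without a √(2/N) factor, which is not fully legible in the source) is an assumption.

```python
import math

def A(N):
    """AN(k, j) = sqrt(2/N) * c(k) * cos((2j+1)*k*pi / 2N), k, j in [0, N-1],
    with c(0) = 1/sqrt(2) and c(k) = 1 for k > 0 (DCT-II form)."""
    c = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    return [[math.sqrt(2 / N) * c(k) * math.cos((2 * j + 1) * k * math.pi / (2 * N))
             for j in range(N)] for k in range(N)]

def X(N):
    """Complementary matrix XN(k, j) = cos((2j+1)*(2k+1)*pi / 2N), k, j in [0, N/2[."""
    return [[math.cos((2 * j + 1) * (2 * k + 1) * math.pi / (2 * N))
             for j in range(N // 2)] for k in range(N // 2)]

A6, X6 = A(6), X(6)
# The odd lines of AN are the lines of XN scaled by sqrt(2/N),
# which is the symmetry exploited by the fast implementation.
for k in range(3):
    for j in range(3):
        assert abs(A6[2 * k + 1][j] - math.sqrt(2 / 6) * X6[k][j]) < 1e-12
```

Since AN so defined is orthonormal, the inverse transform SN is simply its transpose.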

According to another embodiment, butterfly operations converting from a spatial domain to a transform domain a sample vector of size 3 are represented by:

E1 = x1 + x3,

E2 = x1 - x3,

t1 = (E1 + x2) × A3(1,1),

t2 = E2 × A3(2,1),

t3 = E1 × A3(3,1) - x2, where (x1, x2, x3) represents said sample vector of size 3 from said spatial domain, (t1, t2, t3) represents a resulting sample vector of size 3 from said transform domain, E1 and E2 represent intermediate values for butterfly design used for computing samples from said transform domain, and A3(k,j) represents corresponding values of said transform matrix.

Such an implementation allows reducing the number of multiplications needed for performing the transform of the residual block. Thus, complexity is reduced.
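As a concrete check, the sketch below implements this butterfly in Python. The equalities hold exactly when the rows of the size-3 matrix are scaled so that the third row is (A3(3,1), -1, A3(3,1)); the specific values A3(1,1) = 1/√3, A3(2,1) = 1/√2 and A3(3,1) = 1/2 used here are therefore an assumed scaling convention (with any residual per-row scale absorbed elsewhere, e.g. in quantization), not values stated in the text.

```python
import math

# Assumed row-scaled size-3 transform: row 1 = A3_11*[1, 1, 1],
# row 2 = A3_21*[1, 0, -1], row 3 = [A3_31, -1, A3_31].
A3_11 = 1 / math.sqrt(3)
A3_21 = 1 / math.sqrt(2)
A3_31 = 0.5

def forward3_butterfly(x1, x2, x3):
    """Claimed butterfly: 3 multiplications instead of the 9 of a full 3x3 product."""
    E1 = x1 + x3
    E2 = x1 - x3
    t1 = (E1 + x2) * A3_11
    t2 = E2 * A3_21
    t3 = E1 * A3_31 - x2
    return t1, t2, t3

def forward3_direct(x1, x2, x3):
    """Reference: direct matrix-vector product with the same row-scaled matrix."""
    rows = [[A3_11, A3_11, A3_11],
            [A3_21, 0.0, -A3_21],
            [A3_31, -1.0, A3_31]]
    return tuple(r[0] * x1 + r[1] * x2 + r[2] * x3 for r in rows)
```

Both functions agree on any input vector, which is the multiplication saving the butterfly provides.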

According to another embodiment, butterfly operations converting from a spatial domain to a transform domain a sample vector of size 6 comprise at least the following operations:

E1 = X6(1,1) × v1 + X6(3,1) × v3,

E2 = X6(2,1) × v2,

E3 = X6(3,1) × v1 + X6(1,1) × v3,

u1 = E1 + E2,

u2 = E1 - E2 - E3,

u3 = -E2 + E3,

where (v1, v2, v3) is obtained from said sample vector of size 6 from said spatial domain, E1, E2 and E3 represent intermediate values for butterfly design further used for computing transformed samples from said transformed residual block, X6(k,j) represents corresponding values of the complementary matrix transform and (u1, u2, u3) is the resulting vector of samples in the transform domain.
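This size-6 complementary butterfly can be checked numerically against a direct product with X6; its correctness rests on the identity cos(π/12) - cos(5π/12) = cos(π/4). The sketch below assumes the plain-cosine definition X6(k,j) = cos((2j-1)(2k-1)π/12) with 1-based indices; the function names are ours.

```python
import math

def X6(k, j):
    """X6(k, j) = cos((2j-1)(2k-1) * pi / 12), 1-based indices."""
    return math.cos((2 * j - 1) * (2 * k - 1) * math.pi / 12)

def complementary6_butterfly(v1, v2, v3):
    """Claimed butterfly: 5 multiplications instead of the 9 of a full 3x3 product."""
    E1 = X6(1, 1) * v1 + X6(3, 1) * v3
    E2 = X6(2, 1) * v2
    E3 = X6(3, 1) * v1 + X6(1, 1) * v3
    u1 = E1 + E2
    u2 = E1 - E2 - E3
    u3 = -E2 + E3
    return u1, u2, u3

def complementary6_direct(v1, v2, v3):
    """Reference: u_k = sum over j of X6(k, j) * v_j."""
    return tuple(X6(k, 1) * v1 + X6(k, 2) * v2 + X6(k, 3) * v3 for k in (1, 2, 3))
```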

According to another embodiment, for N > 3, a butterfly implementation of said matrix transform AN is based on a matrix Pl(AN) corresponding to a matrix wherein the N/2 first lines of Pl(AN) correspond to odd lines of AN and the N/2 last lines of Pl(AN) correspond to even lines of AN.

According to this embodiment, the butterfly implementation for transforming data of blocks having a size N in at least one dimension which is a multiple of 3 takes advantage of the symmetry which is present in the transform matrix AN. Therefore, a butterfly implementation of the transform matrix for a size N/2 can be re-used for the size N.

According to another embodiment, said matrix Pl(AN) is represented by:

Pl(AN) = ( AN/2   ÃN/2 )
         ( XN    -X̃N  )

where ÃN/2 represents a vertically flipped version of the matrix AN/2, and -X̃N represents the opposite of the vertically flipped version of said complementary matrix transform XN.

According to this embodiment, it is thus possible to re-use the butterfly implementation designed for the matrix AN/2.

According to another embodiment, the transform process through matrix Pl(AN) is performed as two sub-transforms AN/2 and XN, respectively applied on sub-vectors derived from the input spatial samples: (xi + xN+1-i)i=1,..,N/2 and (xi - xN+1-i)i=1,..,N/2.

According to a further embodiment, the two sub-transforms are performed through butterfly operations of matrix AN/2 applied on the sub-vector (xi + xN+1-i)i=1,..,N/2, leading to a transformed sub-vector (bi)i=1,..,N/2 on one side, and butterfly operations of matrix XN applied on the sub-vector (xi - xN+1-i)i=1,..,N/2, leading to a transformed sub-vector (ui)i=1,..,N/2 on the other side.

According to a further embodiment, a final transform vector (ti)i=1,..,N is obtained as an interleaving of said transformed sub-vectors: (ti)i=1,..,N = (b1, u1, b2, u2, ..., bN/2, uN/2).
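The whole even/odd decomposition can be demonstrated by slicing the full matrix AN itself (rather than rebuilding AN/2 and XN, which keeps the sketch agnostic about normalisation): even lines of AN are symmetric and odd lines antisymmetric about the column midpoint, so an N-point transform reduces to two N/2-point products on the folded sub-vectors, followed by the interleaving above. A minimal sketch, with function names of our choosing:

```python
import math

def dct_matrix(N):
    """AN(k, j) = sqrt(2/N) * c(k) * cos((2j+1)*k*pi / 2N), c(0) = 1/sqrt(2)."""
    c = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    return [[math.sqrt(2 / N) * c(k) * math.cos((2 * j + 1) * k * math.pi / (2 * N))
             for j in range(N)] for k in range(N)]

def forward_folded(x):
    """N-point forward transform via two half-size products and interleaving."""
    N = len(x)
    A = dct_matrix(N)
    s = [x[i] + x[N - 1 - i] for i in range(N // 2)]  # sums feed the even lines
    d = [x[i] - x[N - 1 - i] for i in range(N // 2)]  # differences feed the odd lines
    b = [sum(A[2 * k][j] * s[j] for j in range(N // 2)) for k in range(N // 2)]
    u = [sum(A[2 * k + 1][j] * d[j] for j in range(N // 2)) for k in range(N // 2)]
    t = []
    for bk, uk in zip(b, u):  # interleave: (b1, u1, b2, u2, ...)
        t.extend([bk, uk])
    return t

def forward_direct(x):
    """Reference: full N x N matrix-vector product."""
    N = len(x)
    A = dct_matrix(N)
    return [sum(A[k][j] * x[j] for j in range(N)) for k in range(N)]
```

The folded version needs roughly half the multiplications of the direct product, for any even N such as 6 or 12.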

According to another aspect of the disclosure, a method for decoding a video is disclosed. Such a method comprises, for at least one block having a size N which is not a power of 2 along at least one dimension:

- decoding a transformed residual block,

- performing inverse block transform of said transformed residual block, said residual block having a size N,

- determining a predicted block for said at least one block,

- reconstructing said at least one block from said inverse transformed residual block and said predicted block.

Thus, the present principles allow performing the inverse transformation of a transformed residual block on a support of the same size as the support used for prediction. Asymmetric partitioning of blocks can thus be coupled with fast inverse transformation of the data of such blocks, yielding better compression efficiency and reduced computational complexity.

According to an embodiment, N is a multiple of 3.

According to another embodiment, performing inverse block transform of said transformed residual block comprises at least performing butterfly operations converting from a transform domain to a spatial domain a sample vector of size 3, wherein said butterfly operations implement a transform matrix of size 3x3, said sample vector comprising:

- samples of said transformed residual block along said at least one dimension in the case where N equals 3, and

- linear combinations of samples of said transformed residual block taken along said at least one dimension in the case where N is higher than 3.

According to another embodiment, said inverse block transform is based on a transform matrix SN represented by:

SN = ( √(2/N) × c(k) × cos( ((2j+1) × k × π) / 2N ) )j,k∈[0,N-1], with k an integer, k ≥ 0, and c(k) = 1/√2 if k = 0, 1 if k > 0.

According to another embodiment, butterfly operations converting from a transform domain to a spatial domain a sample vector of size 3 are represented by:

E1 = t1 × S3(1,1),

E2 = t2 × S3(1,2),

E3 = t3 × S3(1,3),

x1 = E1 + E2 + E3,

x2 = E1 - t3,

x3 = E1 - E2 + E3, where (x1, x2, x3) represents a resulting sample vector of size 3 from said spatial domain, (t1, t2, t3) represents said sample vector of size 3 from said transform domain, E1, E2 and E3 represent intermediate values for butterfly design used for computing samples from said spatial domain, and S3(j,k) represents corresponding values of the transform matrix.

Such an implementation allows reducing the number of multiplications needed for performing the inverse transform of the residual block. Thus, complexity is reduced.
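Mirroring the forward case, the inverse butterfly can be checked against a direct product, again under an assumed scaling in which the second row of S3 is (S3(1,1), 0, -1); the values S3(1,1) = 1/√3, S3(1,2) = 1/√2 and S3(1,3) = 1/2 below make that assumption explicit and are not stated in the text.

```python
import math

# Assumed scaled size-3 inverse matrix:
# S3 = [[S3_11,  S3_12,  S3_13],
#       [S3_11,  0.0,   -1.0 ],
#       [S3_11, -S3_12,  S3_13]]
S3_11 = 1 / math.sqrt(3)
S3_12 = 1 / math.sqrt(2)
S3_13 = 0.5

def inverse3_butterfly(t1, t2, t3):
    """Claimed inverse butterfly: 3 multiplications instead of 9."""
    E1 = t1 * S3_11
    E2 = t2 * S3_12
    E3 = t3 * S3_13
    x1 = E1 + E2 + E3
    x2 = E1 - t3
    x3 = E1 - E2 + E3
    return x1, x2, x3

def inverse3_direct(t1, t2, t3):
    """Reference: direct product with the same scaled matrix."""
    rows = [[S3_11, S3_12, S3_13],
            [S3_11, 0.0, -1.0],
            [S3_11, -S3_12, S3_13]]
    return tuple(r[0] * t1 + r[1] * t2 + r[2] * t3 for r in rows)
```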

According to another embodiment, said method further comprises, for N > 3:

- performing butterfly operations converting from a transform domain to a spatial domain a sample vector of size N/2, wherein said butterfly operations implement a complementary matrix transform XN represented by:

XN = ( cos( ((2j+1) × (2k+1) × π) / 2N ) )k,j∈[0,N/2[.

This embodiment allows providing a fast implementation of the inverse transform when N is higher than 3. Therefore, computational resources are saved.

According to another embodiment, butterfly operations converting from a transform domain to a spatial domain a sample vector of size 6 are represented by:

E1 = X6(1,1) × u1 + X6(3,1) × u3,

E2 = X6(2,1) × u2,

E3 = X6(3,1) × u1 + X6(1,1) × u3,

v1 = E1 + E2,

v2 = E1 - E2 - E3,

v3 = -E2 + E3,

where (u1, u2, u3) represents said sample vector of size 3 from said transform domain, E1, E2 and E3 represent intermediate values for butterfly design further used for computing transformed samples from said transformed residual block, X6(k,j) represents corresponding values of the complementary matrix transform and (v1, v2, v3) is the resulting vector of samples in the spatial domain.

According to another embodiment, said butterfly operations implementing said matrix transform XN use linear combinations of columns from said matrix transform XN. This embodiment takes advantage of the properties of the matrix transform XN.

According to another embodiment, for N > 3, a butterfly implementation of said matrix transform SN is based on a matrix Pc(SN) corresponding to a matrix wherein the N/2 first columns of Pc(SN) correspond to odd columns of SN and the N/2 last columns of Pc(SN) correspond to even columns of SN.

According to this embodiment, the butterfly implementation for transforming data of blocks having a size N in at least one dimension which is a multiple of 3 takes advantage of the symmetry which is present in the transform matrix SN. Therefore, a butterfly implementation of the transform matrix for a size N/2 can be re-used for the size N.

According to another embodiment, said matrix Pc(SN) is represented by:

Pc(SN) = ( SN/2   XN  )
         ( S̃N/2  -X̃N )

where S̃N/2 represents a horizontally flipped version of the matrix SN/2, and -X̃N represents the opposite of the horizontally flipped version of said complementary matrix transform XN. According to this embodiment, it is thus possible to re-use the butterfly implementations designed for the matrices SN/2 and XN.

According to another embodiment, the transform process through matrix Pc(SN) is performed as two sub-transforms SN/2 and XN, respectively applied on sub-vectors derived from the input transform samples: (ti)i=1,..,N/2 and (ti)i=N/2+1,..,N.

According to a further embodiment, the two sub-transforms are performed through butterfly operations of matrix SN/2 applied on the sub-vector (ti)i=1,..,N/2, leading to a sub-vector (a'i)i=1,..,N/2 on one side, and butterfly operations of matrix XN applied on the sub-vector (ti)i=N/2+1,..,N, leading to a sub-vector (x'i)i=1,..,N/2 on the other side.

According to a further embodiment, a final inverse transformed vector (xi)i=1,..,N is obtained by recombining said sub-vectors: (xi)i=1,..,N = ((a'1 + x'1), (a'2 + x'2), ..., (a'N/2 + x'N/2), (a'1 - x'1), (a'2 - x'2), ..., (a'N/2 - x'N/2)).
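The even/odd column split of the inverse transform can likewise be demonstrated by slicing the transpose of AN directly, which avoids committing to a normalisation for SN/2 and XN. One detail worth noting: the difference half recovers the spatial samples in mirrored order (x(N-j) = a'j - x'j), so the sketch below reverses it before concatenating; that reading of the recombination is our assumption where the source formula is garbled.

```python
import math

def dct_matrix(N):
    """AN(k, j) = sqrt(2/N) * c(k) * cos((2j+1)*k*pi / 2N), c(0) = 1/sqrt(2)."""
    c = lambda k: 1 / math.sqrt(2) if k == 0 else 1.0
    return [[math.sqrt(2 / N) * c(k) * math.cos((2 * j + 1) * k * math.pi / (2 * N))
             for j in range(N)] for k in range(N)]

def inverse_split(t):
    """N-point inverse transform via the even/odd column split of SN = transpose(AN)."""
    N = len(t)
    A = dct_matrix(N)  # S[j][k] = A[k][j]
    # a' = top half of the even columns of SN applied to the even coefficients,
    # x' = top half of the odd columns of SN applied to the odd coefficients.
    a = [sum(A[2 * k][j] * t[2 * k] for k in range(N // 2)) for j in range(N // 2)]
    xp = [sum(A[2 * k + 1][j] * t[2 * k + 1] for k in range(N // 2)) for j in range(N // 2)]
    top = [a[j] + xp[j] for j in range(N // 2)]
    bottom = [a[j] - xp[j] for j in range(N // 2)][::-1]  # mirrored spatial order
    return top + bottom

def inverse_direct(t):
    """Reference: full product x_j = sum over k of A[k][j] * t[k]."""
    N = len(t)
    A = dct_matrix(N)
    return [sum(A[k][j] * t[k] for k in range(N)) for j in range(N)]
```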

According to another aspect of the disclosure, an apparatus for encoding a video is disclosed. Such an apparatus comprises, for at least one block having a size N which is not a power of 2 along at least one dimension:

- means for determining a predicted block for said at least one block,

- means for obtaining a residual block from said at least one block and said predicted block,

- means for performing block transform of said residual block, said residual block having a size N,

- means for encoding said transformed residual block.

According to another aspect of the disclosure, an apparatus for decoding a video is also disclosed. Such an apparatus comprises, for at least one block having a size N which is not a power of 2 along at least one dimension:

- means for decoding a transformed residual block,

- means for performing inverse block transform of said transformed residual block, said residual block having a size N,

- means for determining a predicted block for said at least one block,

- means for reconstructing said at least one block from said inverse transformed residual block and said predicted block.

The present disclosure also provides a computer readable storage medium having stored thereon instructions for encoding a video according to any one of the embodiments described in the disclosure.

The present disclosure also provides a computer readable storage medium having stored thereon instructions for decoding a video according to any one of the embodiments described in the disclosure.

According to one implementation, the different steps of the method for coding a video or decoding a video as described here above are implemented by one or more software programs or software module programs comprising software instructions intended for execution by a data processor of an apparatus for encoding/decoding a video, these software instructions being designed to command the execution of the different steps of the methods according to the present principles.

A computer program is also disclosed that is capable of being executed by a computer or by a data processor, this program comprising instructions to command the execution of the steps of a method for encoding a video or of the steps of a method for decoding a video as mentioned here above.

This program can use any programming language whatsoever and be in the form of source code, object code or intermediate code between source code and object code, such as in a partially compiled form or any other desirable form whatsoever.

The information carrier can be any entity or apparatus whatsoever capable of storing the program. For example, the carrier can comprise a storage means such as a ROM, for example a CD ROM or a microelectronic circuit ROM or again a magnetic recording means, for example a floppy disk or a hard disk drive.

Again, the information carrier can be a transmissible carrier such as an electrical or optical signal which can be conveyed via an electrical or optical cable, by radio or by other means. The program according to the present principles can be especially uploaded to an Internet type network. As an alternative, the information carrier can be an integrated circuit into which the program is incorporated, the circuit being adapted to executing or to being used in the execution of the methods in question.

According to one embodiment, the methods/apparatus may be implemented by means of software and/or hardware components. In this respect, the term "module" or "unit" can correspond in this document equally well to a software component and to a hardware component or to a set of hardware and software components.

A software component corresponds to one or more computer programs, one or more sub-programs of a program or more generally to any element of a program or a piece of software capable of implementing a function or a set of functions as described here below for the module concerned. Such a software component is executed by a data processor of a physical entity (terminal, server, etc) and is capable of accessing hardware resources of this physical entity (memories, recording media, communications buses, input/output electronic boards, user interfaces, etc).

In the same way, a hardware component corresponds to any element of a hardware unit capable of implementing a function or a set of functions as described here below for the module concerned. It can be a programmable hardware component or a component with an integrated processor for the execution of software, for example an integrated circuit, a smartcard, a memory card, an electronic board for the execution of firmware, etc.

4. Brief description of the drawings

Figure 1A illustrates an exemplary CTU partitioning into coding units and transform units according to the HEVC video compression standard,

Figure 1B illustrates an exemplary CTU partitioning according to the quad-tree and binary tree based method (QTBT),

Figure 1C illustrates an exemplary tree representation of a CTU partitioning according to the quad-tree and binary tree based method (QTBT),

Figure 2 illustrates a block diagram of an exemplary encoder according to an embodiment of the present principles,

Figure 3 illustrates an example of partitioning of a CU into sub-CUs according to the present principles,

Figure 4 illustrates a diagram of a butterfly implementation of a one-dimensional transform with size equal to 3, according to an embodiment of the present principles,

Figure 5 illustrates another diagram of a butterfly implementation of a one-dimensional transform with size equal to 3, according to an embodiment of the present principles,

Figure 6 illustrates a diagram of a butterfly implementation of a one-dimensional inverse transform with size equal to 3, according to an embodiment of the present principles,

Figure 7 illustrates a diagram of a butterfly implementation of a one-dimensional complementary transform with size equal to 3 used for performing a one-dimensional fast transform with size equal to 6, according to an embodiment of the present principles,

Figure 8 illustrates a diagram of a butterfly implementation of a one-dimensional transform with size equal to 6, according to an embodiment of the present principles,

Figure 9 illustrates the relationships between lines and columns of X12 exploited in the fast implementation of X12,

Figure 10 illustrates a diagram of a one-dimensional transform implementation with size equal to 12 according to an embodiment of the present principles,

Figure 11 illustrates a diagram of a butterfly implementation of a one-dimensional complementary transform with size equal to 6 used for performing a one-dimensional fast transform with size equal to 12, according to an embodiment of the present principles,

Figure 12 illustrates a diagram of a one-dimensional transform implementation with size equal to 24 according to an embodiment of the present principles,

Figure 13 illustrates a diagram of a one-dimensional transform implementation with size equal to N according to an embodiment of the present principles,

Figure 14 illustrates a block diagram of an exemplary decoder according to an embodiment of the present principles,

Figure 15 illustrates a flow diagram of an exemplary method for encoding a video according to an embodiment of the present principles,

Figure 16 illustrates a flow diagram of an exemplary method for decoding a video according to an embodiment of the present principles,

Figure 17 illustrates an exemplary encoder that may be used in one embodiment of the present principles,

Figure 18 illustrates an exemplary decoder that may be used in one embodiment of the present principles.

5. Description of embodiments

It is to be understood that the figures and descriptions have been simplified to illustrate elements that are relevant for a clear understanding of the present principles, while eliminating, for purposes of clarity, many other elements found in typical encoding and/or decoding devices.

It will be understood that, although the terms first and second may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.

Various methods are described above, and each of the methods comprises one or more steps or actions for achieving the described method. Unless a specific order of steps or actions is required for proper operation of the method, the order and/or use of specific steps and/or actions may be modified or combined.

A picture is an array of luma samples in monochrome format, or an array of luma samples and two corresponding arrays of chroma samples in 4:2:0, 4:2:2, or 4:4:4 colour format. Generally, a "block" addresses a specific area in a sample array (e.g., luma Y), and a "unit" includes the collocated block of all encoded color components (Y, Cb, Cr, or monochrome). However, the term "block" is used herein to refer to a block (e.g. a CB, a CTB) or a unit (e.g. a CU, a CTU).

In the following sections, the words "reconstructed" and "decoded" can be used interchangeably. Usually but not necessarily, "reconstructed" is used on the encoder side while "decoded" is used on the decoder side.

Figure 2 illustrates a block diagram of an exemplary encoder according to an embodiment of the present principles. The video encoder 20 disclosed here below may conform to any video or still picture encoding scheme. The encoding and decoding processes described below are for illustration purposes. According to some embodiments, encoding or decoding modules may be added, removed, or varied from the following modules. However, the principle disclosed herein could still be applied to these variants.

Classically, the video encoder 20 may include several modules for block-based video encoding, as illustrated in figure 2. A picture I to be encoded is input to the encoder 20. The picture I is first subdivided into a set of blocks by a subdividing module. Each block BLK of the picture I is then processed for encoding. A block BLK may have a size ranging from 4x4 to 128x128 pixels. Usually but not necessarily, the size of a block BLK is a power of 2.

The encoder 20 performs encoding of each block BLK of the picture I as follows. The encoder 20 comprises a mode selection unit for selecting a coding mode for a block BLK of the picture to be coded, e.g. based on a rate/distortion optimization. Such a mode selection unit comprises:

- a motion estimation module for estimating motion between one current block of the picture to be coded and reference pictures,

- a motion compensation module for predicting the current block using the estimated motion,

- an intra prediction module for spatially predicting the current block.

The mode selection unit may also decide whether splitting of the block is needed, according to rate/distortion optimization for instance. In that case, the mode selection unit then operates on each subblock of the block BLK. Each subblock of the block BLK may also be further split into subblocks. Once a coding mode is selected for the current block BLK, or coding modes are selected for subblocks of the current block BLK, the mode selection unit delivers a predicted block PRED and corresponding syntax elements to be coded in the bitstream for performing the same block prediction at a decoder. When the current block BLK has been split, the predicted block PRED is formed by the set of predicted subblocks delivered by the mode selection unit for each subblock.

A residual block RES is then obtained by subtracting the predicted block PRED from the original block BLK.

The residual block RES is then transformed by a transform processing module delivering a transform block TCOEF of transformed coefficients. The transform block TCOEF is then quantized by a quantization module delivering a quantized transform block QCOEF of quantized residual transform coefficients.

The syntax elements and quantized residual transform coefficients of the block QCOEF are then input to an entropy coding module to deliver coded data to form the coded bitstream STR.

The quantized residual transform coefficients of the quantized transform block QCOEF are processed by an inverse quantization module delivering a block TCOEF' of unquantized transform coefficients. The block TCOEF' is passed to an inverse transform module for reconstructing a block of residual prediction RES'.

A reconstructed version REC of the block BLK is then obtained by adding the prediction block PRED to the reconstructed residual prediction block RES'.

The reconstructed block REC is stored in memory for use by a picture reconstruction module. The picture reconstruction module performs reconstruction of a decoded version I' of the picture I from the reconstructed blocks REC. The reconstructed picture I' is then stored in a reference picture memory for later use as a reference picture for encoding the following pictures of the set of pictures to code, or for encoding subsequent blocks of the picture I.

According to an embodiment of the present principles, when determining a coding mode for coding a block BLK, the block BLK or subblocks of the block BLK may be asymmetrically split as illustrated by figure 3. Such splittings result in blocks having rectangular shapes, with sizes equal to 3·2^N in width and/or height. Furthermore, a block or subblock having a size multiple of 3 in width or height can be further split in a binary fashion, i.e. horizontally or vertically. As a consequence, a square block of size (w, h), where w is the width of the block and h is its height, that is split through one of the asymmetric binary splitting modes would lead for example to 2 subblocks with respective rectangular sizes (w, h/4) and (w, 3h/4). According to this embodiment, blocks or subblocks having a width and/or height equal to 3·2^N may then be determined by the coding mode selection unit and used at the encoder. In such a case, the intra prediction or inter prediction process is performed on such rectangular blocks or subblocks having a size multiple of 3.

According to the present principles, the transform processing module is configured to operate on such rectangular shapes by applying a 2D transform with size 3·2^n in width or height. Such a process does not exist in known video coding standards, because only square transforms are allowed. According to the present principles, the transform processing module is thus configured to operate on a block having the same shape and size as the shape and size of the prediction block used for predicting the block. Therefore, no further partitioning into transform units is needed.

According to the present principles, a fast implementation of the 2D transform to apply on blocks having a size multiple of 3 in at least one dimension (width, height) is disclosed below. A fast implementation of the corresponding 2D inverse transform is also disclosed below. The inverse transform module disclosed above is configured to apply such a fast 2D inverse transform to the blocks having a size multiple of 3 in at least one dimension.

The 2D transform applied on a block in standard video codecs is a 2D DCT-like transform. The 2D DCT applied on a block in standard video codecs involves the separable application of two 1D transforms onto the considered 2D block, in the horizontal and vertical directions.

If we define, for a given size N, the matrix A_N as follows:

A_N = ( cos( (2j+1)·k·π / (2N) ) )_{k,j ∈ [0, N−1]}

then the 2D separable DCT of an input square block X with size N x N can be written as follows:

DCT(X) = A_N · X · A_N^t, with A_N^t being the transposed version of the matrix A_N.

Thus it consists in applying the one-dimensional DCT transform successively on each line and each column of the input two-dimensional block.

The one-dimensional DCT transform of a one-dimensional vector X_N ∈ ℝ^N is given by:

DCT_1D(X_N) = A_N · X_N

The straightforward implementation of this 1D DCT transform under the form of the multiplication of a matrix by a vector involves N multiplications and N−1 additions per output coefficient, which is a significant amount of operations when the input vector is of large size, such as for example 32, 64, 128 or 256. To limit the complexity of the integer DCT transform implementation, it is advantageous to design a fast implementation of such a transform. A fast implementation of the 1D DCT transform is disclosed for block sizes equal to 3·2^n, n ≥ 0, i.e. for blocks having a size multiple of 3 in at least one dimension (width, height).
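As an illustrative sketch (not part of the original disclosure), the straightforward matrix-based 1D DCT can be written as below. The function name and the use of the unnormalized cosine matrix A_N(k,j) = cos((2j+1)·k·π/(2N)) are assumptions; normalization factors are omitted.

```python
import math

def dct_1d_matrix(x):
    """Straightforward 1D DCT-II: N multiplications and N-1 additions
    per output coefficient (unnormalized cosine matrix, assumed scaling)."""
    n = len(x)
    return [sum(math.cos((2 * j + 1) * k * math.pi / (2 * n)) * x[j]
                for j in range(n))
            for k in range(n)]

# Example on a size-6 vector: the DC coefficient of the unnormalized
# transform is simply the sum of the input samples.
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
t = dct_1d_matrix(x)
assert abs(t[0] - sum(x)) < 1e-12
```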

The DCT matrix for a vector size equal to 3 is given by:

A_3 = ( A_3(k,j) )_{k,j ∈ [1,3]}, with A_3(k,j) = cos( (2j−1)·(k−1)·π / 6 )

So, A_3 can be re-written as:

A_3 = [ 1     1    1
        c_6   0   −c_6
        c_3  −1    c_3 ]

where c_k represents the value of cos(π/k).

Butterfly implementations for matrix transform and inverse matrix transform with size 3

Therefore, a butterfly implementation of the one-dimensional DCT with size 3 is shown on figure 4.

On figure 4, the graph nodes on the left correspond to input samples, and the nodes on the right are the resulting transform DCT coefficients. The values associated with each edge represent multiplicative factors, which are called edge factors. Moreover, edges that arrive at a same right-side node are summed together. Where the same multiplicative factor is applied on several edges that go to the same right-side node, the addition is done before the multiplication by the edge factor.

An equivalent, slightly less compact view of the same butterfly implementation of figure 4 is illustrated on figure 5.

Therefore, a fast implementation of the 1D DCT with size 3 is as follows:

• E(1) = x_1 + x_3

• E(2) = x_1 − x_3

• t_1 = (E(1) + x_2) × A_3(1,1)

• t_2 = E(2) × A_3(2,1)

• t_3 = E(1) × A_3(3,1) − x_2

where E(1) and E(2) represent intermediate nodes on the butterfly diagram shown on figure 5. Such a butterfly implementation involves 4 additions and 3 multiplications, while a classical matrix-based DCT implementation involves 6 additions and 9 multiplications.
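As a sketch (not part of the original disclosure), the five butterfly steps above can be checked numerically against a direct matrix product. The function names are hypothetical, and the unnormalized cosine matrix A_3(k,j) = cos((2j+1)·k·π/6) is a scaling assumption.

```python
import math

# Unnormalized size-3 DCT matrix: A3[k][j] = cos((2j+1)*k*pi/6)  (assumed scaling)
A3 = [[math.cos((2 * j + 1) * k * math.pi / 6) for j in range(3)] for k in range(3)]

def dct3_butterfly(x):
    """Fast 1D DCT of size 3: 4 additions and 3 multiplications."""
    e1 = x[0] + x[2]                 # E(1)
    e2 = x[0] - x[2]                 # E(2)
    t1 = (e1 + x[1]) * A3[0][0]      # t1 = (E(1) + x2) * A3(1,1)
    t2 = e2 * A3[1][0]               # t2 = E(2) * A3(2,1)
    t3 = e1 * A3[2][0] - x[1]        # t3 = E(1) * A3(3,1) - x2
    return [t1, t2, t3]

def dct3_matrix(x):
    """Reference: straightforward matrix-vector product (9 multiplications)."""
    return [sum(A3[k][j] * x[j] for j in range(3)) for k in range(3)]

x = [1.0, 2.0, 3.0]
assert all(abs(a - b) < 1e-12 for a, b in zip(dct3_butterfly(x), dct3_matrix(x)))
```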

Below, a butterfly diagram for computing the inverse 1D DCT with size equal to 3 is shown. The DCT matrix is orthogonal, which implies that the inverse transform matrix S_3 can be computed from the transform matrix A_3 as follows:

S_3 = A_3^{-1} = A_3^t

A butterfly implementation for S_3 is shown on figure 6. On figure 6, terms of the matrix A_3 are referred to, while A_3(1,1) = S_3(1,1), A_3(2,1) = S_3(1,2) and A_3(3,1) = S_3(1,3).

Therefore, a fast implementation of the 1D inverse DCT with size 3 is as follows:

• E(1) = t_1 × S_3(1,1)

• E(2) = t_2 × S_3(1,2)

• E(3) = t_3 × S_3(1,3)

• x_1 = E(1) + E(2) + E(3)

• x_2 = E(1) − 2 × E(3) = E(1) − t_3

• x_3 = E(1) − E(2) + E(3)

Such a butterfly implementation involves 5 additions and 3 multiplications, instead of 6 additions and 9 multiplications for the classical matrix-based implementation.
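The inverse butterfly structure above can be sketched as follows (not part of the original disclosure). Here the first line of S_3 is written in closed form as the inverse of the unnormalized cosine matrix A_3, which is a scaling assumption; the function name is hypothetical.

```python
import math

# Unnormalized size-3 DCT matrix (assumed scaling; normalization omitted)
c6 = math.cos(math.pi / 6)
A3 = [[1.0, 1.0, 1.0],
      [c6, 0.0, -c6],
      [0.5, -1.0, 0.5]]

# First line of S3 = A3^-1, in closed form for the unnormalized matrix above
S3_line1 = [1.0 / 3.0, 1.0 / (2.0 * c6), 1.0 / 3.0]

def idct3_butterfly(t):
    """Fast 1D inverse DCT of size 3: 5 additions and 3 multiplications."""
    e1 = t[0] * S3_line1[0]
    e2 = t[1] * S3_line1[1]
    e3 = t[2] * S3_line1[2]
    x1 = e1 + e2 + e3
    x2 = e1 - 2.0 * e3
    x3 = e1 - e2 + e3
    return [x1, x2, x3]

# Round trip: forward transform by direct matrix product, then fast inverse
x = [4.0, -1.0, 2.5]
t = [sum(A3[k][j] * x[j] for j in range(3)) for k in range(3)]
assert all(abs(a - b) < 1e-12 for a, b in zip(idct3_butterfly(t), x))
```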

Butterfly implementations for matrix transform and inverse matrix transform with size 6

Below, butterfly implementations equivalent to matrix transform and inverse matrix transform with a size equal to 6 are disclosed.

The matrix transform corresponding to the 1D DCT with size 6 is based on the matrix A_N with N = 6:

A_6 = ( cos( (2j+1)·k·π / 12 ) )_{k,j ∈ [0,5]}

A_6 can also be written explicitly in terms of cosine values as follows:

It appears from A_6 that the odd lines of A_6 comprise the coefficients of A_3 discussed above. Therefore, a matrix P_l(A_6) can be written as follows by permuting the lines of A_6:

In the matrix P_l(A_6), the first 3 lines correspond to the odd lines of A_6 and the last 3 lines correspond to the even lines of A_6.

P_l(A_6) can thus be re-written using A_3 and a complementary matrix transform X_6, as:

P_l(A_6) = [ A_3    Ā_3
             X_6   −X̄_6 ]

where Ā_3 represents a vertically flipped version of the matrix A_3, −X̄_6 represents the opposite of the vertically flipped version of X_6, and X_6 is defined as follows:

X_6 = [ A_6(2,1)  A_6(2,2)  A_6(2,3)
        A_6(4,1)  A_6(4,2)  A_6(4,3)
        A_6(6,1)  A_6(6,2)  A_6(6,3) ]

Therefore, the 1D DCT of size 6 applied to a 1D sample vector X of size 6, with X = [Y_1 Y_2]^t, Y_1 = [x_1 x_2 x_3]^t and Y_2 = [x_4 x_5 x_6]^t, can be expressed as follows:

P_l(A_6) · X = [ A_3 · (Y_1 + Ȳ_2)
                 X_6 · (Y_1 − Ȳ_2) ]

with Ȳ_2 corresponding to a horizontally flipped version of Y_2.

Thus, it appears that computing the 1D DCT of size 6 can be performed by performing the computation of the fast 1D DCT with size 3 disclosed above, applied to a linear combination of samples of the input vector X, and by performing a fast computation of the product of the matrix X_6 by a 3x1 vector comprising linear combinations of samples of the input vector X. The application of the 1D DCT with size 3 provides the odd lines of the final transform vector, i.e. t_1, t_3, t_5.

Below is disclosed a fast implementation of such a multiplication of the matrix X_6 by a 3x1 vector. A way of implementing the product of X_6 by a vector V = [v_1 v_2 v_3]^t is:

• u_1 = X_6(1,1)·v_1 + X_6(2,1)·v_2 + X_6(3,1)·v_3

• u_2 = (v_1 − v_2 − v_3) · X_6(2,1)

• u_3 = X_6(3,1)·v_1 − X_6(2,1)·v_2 + X_6(1,1)·v_3

where [u_1 u_2 u_3]^t is the destination vector.

Such an implementation leads to 7 multiplications and 6 additions.

It can be noted that:

cos(π/12) − cos(5π/12) = cos(π/4)

Therefore, the following relationship between values of the cosine function can be defined:

( cos(π/12)·a + cos(5π/12)·b ) − ( cos(5π/12)·a + cos(π/12)·b ) = cos(π/4)·a − cos(π/4)·b

Such a relationship can be advantageously exploited for computing the product X_6 · [v_1 v_2 v_3]^t, as disclosed by the butterfly diagram shown on the left part of figure 7. Such a butterfly diagram is designed to implement the following computational steps:

O(1) = cos(π/12)·v_1 + cos(5π/12)·v_3 = X_6(1,1)·v_1 + X_6(3,1)·v_3

O(2) = cos(π/4)·v_2 = X_6(2,1)·v_2

O(3) = cos(5π/12)·v_1 + cos(π/12)·v_3 = X_6(3,1)·v_1 + X_6(1,1)·v_3

The fast implementation of X_6 disclosed above can be advantageously used in the computation of the second part of the transform matrix A_6 (X_6·Y_1 − X̄_6·Ȳ_2), as follows:

• O(1) = X_6(1,1)·v_1 + X_6(3,1)·v_3

• O(2) = X_6(2,1)·v_2

• O(3) = X_6(3,1)·v_1 + X_6(1,1)·v_3

• u_1 = O(1) + O(2)

• u_2 = O(1) − O(2) − O(3)

• u_3 = −O(2) + O(3)

where [v_1 v_2 v_3]^t corresponds to linear combinations of the input vector X, with:

v_1 = x_1 − x_6

v_2 = x_2 − x_5

v_3 = x_3 − x_4

with the input vector X = [x_1 x_2 x_3 x_4 x_5 x_6]^t, and [u_1 u_2 u_3]^t is the destination vector. The butterfly steps disclosed above generate the even lines of the transformed vector:

[t_2 t_4 t_6]^t = [u_1 u_2 u_3]^t

This fast butterfly version of the transform matrix X_6 leads to 5 multiplications and 6 additions, instead of 9 multiplications and 6 additions for the straightforward matrix product.

The overall butterfly design for the one-dimensional DCT with size 6 is shown on figure 8.

On figure 8, it appears that the fast implementation of the transform A_6 can be performed using the fast implementation of the transform of size 3 (A_3), applied to linear combinations of the input signal (x_1, x_2, x_3, x_4, x_5, x_6)^t to obtain the odd lines (t_1, t_3, t_5) of the transformed vector, and the fast implementation of the complementary matrix X_6, applied to linear combinations of the input signal to obtain the even lines (t_2, t_4, t_6) of the transformed signal.
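The size-6 construction above can be sketched end to end as below (not part of the original disclosure). The function names are hypothetical, and the unnormalized cosine matrices are a scaling assumption; the fast path is checked against a direct product with A_6.

```python
import math

cos, PI = math.cos, math.pi

# First column of X6 (even lines of the unnormalized A6; assumed scaling)
X6_11 = cos(PI / 12)       # X6(1,1)
X6_21 = cos(PI / 4)        # X6(2,1)
X6_31 = cos(5 * PI / 12)   # X6(3,1)

def dct3_fast(x):
    """Fast size-3 DCT (butterfly of figure 4)."""
    e1, e2 = x[0] + x[2], x[0] - x[2]
    return [e1 + x[1], e2 * cos(PI / 6), 0.5 * e1 - x[1]]

def x6_fast(v):
    """Fast product X6 * v: 5 multiplications, 6 additions (figure 7)."""
    o1 = X6_11 * v[0] + X6_31 * v[2]
    o2 = X6_21 * v[1]
    o3 = X6_31 * v[0] + X6_11 * v[2]
    return [o1 + o2, o1 - o2 - o3, -o2 + o3]

def dct6_fast(x):
    """Fast size-6 DCT: odd lines via A3, even lines via X6 (figure 8)."""
    s = [x[i] + x[5 - i] for i in range(3)]   # Y1 + flipped(Y2)
    d = [x[i] - x[5 - i] for i in range(3)]   # Y1 - flipped(Y2)
    t_odd = dct3_fast(s)                      # t1, t3, t5
    t_even = x6_fast(d)                       # t2, t4, t6
    return [t_odd[0], t_even[0], t_odd[1], t_even[1], t_odd[2], t_even[2]]

def dct6_matrix(x):
    """Reference: direct product with the unnormalized A6 matrix."""
    return [sum(cos((2 * j + 1) * k * PI / 12) * x[j] for j in range(6))
            for k in range(6)]

x = [3.0, 1.0, -2.0, 0.5, 4.0, 2.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(dct6_fast(x), dct6_matrix(x)))
```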

A similar reasoning can be applied for computing the inverse transform S_6 for size 6. As A_6 is orthogonal, A_6^{-1} = A_6^t, so:

S_6 = A_6^{-1} = A_6^t

From S_6, it appears that the odd columns of S_6 comprise the coefficients of A_3. By permuting columns in the matrix S_6, we obtain P_c(S_6).

P_c(S_6) comprises the matrix S_3 in the top-left 3x3 sub-matrix and the matrix X_6 in the top-right 3x3 sub-matrix. Thus:

P_c(S_6) = [ S_3    X_6
             S̄_3  −X̄_6 ]

where S̄_3 represents the matrix S_3 in a horizontally flipped version and −X̄_6 is the opposite of the horizontally flipped version of X_6. Therefore:

P_c(S_6) · [Y_1 Y_2]^t = [ (S_3·Y_1 + X_6·Y_2)  (S̄_3·Y_1 − X̄_6·Y_2) ]^t

Thus, the fast implementation of the inverse DCT with size 6 can simply re-use the fast implementation of the inverse DCT with size 3 and the fast implementation of the product by the matrix X_6 disclosed above.

Once these sub-processes are performed, two resulting sub-vectors are obtained; the inverse transform for size 6 is obtained by gathering the sub-vectors as follows:

P_c(DCT^{-1}(X)) = [ (a'_1 + x'_1) (a'_2 + x'_2) (a'_3 + x'_3) (a'_1 − x'_1) (a'_2 − x'_2) (a'_3 − x'_3) ]^t

Therefore, a fast implementation of the 1D inverse DCT with size 6, applied on a vector (t_1, t_2, t_3, t_4, t_5, t_6)^t in the transform domain, is as follows:

• E(1) = t_1 × S_6(1,1)

• E(2) = t_2 × S_6(1,3)

• E(3) = t_3 × S_6(1,5)

• a'_1 = E(1) + E(2) + E(3)

• a'_2 = E(1) − 2 × E(3)

• a'_3 = E(1) − E(2) + E(3)

• O(1) = X_6(1,1)·t_4 + X_6(3,1)·t_6

• O(2) = X_6(2,1)·t_5

• O(3) = X_6(3,1)·t_4 + X_6(1,1)·t_6

• x'_1 = O(1) + O(2)

• x'_2 = O(1) − O(2) − O(3)

• x'_3 = −O(2) + O(3)
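The size-6 inverse can be sketched as below (not part of the original disclosure). Two properties of the unnormalized cosine matrices are used here as assumptions: the closed-form first line of S_3, and the identity X_6·X_6 = (3/2)·I, which makes the inverse of X_6 equal to (2/3)·X_6; the function names are hypothetical.

```python
import math

cos, PI = math.cos, math.pi
c6 = cos(PI / 6)
X6_11, X6_21, X6_31 = cos(PI / 12), cos(PI / 4), cos(5 * PI / 12)

def idct3_fast(t):
    """Fast size-3 inverse DCT (figure 6), inverting the unnormalized A3."""
    e1, e2, e3 = t[0] / 3.0, t[1] / (2.0 * c6), t[2] / 3.0
    return [e1 + e2 + e3, e1 - 2.0 * e3, e1 - e2 + e3]

def x6_fast(v):
    """Fast product X6 * v (figure 7)."""
    o1 = X6_11 * v[0] + X6_31 * v[2]
    o2 = X6_21 * v[1]
    o3 = X6_31 * v[0] + X6_11 * v[2]
    return [o1 + o2, o1 - o2 - o3, -o2 + o3]

def idct6_fast(t):
    """Fast size-6 inverse DCT: odd coefficients through the size-3 inverse,
    even coefficients through X6 again (X6^-1 = (2/3)*X6, assumed scaling)."""
    s = idct3_fast([t[0], t[2], t[4]])                        # Y1 + flipped(Y2)
    d = [2.0 / 3.0 * u for u in x6_fast([t[1], t[3], t[5]])]  # Y1 - flipped(Y2)
    return [(s[0] + d[0]) / 2, (s[1] + d[1]) / 2, (s[2] + d[2]) / 2,
            (s[2] - d[2]) / 2, (s[1] - d[1]) / 2, (s[0] - d[0]) / 2]

# Round trip against a direct forward product with the unnormalized A6
x = [2.0, -1.0, 0.5, 3.0, 1.5, -2.5]
A6 = [[cos((2 * j + 1) * k * PI / 12) for j in range(6)] for k in range(6)]
t = [sum(A6[k][j] * x[j] for j in range(6)) for k in range(6)]
assert all(abs(a - b) < 1e-9 for a, b in zip(idct6_fast(t), x))
```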

Butterfly implementations for matrix transform and inverse matrix transform with size 12

Below, butterfly implementations for matrix transform and inverse matrix transform with a size equal to 12 are disclosed.

The 1D DCT as applied on a 12x1 vector is obtained through the matrix:

A_12 = ( cos( (2j+1)·k·π / 24 ) )_{k,j ∈ [0,11]}

By grouping odd lines on one side and even lines on the other side, and by permuting the lines of the matrix A_12, one obtains:

where the complementary matrix X 12 is defined by:

In other words, X_12 is the matrix of cosine values applied on the values contained in the following matrix:

M = ( (2k+1)·(2j+1)·π / 24 )_{k,j ∈ [0,5]}

so that X_12 = cos(M), if the cosine of a matrix M = (m_ij)_{i,j ∈ [1,n]} is defined as the matrix cos(M) = (cos(m_ij))_{i,j ∈ [1,n]}.

To implement a fast version of the product X_12 × [v_1 v_2 ... v_6]^t, we exploit the following relationships of the cosine function:

cos(π/24) = cos(π/4)·( cos(5π/24) + cos(7π/24) )

cos(π/8) = cos(π/4)·( cos(π/8) + cos(3π/8) )

cos(3π/8) = cos(π/4)·( cos(π/8) − cos(3π/8) )

cos(11π/24) = cos(π/4)·( cos(5π/24) − cos(7π/24) )

cos(7π/24) = cos(π/4)·( cos(π/24) − cos(11π/24) )

cos(5π/24) = cos(π/4)·( cos(π/24) + cos(11π/24) )

Therefore, these properties are used to establish relationships between the lines and columns of X_12, as illustrated on figure 9. From the above equations, one can deduce how some lines of the matrix X_12 can be expressed as linear combinations of other lines of X_12. On figure 9, the signs on the top of the matrix indicate whether the linear combination is multiplied by −1 or not to obtain the destination value.

The relationships between the lines of X_12 illustrated on figure 9 are further disclosed below:

• ∀i ∈ {1,4,5}, X_12(1,i) = (1/√2)·( X_12(3,i) + X_12(4,i) )

• ∀i ∈ {2,3,6}, X_12(1,i) = −(1/√2)·( X_12(3,i) + X_12(4,i) )

• ∀i ∈ {1,4,5}, X_12(6,i) = (1/√2)·( X_12(3,i) − X_12(4,i) )

• ∀i ∈ {2,3,6}, X_12(6,i) = −(1/√2)·( X_12(3,i) − X_12(4,i) )

• ∀i ∈ {1,4,5}, X_12(2,i) = (1 + √2)·X_12(5,i)

• ∀i ∈ {2,3,6}, X_12(5,i) = −(1 + √2)·X_12(2,i)

Therefore, a fast implementation of the product of X_12 by a 6x1 vector V = [v_1 v_2 v_3 v_4 v_5 v_6]^t comprises the following operations:

• O(1) = X_12(3,1)·v_1 + X_12(3,4)·v_4 + X_12(3,5)·v_5

• O(2) = X_12(3,2)·v_2 + X_12(3,3)·v_3 + X_12(3,6)·v_6

• O(3) = X_12(4,1)·v_1 + X_12(4,4)·v_4 + X_12(4,5)·v_5

• O(4) = X_12(4,2)·v_2 + X_12(4,3)·v_3 + X_12(4,6)·v_6

• O(5) = X_12(2,2)·(v_2 − v_3 − v_6) = A_12(4,2)·(v_2 − v_3 − v_6)

• O(6) = X_12(2,2)·(v_1 − v_4 − v_5) = A_12(4,2)·(v_1 − v_4 − v_5)

• u_3 = O(1) + O(2)

• u_4 = O(3) + O(4)

• OO(1) = O(1) + O(3)

• OO(2) = O(2) + O(4)

• OO(3) = O(1) − O(3)

• OO(4) = O(2) − O(4)

• u_1 = (1/√2)·( OO(1) − OO(2) )

• u_6 = (1/√2)·( OO(3) − OO(4) )

• u_2 = O(5) + (1 + √2)·O(6)

• u_5 = O(6) − (1 + √2)·O(5)

A compact view of such computations is provided in the form of a butterfly diagram illustrated on figure 10.

Therefore, a fast implementation of the transform A_12, as illustrated on figure 11, can be performed using a fast implementation of the transform of size 6 (A_6), applied to linear combinations of the input signal (x_1, x_2, ..., x_12)^t to obtain the odd lines (t_1, t_3, t_5, t_7, t_9, t_11) of the transformed vector, and the fast implementation of the complementary matrix X_12 disclosed above, applied to linear combinations of the input signal to obtain the even lines (t_2, t_4, t_6, t_8, t_10, t_12) of the transformed signal, with (t_2, t_6, t_10, t_4, t_8, t_12) corresponding respectively to the output signal (u_1, u_2, u_3, u_4, u_5, u_6) of the product by X_12 disclosed above.
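The fast X_12 product can be sketched and checked against a direct matrix product as below (not part of the original disclosure). The matrix entries X_12(k,j) = cos((2j−1)·(2k−1)·π/24) and the function names are assumptions based on the reconstruction above.

```python
import math

cos, PI = math.cos, math.pi

# X12[r][c] = cos((2c+1)*(2r+1)*pi/24): even lines of the unnormalized A12
X12 = [[cos((2 * c + 1) * (2 * r + 1) * PI / 24) for c in range(6)]
       for r in range(6)]

def x12_fast(v):
    """Fast product X12 * v exploiting the line relationships of figure 9."""
    v1, v2, v3, v4, v5, v6 = v
    # Lines 3 and 4 of X12, split over the column sets {1,4,5} and {2,3,6}
    o1 = X12[2][0] * v1 + X12[2][3] * v4 + X12[2][4] * v5
    o2 = X12[2][1] * v2 + X12[2][2] * v3 + X12[2][5] * v6
    o3 = X12[3][0] * v1 + X12[3][3] * v4 + X12[3][4] * v5
    o4 = X12[3][1] * v2 + X12[3][2] * v3 + X12[3][5] * v6
    o5 = X12[1][1] * (v2 - v3 - v6)     # X12(2,2) factored out
    o6 = X12[1][1] * (v1 - v4 - v5)
    u3, u4 = o1 + o2, o3 + o4
    oo1, oo2, oo3, oo4 = o1 + o3, o2 + o4, o1 - o3, o2 - o4
    inv_sqrt2 = 1.0 / math.sqrt(2.0)
    u1 = inv_sqrt2 * (oo1 - oo2)
    u6 = inv_sqrt2 * (oo3 - oo4)
    u2 = o5 + (1.0 + math.sqrt(2.0)) * o6
    u5 = o6 - (1.0 + math.sqrt(2.0)) * o5
    return [u1, u2, u3, u4, u5, u6]

def x12_direct(v):
    """Reference: straightforward 6x6 matrix-vector product."""
    return [sum(X12[r][c] * v[c] for c in range(6)) for r in range(6)]

v = [1.0, -2.0, 0.5, 3.0, -1.5, 2.0]
assert all(abs(a - b) < 1e-9 for a, b in zip(x12_fast(v), x12_direct(v)))
```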

In the same way as for the inverse DCT with size 6, the inverse DCT for size 12 is obtained with the following matrix:

S_12 = A_12^{-1} = A_12^t

In the same way as disclosed above, it can be shown that P_c(S_12) represents a permutation of the columns of the matrix S_12, basically grouping odd columns on one side and even columns on the other side. Thus the implementation of a fast inverse DCT with size 12 can be determined recursively by re-using the fast implementation of the inverse DCT with size 6 and the previously described multiplication by the matrix X_12. Once these sub-processes at size 6 are done, the inverse transform at size 12 is obtained by combining the resulting size-6 sub-results in the same way as already presented for the butterfly implementation of the inverse transform with size 6:

P_c(DCT^{-1}(X)) = P_c(S_12 × X) = [ (a'_1 + x'_1) ... (a'_6 + x'_6) (a'_1 − x'_1) ... (a'_6 − x'_6) ]^t

where:

(a'_1 ... a'_6)^t = √(2/6) · A_6^t · (t_1, t_3, t_5, t_7, t_9, t_11)^t

(x'_1 ... x'_6)^t = √(2/6) · X_12 · (t_2, t_4, t_6, t_8, t_10, t_12)^t

Butterfly implementations for matrix transform and inverse matrix transform with size 24

Below, the fast implementation of the DCT transform with size 24, according to the present principles, is disclosed. A compact view of such computations is illustrated on figure 12.

The butterfly version of the DCT with size 24 is constructed in a similar way as for size 12. First, the DCT matrix for size 24 is noted:

A_24 = ( cos( (2j+1)·k·π / 48 ) )_{k,j ∈ [0,23]}

Then it can be shown that P_l(A_24) decomposes into A_12 and a complementary transform, where the 12x12 matrix X_24 is defined as:

X_24 = ( cos( (2k+1)·(2j+1)·π / 48 ) )_{k,j ∈ [0,11]}

The matrix X_24 can thus be written as X_24 = cos(M), with M = ( (2k+1)·(2j+1)·π / 48 )_{k,j ∈ [0,11]}, if we define the cosine of a matrix M = (m_ij)_{i,j ∈ [1,n]} as the matrix cos(M) = (cos(m_ij))_{i,j ∈ [1,n]}.

Linear relationships that exist between lines and columns in the matrix X_24 are identified as follows:

• cos(7π/48) = cos(π/4)·( cos(5π/48) + cos(19π/48) )

• cos(9π/48) = cos(π/4)·( cos(3π/48) + cos(21π/48) )

• cos(11π/48) = cos(π/4)·( cos(π/48) + cos(23π/48) )

• cos(13π/48) = cos(π/4)·( cos(π/48) − cos(23π/48) )

• cos(15π/48) = cos(π/4)·( cos(3π/48) − cos(21π/48) )

• cos(17π/48) = cos(π/4)·( cos(5π/48) − cos(19π/48) )

The following linear dependencies between lines and columns are advantageously used to perform the DCT with size 24, as follows:

• I+ = {1,4,5,8,9,12}

• I− = {2,3,6,7,10,11}

• L_idx = {1,2,3,10,11,12}

• ∀j ∈ L_idx, l_pos(j) = Σ_{i∈I+} X_24(j,i)·v_i and ∀j ∈ L_idx, l_neg(j) = Σ_{i∈I−} X_24(j,i)·v_i

• l_final(4) = (1/√2)·( (l_pos(3) + l_pos(10)) − (l_neg(3) + l_neg(10)) )

• l_final(5) = (1/√2)·( (l_pos(2) + l_pos(11)) − (l_neg(2) + l_neg(11)) )

• l_final(6) = (1/√2)·( (l_pos(1) + l_pos(12)) − (l_neg(1) + l_neg(12)) )

• l_final(7) = (1/√2)·( (l_pos(1) − l_pos(12)) − (l_neg(1) − l_neg(12)) )

• l_final(8) = (1/√2)·( (l_pos(2) − l_pos(11)) − (l_neg(2) − l_neg(11)) )

• l_final(9) = (1/√2)·( (l_pos(3) − l_pos(10)) − (l_neg(3) − l_neg(10)) )

• ∀j ∈ L_idx, l_final(j) = l_pos(j) + l_neg(j)

A similar construction as for the inverse DCT with size 12 can be used for determining the inverse DCT transform with size 24. Thus, the fast inverse DCT transform with size 24 can be designed in a recursive fashion by re-using the inverse DCT butterfly method for size 12 and the method provided above for the X_24 matrix operator.

General butterfly implementation

The principle has been disclosed above for fast DCT transforms with sizes equal to 3, 6, 12 and 24. In a more general case, a fast integer DCT transform implementation for block sizes equal to 3·2^n, n ≥ 0, i.e. for blocks having a size multiple of 3 in at least one dimension (width, height), can be designed in a recursive way, as illustrated on figure 13.

Such a fast integer DCT transform implementation for blocks having a size N multiple of 3 in at least one dimension can be designed in the general case, for N > 3, using a butterfly implementation of the transform matrix A_N. Such a butterfly implementation, illustrated on figure 13, is based on a matrix P_l(A_N), wherein the N/2 first lines of P_l(A_N) correspond to the odd lines of A_N and the N/2 last lines of P_l(A_N) correspond to the even lines of A_N.

Furthermore, the matrix P_l(A_N) can be represented by:

P_l(A_N) = [ A_{N/2}   Ā_{N/2}
             X_N      −X̄_N ]

where Ā_{N/2} represents a vertically flipped version of the matrix A_{N/2}, and −X̄_N represents the opposite of the vertically flipped version of a complementary matrix transform X_N. Thus, it is possible to re-use the butterfly implementation designed for the matrix A_{N/2}, by applying the fast implementation of the transform of size N/2 (A_{N/2}) to linear combinations of the input signal (x_i)_{i=1,...,N} to obtain the odd lines of the transformed vector.

The complementary matrix transform X_N is represented by:

X_N = ( cos( (2k+1)·(2j+1)·π / (2N) ) )_{k,j ∈ [0, N/2−1]}

Thanks to the properties of the cosine function, dependencies between lines and columns of the matrix X_N can be determined and advantageously used for designing a fast implementation of X_N, to be applied to linear combinations of the input signal (x_i)_{i=1,...,N} to obtain the even lines of the transformed signal.

These linear dependencies between lines and columns of X_N result from the following generic relationship:

cos( (2j+1)π/4 − (2k+1)(2j+1)π/(2N) )
  = cos( (2j+1)π/4 )·cos( (2k+1)(2j+1)π/(2N) ) + sin( (2j+1)π/4 )·sin( (2k+1)(2j+1)π/(2N) )
  = cos( (2j+1)π/4 )·cos( (2k+1)(2j+1)π/(2N) ) ± cos( (2j+1)π/4 )·cos( (N − (2k+1)(2j+1))π/(2N) )

since sin( (2j+1)π/4 ) = ±cos( (2j+1)π/4 ) and sin( (2k+1)(2j+1)π/(2N) ) = cos( (N − (2k+1)(2j+1))π/(2N) ).

Since N is even and (2k+1)(2j+1) is odd, (N − (2k+1)(2j+1)) is also odd; thus the term cos( (N − (2k+1)(2j+1))π/(2N) ) corresponds to a member of the matrix X_N located on a line different from line (k+1).
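The recursive construction described above can be sketched as below (not part of the original disclosure). For clarity, the X_N product is left as a direct matrix product; in the butterfly designs of figures 7 to 13 it would itself be replaced by a fast version. The function names and the unnormalized cosine scaling are assumptions.

```python
import math

cos, PI = math.cos, math.pi

def dct_matrix(n):
    """Unnormalized DCT-II matrix A_N (assumed scaling)."""
    return [[cos((2 * j + 1) * k * PI / (2 * n)) for j in range(n)]
            for k in range(n)]

def x_matrix(n):
    """Complementary matrix X_N = cos((2k+1)(2j+1)*pi/(2N)), k,j in [0, N/2-1]."""
    return [[cos((2 * k + 1) * (2 * j + 1) * PI / (2 * n)) for j in range(n // 2)]
            for k in range(n // 2)]

def dct_recursive(x):
    """Recursive DCT for sizes 3*2^n: odd lines via A_{N/2}, even lines via X_N."""
    n = len(x)
    if n == 3:
        # Size-3 butterfly of figure 4 (base case of the recursion)
        e1, e2 = x[0] + x[2], x[0] - x[2]
        return [e1 + x[1], e2 * cos(PI / 6), 0.5 * e1 - x[1]]
    half = n // 2
    s = [x[i] + x[n - 1 - i] for i in range(half)]   # Y1 + flipped(Y2)
    d = [x[i] - x[n - 1 - i] for i in range(half)]   # Y1 - flipped(Y2)
    t_odd = dct_recursive(s)
    xn = x_matrix(n)
    t_even = [sum(xn[k][j] * d[j] for j in range(half)) for k in range(half)]
    t = [0.0] * n
    t[0::2], t[1::2] = t_odd, t_even                 # interleave odd/even lines
    return t

# Check against direct matrix products for N = 3, 6, 12, 24
for n in (3, 6, 12, 24):
    x = [float((7 * i) % 5 - 2) for i in range(n)]
    a = dct_matrix(n)
    ref = [sum(a[k][j] * x[j] for j in range(n)) for k in range(n)]
    assert all(abs(p - q) < 1e-9 for p, q in zip(dct_recursive(x), ref))
```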

Figure 14 illustrates a block diagram of an exemplary decoder according to an embodiment of the present principles. A bitstream representative of a coded image or video comprises coded data representative of at least one block of said image or video, wherein said block has been coded according to an embodiment of the present principles.

The coded data is passed to the video decoding modules of the video decoder 30. As illustrated in figure 14, coded data is passed to an entropy decoding module that performs entropy decoding and delivers quantized coefficients QCOEF' to an inverse quantization module and syntax elements to a prediction module.

The quantized coefficients QCOEF' are inverse quantized by the inverse quantization module and inverse transformed by an inverse transform module delivering residual blocks data RES'. A block to be reconstructed may have been coded with a size equal to 3·2^n in at least one dimension. According to the present principles, the inverse transform module is configured to operate on such blocks by applying a 2D transform with size 3·2^n in width or height. The inverse transform module is thus configured to implement one of the fast inverse 1D DCT transforms disclosed above, according to the size of the block.

The prediction module builds prediction blocks PRED according to the syntax elements and using a motion compensation module if a current block has been inter-predicted, or an intra prediction module if the current block has been spatially predicted.

A reconstructed picture I' is obtained by adding prediction blocks PRED and residual blocks RES'. The reconstructed picture I' is stored in a reference frame memory for later use as a reference frame. The reconstructed picture I' is then output by the video decoder 30. The decoder 30 may be implemented as hardware or software or a combination thereof.

Figure 15 illustrates a flow diagram of an exemplary method for encoding a video according to an embodiment of the present principles. According to this embodiment, at least one block BLK of a picture of the video has a size N which is not a power of 2 along at least one dimension.

According to a particular embodiment, N is a multiple of 3 and can be written as 3·2^n, n ≥ 0. In step 40, a predicted block is determined for the current block BLK. The predicted block can be determined according to classical block prediction methods (intra or inter prediction).

According to the embodiment disclosed herein, the predicted block's size is equal to the size of the block BLK.

In step 41 , a residual block is obtained by computing a difference between the current block BLK and the predicted block. The residual block, thus, has a size N along at least one dimension.

In step 42, block transform of the residual block is performed. The block transform is performed by applying a 2D separable DCT transform, i.e. by applying a 1D DCT transform on the lines of the residual block, and then a 1D DCT transform on the columns of the residual block. If the lines, respectively columns, of the residual block have a size equal to 3·2^n, n ≥ 0, a fast 1D DCT transform implementation as disclosed above is used. Otherwise, if the lines, respectively columns, of the residual block have a size equal to a power of 2, known fast 1D DCT transform implementations are used.
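The 2D separable application of the 1D transforms can be sketched as below for a rectangular block whose height is a multiple of 3 and whose width is a power of 2 (not part of the original disclosure; direct matrix products and an unnormalized cosine matrix are used for clarity, and the function names are hypothetical).

```python
import math

def dct_matrix(n):
    """Unnormalized DCT-II matrix (assumed scaling)."""
    return [[math.cos((2 * j + 1) * k * math.pi / (2 * n)) for j in range(n)]
            for k in range(n)]

def dct_2d(block):
    """2D separable DCT on a rectangular block: a 1D DCT is applied along
    one dimension, then along the other (the order does not matter)."""
    h, w = len(block), len(block[0])
    ah, aw = dct_matrix(h), dct_matrix(w)
    # Transform the columns: T1 = Ah * block
    t1 = [[sum(ah[k][i] * block[i][j] for i in range(h)) for j in range(w)]
          for k in range(h)]
    # Transform the lines: T = T1 * Aw^t
    return [[sum(t1[k][j] * aw[l][j] for j in range(w)) for l in range(w)]
            for k in range(h)]

# Example: a 6x4 residual block (height 6 = 3*2, width 4 = 2^2)
block = [[float(i + j) for j in range(4)] for i in range(6)]
coeffs = dct_2d(block)
# For the unnormalized transform, the DC coefficient is the sum of all samples
assert abs(coeffs[0][0] - sum(sum(r) for r in block)) < 1e-9
```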

In step 43, the transformed residual block is then quantized and entropy coded.

Figure 16 illustrates a flow diagram of an exemplary method for decoding a video according to an embodiment of the present principles. According to this embodiment, at least one block BLK of a picture of the video has a size N which is not a power of 2 along at least one dimension.

According to a particular embodiment, N is a multiple of 3 and can be written as 3·2^n, n ≥ 0. The current block BLK is reconstructed as follows.

In step 50, a transformed residual block is entropy decoded from a bitstream and inverse quantized. The transformed residual block's size is equal to the size of the current block BLK, and it comprises decoded data for the current block BLK to reconstruct.

In step 51, inverse block transform is performed on the transformed residual block. The inverse block transform is performed by applying a 2D separable inverse DCT transform, i.e. by applying a 1D inverse DCT transform on the lines of the transformed residual block, and then a 1D inverse DCT transform on the columns of the transformed residual block. If the lines, respectively columns, of the transformed residual block have a size equal to 3·2^n, n ≥ 0, a fast 1D inverse DCT transform implementation as disclosed above is used. Otherwise, if the lines, respectively columns, of the transformed residual block have a size equal to a power of 2, known fast 1D inverse DCT transform implementations are used.

Inverse block transform delivers a residual block with a size equal to the size of the transformed residual block.

In step 52, a predicted block is determined for the current block BLK to reconstruct. The predicted block can be determined according to classical block prediction methods (intra or inter prediction). According to the embodiment disclosed herein, the predicted block has the same size as the current block BLK.

In step 53, the current block BLK is reconstructed by adding the predicted block to the residual block.

Figure 17 illustrates an exemplary encoder that may be used in one embodiment of the present principles. Such an apparatus for encoding a video is configured to implement the method for encoding a video according to the present principles. The encoder apparatus of figure 17 may be, for example, the encoder 20 described in figure 2.

In the example shown in figure 17, the encoder apparatus comprises a processing unit PROC equipped for example with a processor and driven by a computer program PG stored in a memory MEM and implementing the method for encoding a video according to the present principles.

At initialization, the code instructions of the computer program PG are for example loaded into a RAM (not shown) and then executed by the processor of the processing unit PROC. The processor of the processing unit PROC implements the steps of the method for encoding a video described above, according to the instructions of the computer program PG.

Optionally, the encoder apparatus 20 comprises a communications unit COM to transmit an encoded bitstream to a decoder.

The encoder apparatus 20 also comprises an interface for receiving a picture to be coded, or a video.

Figure 18 illustrates an exemplary decoder that may be used in one embodiment of the present principles. Such an apparatus for decoding a video is configured to implement the method for decoding a video according to the present principles. The decoder apparatus of figure 18 may be, for example, the decoder 30 described in figure 14.

In the example shown in figure 18, the decoder apparatus comprises a processing unit PROC equipped for example with a processor and driven by a computer program PG stored in a memory MEM and implementing the method for decoding a video according to the present principles.

At initialization, the code instructions of the computer program PG are for example loaded into a RAM (not shown) and then executed by the processor of the processing unit PROC. The processor of the processing unit PROC implements the steps of the method for decoding a video described above, according to the instructions of the computer program PG.

Optionally, the decoder apparatus 30 comprises a communications unit COM to receive an encoded bitstream from an encoder.

The decoder apparatus 30 also comprises an interface for displaying a reconstructed picture or a reconstructed video.