Title:
DECODING OF MULTI-KERNEL POLAR CODES
Document Type and Number:
WIPO Patent Application WO/2019/007495
Kind Code:
A1
Abstract:
Provided is a procedure for sequentially decoding a polar code. The procedure comprises propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory. The procedure is continued by determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, and propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.

Inventors:
BIOGLIO VALERIO (DE)
LAND INGMAR (DE)
GABRY FREDERIC (DE)
Application Number:
PCT/EP2017/066747
Publication Date:
January 10, 2019
Filing Date:
July 05, 2017
Assignee:
HUAWEI TECH CO LTD (CN)
BIOGLIO VALERIO (DE)
International Classes:
H03M13/13
Other References:
FREDERIC GABRY ET AL: "Multi-Kernel Construction of Polar Codes", arXiv.org, Cornell University Library, 19 December 2016 (2016-12-19), XP080745345
LE GAL BERTRAND ET AL: "Software polar decoder on an embedded processor", 2014 IEEE Workshop on Signal Processing Systems (SiPS), IEEE, 20 October 2014 (2014-10-20), pages 1-6, XP032709393, DOI: 10.1109/SIPS.2014.6986083
CAMILLE LEROUX ET AL: "Hardware architectures for Successive Cancellation Decoding of Polar Codes", arXiv.org, Cornell University Library, 12 November 2010 (2010-11-12), XP080462225, DOI: 10.1109/ICASSP.2011.5946819
ARIKAN: "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels", IEEE Transactions on Information Theory, July 2009 (2009-07-01)
GABRY ET AL.: "Multi-Kernel Construction of Polar Codes", arXiv:1612.06099, December 2016 (2016-12-01)
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims:
CLAIMS

1. A decoder for decoding a multi-kernel polar code, the decoder being configured to sequentially determine decoded bit values, wherein the decoder is further configured to: propagate statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, a kernel unit being configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage; determine first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage; propagate the first decoded bit values through a subset of the multiple decoding stages and store first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of the decoder; determine second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values; and propagate the second decoded bit values through a subset of the multiple decoding stages and store second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums are to consume memory space gained by releasing the first memory elements.

2. The decoder of claim 1, wherein the decoder is configured to: store output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory; and replace a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.

3. The decoder of claim 1 or 2, wherein a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.

4. The decoder of any one of claims 1 to 3, wherein different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.

5. The decoder of any one of claims 1 to 4, wherein the number of input statistical values of a kernel unit is two, three, five, or more.

6. The decoder of any one of claims 1 to 5, wherein a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.

7. The decoder of any one of claims 1 to 6, wherein the decoder is configured to sequentially overwrite the first partial sums with the second partial sums.

8. A method of sequentially decoding a polar code, the method comprising: propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes, a kernel unit being configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage; determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage; propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of a decoder; determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values; and propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.

9. The method of claim 8, comprising: storing output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory; and replacing a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.

10. The method of claim 8 or 9, wherein a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.

11. The method of any one of claims 8 to 10, wherein different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.

12. The method of any one of claims 8 to 11, wherein the number of input statistical values of a kernel unit is two, three, five, or more.

13. The method of any one of claims 8 to 12, wherein a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.

14. The method of any one of claims 8 to 13, comprising: sequentially overwriting the first partial sums with the second partial sums.

15. The decoder/method of any one of the preceding claims wherein the statistical values are one of:

log-likelihood ratios;

likelihood ratios; or

likelihoods.

Description:
DECODING OF MULTI-KERNEL POLAR CODES

FIELD

The present disclosure relates to multi-kernel polar codes. In particular, the present disclosure relates to a procedure for decoding multi-kernel polar codes.

BACKGROUND

Polar codes, described by Arikan in "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE TRANSACTIONS ON INFORMATION THEORY, July 2009, define a new class of forward error correction codes which are based on the polarization effect of Kronecker products of a binary matrix T_2, which is called the "kernel" of the polar code. The polarization effect causes bits at some positions of an encoding vector to be more reliably decodable (after encoding and transmission) than others.

The polarization effect can be exploited by using the more reliable positions for carrying information bits, while "freezing" the bits at less reliable positions, such that the decoder is aware of the values of the "frozen bits" when decoding. Given an encoding vector u of length N comprising K information bits and N − K "frozen bits", a codeword x of length N may be calculated as x = u · G_N, where G_N = T_2^⊗n (the n-fold Kronecker product of the kernel) is the transformation matrix of the polar code. Thus, while the number of information bits K can be freely chosen, possible code lengths are restricted to N = 2^n.

To mitigate the code length problem, multi-kernel polar codes have been proposed which are constructed based on kernels of different sizes (cf. Gabry et al. in "Multi-Kernel Construction of Polar Codes," arXiv:1612.06099, Dec. 2016), as the polarization effect remains valid for transformation matrices of the type G_N = T_{n_1} ⊗ … ⊗ T_{n_s} (where s is the number of kernels), which enables codes of length N = n_1 · … · n_s. However, decoding multi-kernel polar codes using a recursive successive cancellation (SC) algorithm, as described by Gabry et al. in "Multi-Kernel Construction of Polar Codes," arXiv:1612.06099, Dec. 2016, may require large amounts of memory.

SUMMARY

In the following, an improved procedure for sequentially decoding multi-kernel polar codes is described which allows reducing the memory requirement from O(N log N) to O(N).

According to a first aspect of the present invention, there is provided a decoder for decoding a multi-kernel polar code. The decoder is configured to sequentially determine decoded bit values by propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes. Each of the multiple kernel units is configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage.

The decoder is further configured to determine first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagate the first decoded bit values through a subset of the multiple decoding stages and store first partial sums, determined from the propagated first decoded bit values, in first memory elements of a memory. The decoder is further configured to determine second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, and propagate the second decoded bit values through a subset of the multiple decoding stages and store second partial sums, determined from the propagated second decoded bit values, in the memory, wherein the stored second partial sums are to consume memory space gained by releasing the first memory elements.

By using memory space gained by releasing the first memory elements, an overall amount of memory required for decoding can be kept below a pre-determined value and constant after having determined the first decoded bit values. The releasing may be performed when a statistical value/partial sum is no longer needed for carrying out the remaining steps of the decoding procedure and before a new statistical value/partial sum has been determined that requires the memory space gained by the releasing.

In other words, the decoding procedure may perform decoding steps in an order which minimizes the amount of required memory by selecting a next decoding step from a set of possible decoding steps such that statistical values and partial sums required for decoding a next bit value are prioritized while statistical values and partial sums not required for decoding the next bit value are delayed until the next bit value has been decoded. By this, a "decoding path" winding past the kernel units of the decoding stages may be generated/followed which ensures a "just-in-time" calculation of statistical values and partial sums, while the bit values are successively decoded.

In this regard, it is noted that the term "decoding stage" as used throughout the description and claims in particular refers to hardware, software, or a combination of hardware and software which implements a plurality of kernel units of size p, wherein each kernel unit corresponds to a kernel T_p and performs an update on statistical values and/or partial sums. In case of updating statistical values, a kernel unit may take p statistical values and p partial sums as input and output p statistical values. In case of updating partial sums, a kernel unit may take p partial sums as input and output p partial sums.

Moreover, the term "memory element" as used throughout the description and claims in particular refers to a register or an address of a physical memory device such as, for example, a random-access memory (RAM). In particular, the size of a memory element may correspond to the size of the format in which statistical values/partial sums are stored.

In a first possible implementation form of the decoder according to the first aspect, the decoder is configured to store output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages, in memory elements of the memory. Furthermore, the decoder is configured to replace a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.

Hence, statistical values that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements.

In a second possible implementation form of the decoder according to the first aspect, a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding decoding stage to the succeeding decoding stage.

Thus, kernel units may be operated successively, thereby further reducing hardware requirements, as hardware may be used to successively perform calculations of different kernel units.

In a third possible implementation form of the decoder according to the first aspect, different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values.

For instance, a first decoding stage may comprise e kernel units, wherein each kernel unit of the first decoding stage is to receive f input statistical values, while a second decoding stage may comprise g kernel units, wherein each kernel unit of the second decoding stage is to receive h input statistical values with e · f = g · h. Each kernel in the transformation matrix may have a kernel unit assigned thereto, wherein the number of input values of a kernel unit corresponds to the size of the kernel to which the kernel unit is assigned.

This structure may facilitate applying/adapting the decoding procedure to different multi-kernel codes.

In a fourth possible implementation form of the decoder according to the first aspect, the number of input statistical values of a kernel unit is two, three, five, or more.

For example, each kernel unit of the first decoding stage may be to receive two input statistical values while each kernel unit of the second decoding stage may be to receive three input statistical values, or vice versa.

In a fifth possible implementation form of the decoder according to the first aspect, a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.

For instance, if a penultimate decoding stage comprises kernel units of size three and the memory comprises two memory elements dedicated to storing the partial sums of the ultimate decoding stage, there may be six memory elements dedicated to storing the partial sums of the penultimate decoding stage.

In a sixth possible implementation form of the decoder according to the first aspect, the decoder is configured to sequentially overwrite the first partial sums with the second partial sums.

Hence, partial sums that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements for storing the second partial sums. In particular, the memory may be adapted to only provide memory space for either the first or second partial sums/statistical values.

In a seventh possible implementation form of the decoder according to the first aspect, the statistical values are one of log-likelihood ratios, likelihood ratios, or likelihoods.

According to a second aspect of the present invention, there is provided a method of sequentially decoding a polar code. The method comprises propagating statistical values representing initial estimates of codeword bits received via a noisy channel through multiple decoding stages comprising multiple kernel units representing polar code kernels of different sizes. Each kernel unit is configured to determine an output statistical value based on one or more input statistical values, wherein the output statistical value of a kernel unit of a preceding decoding stage serves as an input statistical value of a kernel unit of a succeeding decoding stage.

The method further comprises determining first decoded bit values based on output statistical values of a kernel unit of an ultimate decoding stage, propagating the first decoded bit values through a subset of the multiple decoding stages and storing first partial sums determined from the propagated first decoded bit values in first memory elements of a memory of a decoder. The method further comprises determining second decoded bit values based on the first stored partial sums and at least some of the propagated statistical values, propagating the second decoded bit values through a subset of the multiple decoding stages and storing second partial sums determined from the propagated second decoded bit values in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.

The method may involve the decoder according to the first aspect and achieve the same/similar advantages. Hence, any disclosure herein relating to the decoder is intended to also relate to the method and vice-versa, unless specified otherwise, or inapplicable.

In a first possible implementation form of the method according to the second aspect, the method comprises storing output statistical values of kernel units involved in propagating the statistical values through the multiple decoding stages in memory elements of the memory and replacing a first output statistical value which is based on the one or more input statistical values with a second output statistical value which is based on the one or more input statistical values.

Hence, as indicated above, statistical values that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements.

In a second possible implementation form of the method according to the second aspect, a number of kernel units involved in propagating the statistical values to determine the first decoded bit values decreases from the preceding stage to the succeeding stage.

Thus, as indicated above, kernel units may be operated successively, thereby further reducing hardware requirements, as hardware may be used to successively perform calculations of different kernel units.

In a third possible implementation form of the method according to the second aspect, different decoding stages comprise different numbers of kernel units, wherein kernel units of the different decoding stages differ in regard to the number of input statistical values. As indicated above, this structure may facilitate applying/adapting the decoding procedure to different multi-kernel codes.

In a fourth possible implementation form of the method according to the second aspect, the number of input statistical values of a kernel unit is two, three, five, or more.

In a fifth possible implementation form of the method according to the second aspect, a number of memory elements dedicated to storing partial sums of an ultimate decoding stage is smaller than a number of memory elements dedicated to storing partial sums of a penultimate decoding stage, wherein a ratio of the numbers is equal to a number of input values of a kernel unit of the penultimate decoding stage.

In a sixth possible implementation form of the method according to the second aspect, the method comprises sequentially overwriting the first partial sums with the second partial sums.

Hence, as indicated above, partial sums that are no longer needed in the decoding procedure may be overwritten, thereby reducing the overall requirement for memory space, as compared to reserving/allocating further memory elements for storing the second partial sums. In particular, the memory may be adapted to only provide memory space for either the first or second partial sums/statistical values.

In a seventh possible implementation form of the method according to the second aspect, the statistical values are one of log-likelihood ratios, likelihood ratios, or likelihoods.

Furthermore, some or all steps of the method may be performed by a processor in accordance with instructions persistently stored on a tangible machine-readable medium according to a third aspect of the present disclosure.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 shows a block diagram illustrating a generic digital communication system in which elements of the present disclosure may be implemented.

Fig. 2 shows a flow-chart of steps of a sequential decoding procedure.

Fig. 3 shows a block diagram of a kernel unit 30.

Fig. 4 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G_12 = T_2 ⊗ T_2 ⊗ T_3.

Fig. 5 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G_12 = T_2 ⊗ T_3 ⊗ T_2.

Fig. 6 illustrates a transformation matrix (a), a Tanner graph (b), and a memory structure (c) of a decoding procedure for a polar code governed by G_12 = T_3 ⊗ T_2 ⊗ T_2.

DETAILED DESCRIPTION

The following provides a non-limiting example of a sequential decoding procedure for multi-kernel polar codes. The example illustrates the sequential decoding procedure for kernels of size 2 and 3 which allow constructing multi-kernel polar codes with a block length of N = 2^{n_2} · 3^{n_3}. However, the sequential decoding procedure may also involve kernels of other sizes. As classical polar codes (having only kernels of the same size) are a subclass of multi-kernel polar codes, the proposed sequential decoding procedure may also be used to decode classical polar codes.

Fig. 1 shows a block diagram illustrating a generic digital communication system 10 in which the proposed sequential decoding procedure may be implemented. The system 10 includes a transmitting side, comprising an encoder 12, and a receiving side, comprising a decoder 14. The input of the encoder 12 at the transmitting side may be an encoding vector u of length N comprising K information bits and N − K "frozen bits," from which the encoder 12 calculates a codeword x.

The codeword x may be forwarded to a modulator 16 which may transform the codeword x into a modulated signal vector CH_IN. The modulated signal vector CH_IN may in turn be transmitted through a channel 18 such as, for example, a wired or wireless channel to a demodulator 20. Since the channel 18 is usually subject to noisy disturbances, the channel output CH_OUT may differ from the channel input CH_IN.

At the receiving side, the channel output vector CH_OUT may be processed by the demodulator 20. The demodulator 20 may produce statistical values such as, for example, log-likelihood ratios (LLRs), likelihood ratios (LRs), or likelihoods (Ls) which are to indicate probabilities with which the channel output vector CH_OUT corresponds to a particular bit sequence. The decoder 14 may use the redundancy in the codeword x in a sequential decoding procedure to decode the K information bits.
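As an illustration of how such statistical values may be obtained, the following sketch computes channel LLRs for BPSK transmission over an AWGN channel; the bit mapping (0 → +1, 1 → −1), the noise-variance parameter, and the function name are assumptions made for this example and are not prescribed by the present disclosure.

```python
def bpsk_awgn_llrs(received, noise_var):
    # Channel LLR of each received sample y_i for BPSK over AWGN:
    # LLR_i = log(P(bit = 0 | y_i) / P(bit = 1 | y_i)) = 2 * y_i / noise_var
    return [2.0 * y / noise_var for y in received]
```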

The encoding and decoding procedures may be governed by a multi-kernel polar code. The encoding procedure in the encoder 12 may hence be based on rows of a transformation matrix G_N. The proposed sequential decoding procedure in the decoder 14 may comprise the steps shown in Fig. 2, which may be implemented by customized hardware (e.g., an FPGA) or a processor.

At step 20, statistical values may be propagated through multiple decoding stages 40-42 (cf. Fig. 4b, in which the statistical values are LLRs). Each decoding stage may comprise one or more kernel units 30 (cf. Fig. 3) of the same size. Each kernel unit 30 may represent a kernel of the transformation matrix G_N.

As shown in Fig. 3, a kernel unit 30 may be configured to determine output statistical values based on input statistical values. For example, a kernel unit of size p may perform an update of the LLRs and the partial sums (PSs). In case of updating LLRs, the kernel unit 30 may take p LLRs and p PSs as input and output p LLRs, wherein the output LLRs may be used for decoding different information bits. In case of updating PSs, the kernel unit may take p PSs as input and output p PSs.

In Fig. 3, the input for updating the LLRs is received on the right side of the kernel unit 30 while the output is provided on the left side of the kernel unit 30. For updating PSs, the input is received on the left side of the kernel unit 30 and the output is provided on the right side of the kernel unit 30. The operations to be performed on the input depend on the transformation matrix T_p defining the kernel and the position of the information bit to be decoded.

As an example, the update operations performed by two kernels of sizes two and three are described in the following:

The process implemented by a kernel unit of a kernel T_2 = (1 0; 1 1) when used in the decoding procedure may be described as follows: The LLR update function of the kernel unit takes two LLRs, L_0 and L_1, and two PSs, x_0 and x_1, as input and calculates two different LLRs as λ_0 = L_0 ⊞ L_1 and λ_1 = (−1)^{x_0} · L_0 + L_1, with a ⊞ b = 2 tanh^{−1}(tanh(a/2) · tanh(b/2)). In a similar manner, the PS update function of the kernel unit takes two PSs, u_0 and u_1, as input and calculates two different PSs as x_0 = u_0 ⊕ u_1 and x_1 = u_1.

The process implemented by a kernel unit of a kernel T_3 = (1 1 1; 1 0 1; 0 1 1) when used in the decoding procedure may be described as follows: The LLR update function of the kernel unit takes three LLRs, L_0, L_1, and L_2, and three PSs, x_0, x_1, and x_2, as input and calculates three different LLRs as λ_0 = L_0 ⊞ L_1 ⊞ L_2, λ_1 = (−1)^{x_0} · L_0 + (L_1 ⊞ L_2) and λ_2 = (−1)^{x_0} · L_1 + (−1)^{x_0 ⊕ x_1} · L_2. In a similar manner, the PS update function of the kernel unit takes three PSs, u_0, u_1, and u_2, as input and calculates three different PSs as x_0 = u_0 ⊕ u_1, x_1 = u_0 ⊕ u_2 and x_2 = u_0 ⊕ u_1 ⊕ u_2.

As shown in Fig. 4b, illustrating the Tanner graph of a polar code governed by a transformation matrix of the form G_12 = T_2 ⊗ T_2 ⊗ T_3, the output statistical values of the kernel units 30 of a preceding decoding stage 40 serve as input statistical values of the kernel units 30 of a succeeding decoding stage 41.
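A minimal sketch of these kernel operations is given below. The function names and the rule-selection argument are illustrative choices rather than part of the present disclosure; the formulas follow the update equations above, and the comments note that hardware implementations often replace ⊞ with a min-sum approximation to avoid numerical overflow.

```python
import math

def boxplus(a, b):
    # a ⊞ b = 2 * atanh(tanh(a/2) * tanh(b/2)); for very large LLRs the
    # min-sum approximation sign(a)*sign(b)*min(|a|, |b|) avoids overflow
    return 2.0 * math.atanh(math.tanh(a / 2.0) * math.tanh(b / 2.0))

def llr_update_t2(L, x, rule):
    # Size-2 kernel unit, T_2 = (1 0; 1 1): two input LLRs L[0], L[1] and
    # the partial sum x[0]; 'rule' selects the bit position being decoded
    if rule == 0:
        return boxplus(L[0], L[1])
    return (-1.0) ** x[0] * L[0] + L[1]

def ps_update_t2(u):
    # Partial-sum update: re-encode (u_0, u_1) through T_2
    return [u[0] ^ u[1], u[1]]

def llr_update_t3(L, x, rule):
    # Size-3 kernel unit, T_3 = (1 1 1; 1 0 1; 0 1 1)
    if rule == 0:
        return boxplus(boxplus(L[0], L[1]), L[2])
    if rule == 1:
        return (-1.0) ** x[0] * L[0] + boxplus(L[1], L[2])
    return (-1.0) ** x[0] * L[1] + (-1.0) ** (x[0] ^ x[1]) * L[2]

def ps_update_t3(u):
    # Partial-sum update: re-encode (u_0, u_1, u_2) through T_3
    return [u[0] ^ u[1], u[0] ^ u[2], u[0] ^ u[1] ^ u[2]]
```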

At step 22 of Fig. 2, the procedure is continued by determining first decoded bit values u_i (e.g., i ∈ {0, 1, 2}) based on output statistical values of a kernel unit 30 of an ultimate decoding stage 42.

At step 24, the first decoded bit values u_i are propagated through a subset of the multiple decoding stages (e.g., decoding stage 42, or decoding stage 42 and decoding stage 41) and first partial sums (PSs), determined from the propagated first decoded bit values u_i, are stored in first memory elements 43, as illustrated, for example, in Fig. 4c.

At step 26, the procedure involves determining second decoded bit values u_i (e.g., i ∈ {3, 4, 5}) based on the first stored partial sums and at least some of the propagated statistical values. At step 28, the second decoded bit values u_i are propagated through a subset of the multiple decoding stages and second partial sums, determined from the propagated second decoded bit values u_i, are stored in the memory, wherein the stored second partial sums consume memory space gained by releasing the first memory elements.

As shown in Fig. 4 to Fig. 6, which illustrate transformation matrices, Tanner graphs, and memory structures of different transformation matrices G_12, the LLRs and PSs may be stored in different memory structures. In particular, the LLRs may be stored as real numbers, while the PSs may be stored as bits. As shown in Fig. 4(c), Fig. 5(c), and Fig. 6(c), the memory structures depend on the order of the kernels defining the transformation matrix G_N = T_{n_1} ⊗ … ⊗ T_{n_s}.

For example, the LLRs may be stored in s + 1 real vectors of different sizes (where s is the number of kernels of the transformation matrix G_N). The length of the first vector may always be 1 (as indicated by the leftmost square in the lower part of Fig. 4(c), Fig. 5(c), and Fig. 6(c)). The length of the i-th vector may be given by the product of the last i − 1 kernel sizes, i.e., the length of the i-th vector may be given by n_{s−i+2} · … · n_s.

The PSs may be stored in s binary matrices of different sizes (in the following, width and height of a matrix may be referred to by the number of columns and rows, respectively). The width of the i-th PS matrix may be given by the size of the (s − i + 1)-th kernel, i.e., the width of the i-th PS matrix may be given by n_{s−i+1}. The height of the i-th PS matrix may be given by the product of the last i − 1 kernel sizes, i.e., the height of the i-th PS matrix may be given by n_{s−i+2} · … · n_s. The size of the last matrix may not follow this rule, but its width may be diminished by one, i.e., the last matrix may have a width of n_1 − 1.
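A short sketch of how these vector lengths and matrix shapes can be computed, under the assumption that the kernel sizes are supplied in the order (n_1, …, n_s); the function name is an illustrative choice:

```python
def memory_layout(kernels):
    # kernels = (n_1, ..., n_s), the kernel sizes defining G_N
    s = len(kernels)
    # the i-th LLR vector (i = 1..s+1) has length n_{s-i+2} * ... * n_s
    llr_lengths = [1]
    for i in range(1, s + 1):
        llr_lengths.append(llr_lengths[-1] * kernels[s - i])
    # the i-th PS matrix (i = 1..s) has height n_{s-i+2} * ... * n_s and
    # width n_{s-i+1}; the last matrix has width n_1 - 1
    ps_shapes = [(llr_lengths[i - 1], kernels[s - i] if i < s else kernels[0] - 1)
                 for i in range(1, s + 1)]
    return llr_lengths, ps_shapes

# G_12 = T_2 ⊗ T_2 ⊗ T_3 (Fig. 4): LLR vector lengths [1, 3, 6, 12],
# PS matrices with (height, width) = [(1, 3), (3, 2), (6, 1)]
print(memory_layout((2, 2, 3)))
```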

The decoded bits may be stored in a binary vector U of length N or may be immediately output, once the decoded bits are no longer required in the remaining steps of the decoding procedure.

In the following, exemplary update rules for the LLR, PS and U vectors and matrices are provided. The description starts with the update rules for the LLR vectors, followed by the update rules for the binary vector U, and finally the update rules for the PS matrices. It is recalled that the update rules are to be applied for every decoded bit. Moreover, the update may be performed by the kernel units 30. Initially, the (s + 1)-th LLR vector (the rightmost in Fig. 4(c), Fig. 5(c), and Fig. 6(c)) may be filled with the N LLRs of the received symbol(s), while all other entries of the LLR vectors, the PS matrices and the U vector may be set to zero.

LLR UPDATE

In order to perform the LLR update phase for bit u_i, the index i may be expressed through a mixed radix numeral system which is based on the kernel order. A mixed radix numeral system is a non-standard positional numeral system in which the numerical base depends on the digit position (as in the case of time, which is measured in hours, minutes, and seconds).

The radix of the numeral system is given by the sizes of the kernels used to build the transformation matrix G_N. As an example, for the transformation matrices G_12 shown in Fig. 4(a), Fig. 5(a), and Fig. 6(a), the indices i = 0, …, 11 are expressed with radices (2, 2, 3), (2, 3, 2), and (3, 2, 2), respectively.
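A minimal sketch of this conversion, assuming (consistently with the PS update described further below) that digit b_j has radix n_j and that b_s is the least-significant digit; the function name is an illustrative choice:

```python
def mixed_radix(i, kernels):
    # Decompose a bit index i into digits (b_1, ..., b_s), where digit b_j
    # has radix n_j and b_s varies fastest
    digits = []
    for n in reversed(kernels):
        digits.append(i % n)
        i //= n
    return list(reversed(digits))

# G_12 = T_2 ⊗ T_2 ⊗ T_3 (radices 2, 2, 3): i = 7 -> [1, 0, 1], since 7 = 1*6 + 0*3 + 1
print(mixed_radix(7, (2, 2, 3)))
```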

The LLR update in Fig. 4(b)(c), Fig. 5(b)(c), and Fig. 6(b)(c) proceeds from right to left. The vectors may be updated starting from the s-th vector using the subsequent vector. In general, the j-th vector may be updated using the (j + 1)-th vector and the kernel T_{n_{s−j+1}}. Entries of the j-th vector may be updated one by one using n_{s−j+1} entries from the (j + 1)-th LLR vector and from the j-th PS matrix. The LLR update rule may be selected in accordance with the mixed radix representation of i. If bit u_i has to be decoded in accordance with the mixed radix representation i = b_1 b_2 … b_s, the LLR update rule b_j may be used to update the j-th vector. In order to update the k-th entry of the j-th vector, the n_{s−j+1} LLRs to be used by the update formula may be selected as the entries (k − 1) · n_{s−j+1} + 1, …, k · n_{s−j+1} of the (j + 1)-th vector. Similarly, the n_{s−j+1} PSs to be used by the update formula may be selected as the n_{s−j+1} entries of the k-th row of the j-th PS matrix. Since the s-th PS matrix has only n_1 − 1 columns, the missing bits may be considered as zeros.

Using said LLR update algorithm, s LLR vectors have to be updated for every decoded bit. However, as the mixed radix expansions of two consecutive numbers differ only on the right of the position of the rightmost nonzero element of the second number, it is not necessary to update the vectors represented by the positions at the left of the rightmost nonzero entry. Of course, this does not apply for the case i = 0, in which all the vectors have to be updated.

Ui UPDATE

If the decoded bit u_i is not frozen, its value may be obtained through a hard decision on the only element of the unitary first LLR vector. Otherwise, if u_i is frozen, it may be set to zero.
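A minimal sketch of this decision step, assuming the convention that a positive LLR favors the bit value 0 (the function name is illustrative):

```python
def decide_bit(llr, frozen):
    # Hard decision on the single entry of the first LLR vector;
    # frozen bits are known to the decoder and set to zero
    if frozen:
        return 0
    return 0 if llr >= 0 else 1
```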

PS UPDATE

PS matrices may be updated per columns, in increasing order from left to right (cf. Fig. 4(b)(c), Fig. 5(b)(c), and Fig. 6(b)(c)). When the last column of a PS matrix is filled, a column of the next PS matrix may be updated. Updates may always start from the first PS matrix, which is a single-row matrix of width n_s. The value of the decoded bit u_i may be copied into column b_s.

When b_s = n_s − 1, the matrix is filled, and the column b_{s−1} of the second matrix may be updated. If b_j = n_j − 1, a matrix is filled and the next matrix may be updated, otherwise the process ends. When the j-th matrix is filled, i.e. if b_{s−j+1} = n_{s−j+1} − 1, the column b_{s−j} of the (j + 1)-th matrix may be updated. In this case, each row of the j-th matrix may be used to update a part of the column b_{s−j} of the (j + 1)-th matrix. In particular, the n_{s−j+1} PS update rules of the kernel T_{n_{s−j+1}} with the k-th row of the j-th matrix as input may be used to update the rows (k − 1) · n_{s−j+1} + 1, …, k · n_{s−j+1} of the column b_{s−j} of the (j + 1)-th matrix. As an exception, for the last bit, the PS update step may not be executed, since no new LLR update step is required.
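The column-filling cascade described above may be sketched as follows. Here `ps` is a list of the s PS matrices (each stored as a list of rows), `digits` is the mixed-radix representation of the bit index, and `ps_update(row, n)` re-encodes a row of n partial sums through the corresponding kernel T_n (as in the T_2/T_3 update rules above); all names and the exact data layout are illustrative assumptions rather than the claimed implementation.

```python
def store_partial_sums(u_i, i, N, digits, kernels, ps, ps_update):
    # digits = mixed-radix representation (b_1, ..., b_s) of the bit index i
    s = len(kernels)
    if i == N - 1:
        return  # for the last bit no further LLR update is needed
    # copy the decoded bit into column b_s of the first (single-row) matrix
    ps[0][0][digits[s - 1]] = u_i
    j = 0  # 0-based index of the matrix that may just have been filled
    while j < s - 1 and digits[s - 1 - j] == kernels[s - 1 - j] - 1:
        n = kernels[s - 1 - j]   # kernel size used for this propagation
        col = digits[s - 2 - j]  # target column of the next matrix
        for k, row in enumerate(ps[j]):
            out = ps_update(row, n)  # n partial sums re-encoded through T_n
            for r in range(n):
                ps[j + 1][k * n + r][col] = out[r]
        j += 1
```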

The proposed sequential SC decoding procedure permits reducing the amount of memory required for decoding multi-kernel polar codes. While a recursive implementation of the SC decoder for a multi-kernel polar code with transformation matrix G_N = T_{n_1} ⊗ … ⊗ T_{n_s} with N = n_1 · … · n_s requires storing N · (s + 1) LLRs and N · s PSs, the proposed sequential SC decoding procedure only requires storing 1 + n_s + n_{s−1} · n_s + … + n_2 · … · n_s + N LLRs and n_s + n_{s−1} · n_s + … + n_3 · … · n_s + N PSs, i.e., O(N) values.
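As a worked check of this comparison for the G_12 = T_2 ⊗ T_2 ⊗ T_3 example of Fig. 4 (function name and output format are illustrative):

```python
def memory_counts(kernels):
    # kernels = (n_1, ..., n_s); returns (LLRs, PSs) for the recursive SC
    # decoder and for the sequential procedure described above
    s = len(kernels)
    N = 1
    for n in kernels:
        N *= n
    # lengths of the s+1 LLR vectors: 1, n_s, n_{s-1}*n_s, ..., N
    llr_lengths = [1]
    for i in range(1, s + 1):
        llr_lengths.append(llr_lengths[-1] * kernels[s - i])
    # PS matrices: height llr_lengths[i-1], width n_{s-i+1} (last one: n_1 - 1)
    ps_total = sum(llr_lengths[i - 1] * (kernels[s - i] if i < s else kernels[0] - 1)
                   for i in range(1, s + 1))
    return {"recursive": (N * (s + 1), N * s),
            "sequential": (sum(llr_lengths), ps_total)}

# G_12 = T_2 ⊗ T_2 ⊗ T_3: 48 LLRs / 36 PSs recursively vs 22 LLRs / 15 PSs sequentially
print(memory_counts((2, 2, 3)))
```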

Thus, the proposed sequential SC decoding procedure simplifies decoding and allows significantly reducing the memory requirement as compared to recursive decoding procedures.




 