

Title:
COMBINED BELIEF PROPAGATION (BP) AND ORDERED STATISTICS DECODING (OSD) FOR CONCATENATED CODES
Document Type and Number:
WIPO Patent Application WO/2020/151835
Kind Code:
A1
Abstract:
An apparatus is provided for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code. The apparatus comprises at least one processor that is configured to apply iterative belief propagation, BP, decoding to a sequence of soft bit values representative of a received signal. An i:th iteration round of said iterative BP decoding produces an i:th ordered sequence of soft bit values indicative of an i:th set of most reliable bits. An i:th set of most reliable independent positions, MRIPs, is found based on a generator matrix and said i:th set of most reliable bits. Bits in said i:th set of MRIPs are re-encoded, thus producing an i:th set of candidate vectors. Said i:th set of candidate vectors is compared to one or more stopping criteria. One of said candidate vectors can be accepted as a received bit sequence if at least one of said stopping criteria is met. A new iteration round of said iterative BP decoding is begun if none of said stopping criteria is met. The generator matrix may be an overall generator matrix that generates the concatenated code.

Inventors:
LINDOFF BENGT (SE)
ZHOU WEI (SE)
LENTMAIER MICHAEL (SE)
SEMENOV SERGEI (SE)
HU WENQUAN (SE)
Application Number:
PCT/EP2019/051879
Publication Date:
July 30, 2020
Filing Date:
January 25, 2019
Assignee:
HUAWEI TECH CO LTD (CN)
LINDOFF BENGT (SE)
International Classes:
H03M13/29; H03M13/11; H03M13/37; H03M13/45
Foreign References:
CN109981112A2019-07-05
US20150006992A12015-01-01
Other References:
YUEJUN WEI ET AL: "A CRC-Aided Hybrid Decoding Algorithm for Turbo Codes", IEEE WIRELESS COMMUNICATIONS LETTERS, vol. 2, no. 5, 4 June 2013 (2013-06-04), pages 471 - 474, XP055199699, ISSN: 2162-2337, DOI: 10.1109/WCL.2013.052813.130259
GOUNAI S ET AL: "Lowering Error Floor of Irregular LDPC Codes by CRC and OSD Algorithm", IEICE TRANSACTIONS ON COMMUNICATIONS, COMMUNICATIONS SOCIETY, TOKYO, JP, vol. E89-B, no. 1, 1 January 2006 (2006-01-01), pages 1 - 10, XP001502746, ISSN: 0916-8516, DOI: 10.1093/IETCOM/E89-B.1.1
NOKIA ALCATEL-LUCENT SHANGHAI BELL: "Near optimal performance for LDPC with OSD/List decoding", vol. RAN WG1, no. Reno, U.S.A.; 20161114 - 20161118, 13 November 2016 (2016-11-13), XP051176982, Retrieved from the Internet [retrieved on 20161113]
ROBERT J MCELIECE ET AL: "Turbo Decoding as an Instance of Pearl's "Belief Propagation" Algorithm", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 16, no. 2, 1 February 1998 (1998-02-01), XP011054759, ISSN: 0733-8716
"Factor graphs and the sum-product algorithm", IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 47, no. 2, 28 February 2001 (2001-02-28), pages 498 - 512, XP055270448, DOI: 10.1109/18.910572
Attorney, Agent or Firm:
KREUZ, Georg (DE)
Claims:
CLAIMS

1. An apparatus for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code, the apparatus comprising: at least one processor configured to:

apply iterative belief propagation, BP, decoding to a sequence of soft bit values representative of a received signal, an i:th iteration round of said iterative BP decoding producing an i:th ordered sequence of soft bit values indicative of an i:th set of most reliable bits;

find an i:th set of most reliable independent positions, MRIPs, based on a generator matrix and said i:th set of most reliable bits;

re-encode bits in said i:th set of MRIPs, thus producing an i:th set of candidate vectors;

compare said i:th set of candidate vectors to one or more stopping criteria; and either accept one of said candidate vectors as a received bit sequence if at least one of said stopping criteria is met, or begin a new iteration of said iterative BP decoding if none of said stopping criteria is met.

2. The apparatus of claim 1, wherein said generator matrix is an overall generator matrix that generates the concatenated code.

3. The apparatus of claim 1, wherein:

said generator matrix is a generator matrix for generating said inner code, and the at least one processor is further configured to: perform a cyclic redundancy check for said i:th set of candidate vectors before said comparing of said i:th set of candidate vectors to said one or more stopping criteria, and compare only those of said i:th set of candidate vectors to said one or more stopping criteria that pass said cyclic redundancy check.

4. The apparatus of any one of claims 1 to 3, wherein the at least one processor is further configured to use at least one of the following as said one or more stopping criteria: a maximum number of iteration rounds per sequence of said iterative BP decoding has been reached; one of said candidate vectors is a codeword of the code generated by said generator matrix; an optimum candidate vector of said i:th set of candidate vectors is the same as an optimum candidate vector of a previous iteration round.

5. The apparatus of any one of claims 1 to 4, wherein the at least one processor is configured to: perform said re-encoding of bits in said i:th set of MRIPs only every L:th iteration round, where L is a positive integer, and go back to said iterative BP decoding for the i+1:th iteration round otherwise.

6. The apparatus of any one of claims 1 to 4, wherein the at least one processor is configured to: perform said re-encoding of bits in said i:th set of MRIPs only at first S iterations per sequence, where S is a positive integer, and go back to said iterative BP decoding for the i+1:th iteration round otherwise.

7. The apparatus of any one of claims 1 to 6, wherein the inner code is a low density parity check, LDPC, code, and the outer code is a cyclic redundancy check, CRC, code.

8. A method for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code, the method comprising:

applying iterative belief propagation, BP, decoding to a sequence of soft bit values representative of a received signal, an i:th iteration round of said iterative BP decoding producing an i:th ordered sequence of soft bit values indicative of an i:th set of most reliable bits;

finding an i:th set of most reliable independent positions, MRIPs, based on a generator matrix and said i:th set of most reliable bits;

re-encoding bits in said i:th set of MRIPs, thus producing an i:th set of candidate vectors;

comparing said i:th set of candidate vectors to one or more stopping criteria; and either accepting one of said candidate vectors as a received bit sequence if at least one of said stopping criteria is met, or beginning a new iteration of said iterative BP decoding if none of said stopping criteria is met.

9. The method of claim 8, wherein said generator matrix is an overall generator matrix that generates the concatenated code.

10. The method of claim 8, wherein said generator matrix is a generator matrix for generating said inner code, and the method comprises:

performing a cyclic redundancy check for said i:th set of candidate vectors before said comparing of said i:th set of candidate vectors to said one or more stopping criteria, and

comparing only those of said i:th set of candidate vectors to said one or more stopping criteria that pass said cyclic redundancy check.

11. The method of any one of claims 8 to 10, wherein said one or more stopping criteria comprise at least one of the following: a maximum number of iteration rounds per sequence of said iterative BP decoding has been reached; one of said candidate vectors is a codeword of the code generated by said generator matrix; an optimum candidate vector of said i:th set of candidate vectors is the same as an optimum candidate vector of a previous iteration round.

12. The method of any one of claims 8 to 11, comprising: performing said re-encoding of bits in said i:th set of MRIPs only every L:th iteration round, where L is a positive integer, and

going back to said iterative BP decoding for the i+1:th iteration round otherwise.

13. The method of any one of claims 8 to 11, comprising: performing said re-encoding of bits in said i:th set of MRIPs only at first S iterations per sequence, where S is a positive integer, and going back to said iterative BP decoding for the i+1:th iteration round otherwise.

14. The method of any one of claims 8 to 13, wherein the inner code is a low density parity check, LDPC, code, and the outer code is a cyclic redundancy check, CRC, code.

15. A computer program product comprising processor executable instructions which, when executed by at least one processor, cause the at least one processor to perform the method of any one of claims 8 to 14.

Description:
COMBINED BELIEF PROPAGATION (BP) AND ORDERED STATISTICS DECODING

(OSD) FOR CONCATENATED CODES

TECHNICAL FIELD

The present disclosure relates generally to data encoding and decoding techniques. In particular the disclosure relates to an apparatus and method for decoding a received transmission that was encoded using a concatenated code, as well as to a corresponding computer program product.

BACKGROUND

The concept of concatenated coding refers to the practice of encoding the information to be transmitted with at least two codes in succession. In two-code concatenated coding the information is first encoded with one code, called the outer code. The result of this first encoding is then additionally encoded with another code, called the inner code, before transmitting the eventual result through a transmission channel towards a receiving device.

The receiving device receives the transmitted encoded information from the transmission channel in a distorted and/or incomplete form due to random noise and interference that the channel introduces to transmitted signals. In principle, the task of the receiving device would simply be to use a first decoder to decode the inner code and then a second decoder to decode the outer code. However, since noise and interference in the transmission channel cause uncertainty about what the actual form of the received signal should be, the receiving device must also contain decision-making functionalities. These use the incomplete information that is available to decide what the most likely form of the original, transmitted information is that resulted in the received incomplete information. Redundancy that the encoding created in the transmitting device helps the receiving device in its task, but it is not easy to know what would be the most fruitful way of utilizing this redundancy. In other words, it is not obvious what decoding strategy will yield the best results, given the available processing resources of the receiving device, in decoding a transmission that was encoded in a particular way.

Thus, there is a need for a solution that provides a decoding approach that produces good decoding results and makes efficient use of the processing resources of a receiving device. Said approach should be particularly suited for decoding transmissions that were encoded with a concatenated code in the framework of 5G wireless communications.

SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the present disclosure, nor is it intended to be used to limit the scope of the present disclosure.

It is an object of the present disclosure to provide a method and an apparatus for decoding concatenated codes in an efficient manner that is applicable to various implementations in a versatile way.

The object above is achieved by the features of the independent claims in the appended claims. Further embodiments and examples are apparent from the dependent claims, the detailed description and the accompanying drawings.

According to a first aspect there is provided an apparatus for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code. The apparatus comprises at least one processor. The processor is configured to apply iterative belief propagation, BP, decoding to a sequence of soft bit values representative of a received signal, an i:th iteration round of said iterative BP decoding producing an i:th ordered sequence of soft bit values indicative of an i:th set of most reliable bits. The processor is configured to find an i:th set of most reliable independent positions, MRIPs, based on a generator matrix and said i:th set of most reliable bits. The processor is configured to re-encode bits in said i:th set of MRIPs, thus producing an i:th set of candidate vectors. The processor is configured to compare said i:th set of candidate vectors to one or more stopping criteria, and either accept one of said candidate vectors as a received bit sequence if at least one of said stopping criteria is met, or begin a new iteration of said iterative BP decoding if none of said stopping criteria is met.

In one implementation form of the first aspect said generator matrix is an overall generator matrix that generates the concatenated code. This involves at least the advantage that all decoding and re-encoding operations of the decoding apparatus only need to involve a single generator matrix.

In another implementation form of the first aspect said generator matrix is a generator matrix for generating said inner code, and the at least one processor is further configured to perform a cyclic redundancy check for said i:th set of candidate vectors before said comparing of said i:th set of candidate vectors to said one or more stopping criteria, and compare only those of said i:th set of candidate vectors to said one or more stopping criteria that pass said cyclic redundancy check. This involves at least the advantage that the generator matrix involved in decoding and re-encoding operations may have relatively small dimensions, which may have advantageous consequences in the requirements of storage capacity and processing capability.

In another implementation form of the first aspect the at least one processor is further configured to use at least one of the following as said one or more stopping criteria: a maximum number of iteration rounds per sequence of said iterative BP decoding has been reached; one of said candidate vectors is a codeword of the code generated by said generator matrix; an optimum candidate vector of said i:th set of candidate vectors is the same as an optimum candidate vector of a previous iteration round. This involves at least the advantage that the iteration can be made to stop in ways that take into account the various cases that can be encountered in decoding actual signals.

In another implementation form of the first aspect the at least one processor is configured to perform said re-encoding of bits in said i:th set of MRIPs only every L:th iteration round, where L is a positive integer, and go back to said iterative BP decoding for the i+1:th iteration round otherwise. This involves at least the advantage that savings can be achieved in required processing power and/or processing time, because the number of re-encoding operations becomes smaller.

In another implementation form of the first aspect the at least one processor is configured to perform said re-encoding of bits in said i:th set of MRIPs only at first S iterations per sequence, where S is a positive integer, and go back to said iterative BP decoding for the i+1:th iteration round otherwise. This involves at least the advantage that savings can be achieved in required processing power and/or processing time, because the number of re-encoding operations becomes smaller.

In another implementation form of the first aspect the inner code is a low density parity check, LDPC, code, and the outer code is a cyclic redundancy check, CRC, code. This involves at least the advantage that simulations have proven very advantageous performance of the decoding apparatus when these codes are involved.

According to a second aspect there is provided a method for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code. The method comprises applying iterative belief propagation, BP, decoding to a sequence of soft bit values representative of a received signal, an i:th iteration round of said iterative BP decoding producing an i:th ordered sequence of soft bit values indicative of an i:th set of most reliable bits. The method comprises finding an i:th set of most reliable independent positions, MRIPs, based on a generator matrix and said i:th set of most reliable bits. The method comprises re-encoding bits in said i:th set of MRIPs, thus producing an i:th set of candidate vectors. The method comprises comparing said i:th set of candidate vectors to one or more stopping criteria; and either accepting one of said candidate vectors as a received bit sequence if at least one of said stopping criteria is met, or beginning a new iteration of said iterative BP decoding if none of said stopping criteria is met.

In one implementation form of the second aspect said generator matrix is an overall generator matrix that generates the concatenated code. This involves at least the advantage that all decoding and re-encoding operations of the decoding method only need to involve a single generator matrix.

In another implementation form of the second aspect said generator matrix is a generator matrix for generating said inner code, and the method comprises performing a cyclic redundancy check for said i:th set of candidate vectors before said comparing of said i:th set of candidate vectors to said one or more stopping criteria, and comparing only those of said i:th set of candidate vectors to said one or more stopping criteria that pass said cyclic redundancy check. This involves at least the advantage that the generator matrix involved in decoding and re-encoding operations may have relatively small dimensions, which may have advantageous consequences in the requirements of storage capacity and processing capability required to execute the method.

In another implementation form of the second aspect said one or more stopping criteria comprise at least one of the following: a maximum number of iteration rounds per sequence of said iterative BP decoding has been reached; one of said candidate vectors is a codeword of the code generated by said generator matrix; an optimum candidate vector of said i:th set of candidate vectors is the same as an optimum candidate vector of a previous iteration round. This involves at least the advantage that the iteration can be made to stop in ways that take into account the various cases that can be encountered in decoding actual signals.

In another implementation form of the second aspect the method comprises performing said re-encoding of bits in said i:th set of MRIPs only every L:th iteration round, where L is a positive integer, and going back to said iterative BP decoding for the i+1:th iteration round otherwise. This involves at least the advantage that savings can be achieved in processing power required to execute the method in a given time, because the number of re-encoding operations becomes smaller.

In another implementation form of the second aspect the method comprises performing said re-encoding of bits in said i:th set of MRIPs only at first S iterations per sequence, where S is a positive integer, and going back to said iterative BP decoding for the i+1:th iteration round otherwise. This involves at least the advantage that savings can be achieved in processing power required to execute the method in a given time, because the number of re-encoding operations becomes smaller.

In another implementation form of the second aspect the inner code is a low density parity check, LDPC, code, and the outer code is a cyclic redundancy check, CRC, code. This involves at least the advantage that simulations have proven very advantageous performance of the decoding method when these codes are involved.

According to a third aspect there is provided a computer program product comprising processor executable instructions which, when executed by at least one processor, cause the at least one processor to perform a method of a kind described above. Other features and advantages of the present disclosure will be apparent to those skilled in the art upon reading the following detailed description and reviewing the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

In the following, examples are described in more detail with reference to the attached figures and drawings, in which:

Fig. 1 shows concepts of encoding, transmitting, receiving, and decoding;

Fig. 2 shows an example of a parity check matrix;

Fig. 3 shows an example of a Tanner graph corresponding to the parity check matrix of fig. 2;

Fig. 4 shows an example of applying ordered statistical decoding to a concatenated code;

Fig. 5 shows an example of applying iterative, reliability based ordered statistical decoding to a concatenated code;

Fig. 6 shows another example of applying iterative, reliability based ordered statistical decoding to a concatenated code;

Fig. 7 shows another example of applying iterative, reliability based ordered statistical decoding to a concatenated code;

Fig. 8 shows a performance comparison of three coding schemes; and

Fig. 9 shows an example of an apparatus.

DETAILED DESCRIPTION

Various embodiments of the present disclosure are further described in more detail with reference to the accompanying drawings. However, the present disclosure can be embodied in many other forms and should not be construed as limited to any certain structure or function disclosed in the following description. In contrast, these embodiments are provided to make the description of the present disclosure detailed and complete.

According to the present disclosure, it will be apparent to those skilled in the art that the scope of the present disclosure covers any embodiment which is disclosed herein, irrespective of whether this embodiment is implemented independently or in concert with any other embodiment of the present disclosure. For example, the apparatus and method disclosed herein can be implemented by using any number of the embodiments provided herein. Furthermore, it should be understood that any embodiment of the present disclosure can be implemented using one or more of the elements or steps presented in the appended claims.

As used herein, the term “concatenated code” refers to an error-correcting code that is derived by concatenating or combining two or more simpler codes in order to achieve good performance with reasonable complexity. More specifically, the concatenated code consists of an inner code and an outer code. As an example, the inner code may be a low density parity check (LDPC) code, while the outer code may be a cyclic redundancy check (CRC) code.

Fig. 1 illustrates some concepts of concatenated coding when information is transmitted through a channel and received. The original information to be transmitted consists of an information sequence u. A first encoder 101 in the transmitting device applies an outer code C_out and a second encoder 102 applies an inner code C_in, so that the sequence to be transmitted through the channel 103 is v. The channel 103 may introduce noise and/or interference, for which reason the sequence r that the receiving device receives is not quite the same as the transmitted sequence v. If it is close enough, the receiving device may use a first decoder 104 to decode the inner code C_in and a second decoder 105 to decode the outer code C_out, so that the result is an estimated sequence û that is as close as possible to the original information sequence u. Since the receiver knows exactly the form of the inner and outer codes C_in and C_out, the estimated sequence û is the better, the closer the received sequence r is to the transmitted sequence v. By using the known properties of the codes, the receiving device can process a received sequence r so that an optimal estimate v_opt thereof is obtained, from which the decoding then gives the best possible estimate û.

We may consider a binary information sequence u = [u_1, u_2, ..., u_K] that is encoded into a sequence to be transmitted v = [v_1, v_2, ..., v_N] using a code C given by a generator matrix G, the dimensions of which are K by N. The encoding may be expressed as the matrix equation v = uG, and the sequence v is referred to as the codeword of the sequence u. There are 2^K possible binary information sequences of length K, so correspondingly there are 2^K valid codewords specified by the generator matrix G.
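
Purely as an illustration of the matrix equation v = uG (this sketch is not part of the disclosed embodiments; the small generator matrix and the information sequence below are made-up example values), encoding over GF(2) can be written as follows:

```python
# Illustrative sketch only: encoding v = uG over GF(2) with a made-up (3, 6) code.
import numpy as np

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)  # K x N generator matrix (K=3, N=6)

u = np.array([1, 0, 1], dtype=np.uint8)  # information sequence of length K
v = (u @ G) % 2                          # codeword of length N
print(v)                                 # -> [1 0 1 1 0 1]
```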

Given the generator matrix G there exists a matrix H of dimensions N-K by N, where each row of H is orthogonal to the row space of G. The matrix H is referred to as the parity check matrix (PCM) of the code C. Every codeword v of the code C satisfies vH^T = 0, where H^T is the transpose of H. A low density parity check (LDPC) code has the characteristic that its parity check matrix H is sparse. An example of a parity check matrix is shown in fig. 2.

LDPC codes can also be represented by so-called Tanner graphs. A Tanner graph is a bipartite graph consisting of variable nodes (VNs) and check nodes (CNs). The VNs may also be referred to as digit nodes or bit nodes, and the CNs may also be referred to as parity nodes. An example of a Tanner graph that corresponds to the parity check matrix of fig. 2 is shown in fig. 3. The VNs are in the upper line in fig. 3, and the CNs are in the lower line.

By comparing figs. 2 and 3 it is easy to see that the columns of the parity check matrix H correspond to connections of the VNs, and the rows of the parity check matrix H correspond to connections of the CNs. For example, in the first column of the parity check matrix H of fig. 2 the first three entries are ones. Thus the first VN 301 is connected to the first three CNs 311, 312, and 313 in the Tanner graph of fig. 3. In general, if the entry at the crossing of the i:th row and j:th column of H is one, then in the corresponding Tanner graph the i:th CN and the j:th VN are connected. When a VN and a CN are connected in a Tanner graph, they may be referred to as neighbouring nodes. The number of VNs in the Tanner graph is the length N of the code, and the number of CNs is the number of parity bits in the code, marked with M. If the parity check matrix H has full rank, the rate of the corresponding code can be defined as (N-M)/N.
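
For illustration, the following sketch (using a small made-up parity check matrix, not the matrix of fig. 2) shows how the Tanner-graph neighbourhoods follow directly from the rows and columns of H:

```python
# Illustrative sketch only: Tanner-graph neighbourhoods derived from a made-up H.
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)  # M x N parity check matrix

M, N = H.shape
N_of_m = {m: np.flatnonzero(H[m, :]).tolist() for m in range(M)}  # VNs connected to CN m
M_of_n = {n: np.flatnonzero(H[:, n]).tolist() for n in range(N)}  # CNs connected to VN n

print(N_of_m[0])  # -> [0, 1, 3]: VNs neighbouring the first CN
print(M_of_n[0])  # -> [0, 2]:    CNs neighbouring the first VN
```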

The elements of a codeword v are binary values. For a received sequence r it can be said that for the n:th received value r_n there is a first conditional probability P(v_n = 1 | r) that the original value of the corresponding transmitted bit v_n was 1, and a second conditional probability P(v_n = 0 | r) that the original value was 0. The n:th element r_n of the received sequence can be represented as the so-called log-likelihood ratio

L_n = ln( P(v_n = 0 | r) / P(v_n = 1 | r) ),

where ln means the natural logarithm. The log-likelihood ratio can also be called the L-value, and it is an example of a so-called soft value used in reception. A positive L-value means that the corresponding codeword bit v_n is more likely to be zero than one, while a negative L-value means that the corresponding codeword bit v_n is more likely to be one than zero. An L-value which is close to zero (on either the positive or negative side) means that there is large uncertainty concerning the true value of the corresponding codeword bit v_n, while a large absolute value of the L-value means that the true value of the corresponding codeword bit v_n is known with relatively high certainty.
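
As a hedged illustration of the L-value (assuming, beyond what is stated above, BPSK modulation over an AWGN channel with bit 0 mapped to +1 and bit 1 mapped to -1), the log-likelihood ratio of a received sample can be computed as follows:

```python
# Illustrative sketch only: L-value for a BPSK/AWGN channel. The channel model is
# an assumption made here for concreteness; the text above does not fix one.
def llr(y, noise_var):
    # L_n = ln( P(v_n = 0 | y) / P(v_n = 1 | y) ) = 2*y / noise_var for this model
    return 2.0 * y / noise_var

print(llr(+0.9, 0.5))   # clearly positive  -> bit very likely 0
print(llr(-0.05, 0.5))  # close to zero     -> true bit value is uncertain
```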

The principle of ordered statistics decoding (OSD) works as follows. The vector r of soft output values (L-values) r_n, n ∈ {1, 2, ..., N}, that represents a received sequence is sorted in decreasing order of reliability, so that the resulting vector is r'. This sorting defines a permutation π1 so that r' = π1(r) and |r'_1| ≥ |r'_2| ≥ ... ≥ |r'_N|. A matrix G' can be obtained by taking the same permutation of the columns of the original matrix G; i.e. G' = π1(G).

A matrix G'' is then formed as follows. Select those K independent columns of G' for which there are the largest L-values in r', and put them in the order determined by the decreasing order of magnitude of said L-values, to form the first K columns of G''. The remaining N-K columns of G' become the next N-K columns of G'' in the order defined by the decreasing order of magnitude of the corresponding L-values in r'. A permutation π2 is defined so that G'' = π2(G') = π2(π1(G)). Gaussian elimination is then performed on G'' to obtain the systematic form G_sys, in which the first K columns constitute an identity matrix of dimensions K by K.

The newly found permutation π2 can be performed on the re-ordered sequence r' to obtain a sequence r'' so that r'' = π2(r') = π2(π1(r)). The elements of this new sequence are still soft values (L-values), but a corresponding binary sequence u'' (of length K, where K < N) can be obtained by taking bitwise hard decisions on the first K elements of the sequence r''. The binary sequence u'' corresponds to the most reliable independent positions (MRIPs) of the sequence r''. Due to the way it was derived, the binary sequence u'' depends on both the original received sequence r and the original generator matrix G.

The next part of the OSD algorithm is the use of test patterns that have a length of K bits and increasing Hamming weight up to a limit t, which is called the OSD order. There is only one (trivial) test pattern of weight 0, namely [0, 0, 0, 0, ..., 0], and K test patterns of weight 1, namely [1, 0, 0, 0, ..., 0], [0, 1, 0, 0, ..., 0], ..., [0, 0, 0, 0, ..., 1]. Each test pattern is summed to the binary sequence u'' in turn, and a list of candidate codewords is constructed by re-encoding each of these sums with G_sys. A search is performed to find the optimal candidate codeword v''_opt for which (1 - 2v''_opt) has the smallest Euclidean distance to r''. Inverse permutations are performed to find v_opt = π1^(-1)(π2^(-1)(v''_opt)), which is then the OSD decoder output.
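
The following self-contained Python sketch walks through an order-1 OSD pass along the steps above (sorting, MRIP column selection, Gaussian elimination, weight-0 and weight-1 test patterns, Euclidean-distance search, inverse permutation). The tiny (3, 6) code and the received L-values are made-up illustration data, not taken from this disclosure:

```python
# Illustrative sketch only: order-1 OSD for a made-up (3, 6) code over GF(2).
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2)."""
    A = A.copy() % 2
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i, c]), None)
        if piv is None:
            continue
        A[[r, piv]] = A[[piv, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]
        r += 1
    return r

def find_mrip_permutation(G1):
    """Pick the K leftmost independent columns of G1 (already sorted by
    reliability) and return a column permutation moving them to the front."""
    K, N = G1.shape
    chosen, basis = [], np.zeros((K, 0), dtype=np.uint8)
    for j in range(N):
        cand = np.concatenate([basis, G1[:, [j]]], axis=1)
        if gf2_rank(cand) > basis.shape[1]:
            basis, chosen = cand, chosen + [j]
            if len(chosen) == K:
                break
    rest = [j for j in range(N) if j not in chosen]
    return np.array(chosen + rest)

def gf2_systematic(G2):
    """Row-reduce G2 so that its first K columns become the K x K identity."""
    G2 = G2.copy() % 2
    K = G2.shape[0]
    for c in range(K):
        piv = next(i for i in range(c, K) if G2[i, c])
        G2[[c, piv]] = G2[[piv, c]]
        for i in range(K):
            if i != c and G2[i, c]:
                G2[i] ^= G2[c]
    return G2

def osd_order1(r, G):
    K, N = G.shape
    pi1 = np.argsort(-np.abs(r))            # sort positions by decreasing reliability
    r1, G1 = r[pi1], G[:, pi1]
    pi2 = find_mrip_permutation(G1)         # move the K MRIPs to the front
    r2, G2 = r1[pi2], G1[:, pi2]
    Gsys = gf2_systematic(G2)
    u2 = (r2[:K] < 0).astype(np.uint8)      # hard decisions on the K most reliable positions
    patterns = [np.zeros(K, dtype=np.uint8)] + list(np.eye(K, dtype=np.uint8))
    best, best_d = None, np.inf
    for e in patterns:                      # weight-0 and weight-1 test patterns
        v2 = ((u2 ^ e) @ Gsys) % 2          # re-encode the perturbed MRIP bits
        d = np.sum((1.0 - 2.0 * v2 - r2) ** 2)   # distance of (1 - 2*v'') to r''
        if d < best_d:
            best, best_d = v2, d
    v_opt = np.empty(N, dtype=np.uint8)
    v_opt[pi1[pi2]] = best                  # undo both permutations
    return v_opt

G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]], dtype=np.uint8)
r = np.array([2.1, -0.3, -1.7, 1.5, 0.2, -2.4])   # received L-values (made-up)
print(osd_order1(r, G))                           # -> [0 0 1 0 1 1]
```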

The sum-product (SP) algorithm, also known as the belief propagation (BP) decoding algorithm, works as follows. Given an LDPC code specified by a parity check matrix and the corresponding Tanner graph consisting of M CNs and N VNs, define N(m) as the set of VNs that have a connection to the m:th CN, and M(n) as the set of CNs that have a connection to the n:th VN. In the SP (or BP) algorithm there are two types of processing units: VN processing units and CN processing units. Each of these processing units takes in L-values from the neighbouring nodes, updates them, and passes them back to the same neighbouring nodes. Denote by L_mn and L_nm the L-values that are passed from CN m to VN n, and from VN n to CN m, respectively. The SP (or BP) algorithm computes these two quantities iteratively, and the L-value F_n for every VN n, n ∈ {1, 2, ..., N}, is updated in each iteration. After each iteration a vector v is obtained by making hard decisions on the values F_n, so that if F_n > 0 then v_n = 0, otherwise v_n = 1. If it is found that vH^T = 0, the algorithm can be stopped; otherwise another iteration is performed.
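
A minimal sum-product (BP) decoder along the lines described above can be sketched as follows; the small parity check matrix and the channel L-values are made-up examples, and a real LDPC decoder would of course operate on a much larger, sparse matrix:

```python
# Illustrative sketch only: sum-product (BP) decoding for a small made-up H.
import numpy as np

def bp_decode(H, r, max_iter=20):
    M, N = H.shape
    edges = [tuple(e) for e in np.argwhere(H == 1)]   # (m, n) pairs with H[m, n] = 1
    L_nm = {(m, n): float(r[n]) for m, n in edges}    # messages VN n -> CN m
    L_mn = {(m, n): 0.0 for m, n in edges}            # messages CN m -> VN n
    for _ in range(max_iter):
        # check node update: product of tanh(L/2) over the other neighbouring VNs
        for m, n in edges:
            others = [L_nm[(m, k)] for k in np.flatnonzero(H[m]) if k != n]
            prod = float(np.prod(np.tanh(np.array(others) / 2.0)))
            L_mn[(m, n)] = 2.0 * np.arctanh(np.clip(prod, -0.999999, 0.999999))
        # variable node update and posterior L-values F_n
        F = r.astype(float).copy()
        for m, n in edges:
            F[n] += L_mn[(m, n)]
        for m, n in edges:
            L_nm[(m, n)] = F[n] - L_mn[(m, n)]        # extrinsic message back to CN m
        # hard decisions (F_n > 0 -> bit 0) and parity check v H^T = 0
        v = (F <= 0).astype(np.uint8)
        if not np.any((H @ v) % 2):
            break
    return v, F

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
r = np.array([2.3, -1.1, 0.4, 1.8, -2.0, 0.9])        # channel L-values (made-up)
v_hat, F = bp_decode(H, r)
print(v_hat)
```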

The OSD order has a significant influence on the number of required re-encodings, and hence the required processing capacity and/or processing time. The number of re-encodings increases essentially exponentially with the OSD order. Another important factor is the dimension of the code. For an LDPC code of dimension larger than a hundred bits, only relatively low OSD orders can be practically implemented.

The concept of iterative reliability based OSD (IRB-OSD) combines principles of BP decoding with OSD. Again, we assume that a binary information sequence u is encoded with an LDPC code and transmitted through the channel. The receiving device produces a corresponding L-value vector r and feeds it into the BP decoder. The soft values, or L-values, marked above as F_n where n ∈ {1, 2, ..., N}, are updated in every iteration of the BP algorithm. We may denote with F_n^(i) the L-value for VN n after the i:th iteration. In IRB-OSD the values F_n^(i) are considered as the reliability measures, or most reliable bits, on the basis of which the MRIPs are found. Let B_K be the set of K MRIPs that were found using the F_n^(i) as the reliability measures. Let u(B_K) be the hard decision values of the F_n^(i) in these K positions. In resemblance with what was said above about OSD, u(B_K) depends on both the F_n^(i) and the generator matrix of the LDPC code, which can be called G_LDPC. Re-encoding of u(B_K) is done using G_sys,LDPC, which is the systematic generator matrix of the LDPC code based on the MRIPs, and summing with test patterns is used to generate a list of candidate codewords. The candidate codeword with the smallest Euclidean distance to r'' is selected as the optimal codeword v_opt^(i) found so far. Suitable stopping criteria can be applied to decide whether another iteration is needed or whether the algorithm can be halted and v_opt^(i) be declared the final result.
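
The IRB-OSD control flow can be sketched structurally as below. This is a simplified skeleton rather than the reference implementation of this disclosure: the per-round BP update and the MRIP re-encoding step are passed in as callables (the toy stand-ins at the bottom only show the calling convention and produce no meaningful decoding; real implementations would follow the BP and OSD sketches given earlier), and only the "optimum unchanged" stopping test is shown:

```python
# Structural sketch only: IRB-OSD outer loop with pluggable BP and re-encoding steps.
import numpy as np

def irb_osd(r, bp_round, reencode, max_rounds=50):
    v_opt, prev_opt = None, None
    for i in range(max_rounds):
        F = bp_round(r, i)                 # one BP iteration -> L-values F_n^(i)
        candidates = reencode(F)           # candidate codewords from MRIP hard decisions
        v_opt = min(candidates, key=lambda v: np.sum((1.0 - 2.0 * v - F) ** 2))
        if prev_opt is not None and np.array_equal(v_opt, prev_opt):
            break                          # stopping criterion: optimum unchanged
        prev_opt = v_opt
    return v_opt

r = np.array([1.0, -0.5, 0.8, -1.2])
bp_round = lambda r, i: r * (1.0 + 0.1 * i)                      # toy stand-in
reencode = lambda F: [np.array([0, 1, 0, 1], dtype=np.uint8)]    # toy stand-in
print(irb_osd(r, bp_round, reencode))      # -> [0 1 0 1]
```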

Cyclic redundancy checking (CRC) is an error-detecting mechanism that can be used to detect changes in any binary sequence. Augmenting a sequence of bits with a CRC sequence can be considered encoding using a CRC code, and checking a sequence against its appended CRC sequence can be considered decoding the CRC code. Fig. 4 illustrates an example in which a CRC code is used as the outer code in the first encoder 101, and an LDPC code is used as the inner code in the second encoder 102. The generator matrices of the CRC and LDPC codes are given as G_CRC and G_LDPC, respectively.

We may assume that the length of the CRC code is m, so that the CRC generator matrix G_CRC has dimensions K by K+m. The dimensions of the LDPC generator matrix G_LDPC are K+m by N, so the overall generator matrix may be expressed as G = G_CRC · G_LDPC, and it has dimensions K by N.
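
A small dimension check of the overall generator matrix G = G_CRC · G_LDPC can be sketched as follows; the randomly filled matrices are placeholders that only have the stated shapes and are not actual CRC or LDPC generator matrices:

```python
# Illustrative dimension sketch only: placeholder matrices with the stated shapes.
import numpy as np

K, m, N = 4, 2, 8
rng = np.random.default_rng(0)
G_crc = np.concatenate([np.eye(K, dtype=np.uint8),
                        rng.integers(0, 2, (K, m), dtype=np.uint8)], axis=1)               # K x (K+m)
G_ldpc = np.concatenate([np.eye(K + m, dtype=np.uint8),
                         rng.integers(0, 2, (K + m, N - K - m), dtype=np.uint8)], axis=1)  # (K+m) x N
G = (G_crc @ G_ldpc) % 2                   # overall generator matrix, K x N

u = rng.integers(0, 2, K, dtype=np.uint8)
# encoding with G equals encoding with G_crc followed by G_ldpc (all over GF(2))
assert np.array_equal((u @ G) % 2, (((u @ G_crc) % 2) @ G_ldpc) % 2)
print(G.shape)                             # -> (4, 8)
```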

The receiving device of fig. 4 is configured to perform OSD decoding using the overall generator matrix G. For a given received (soft-value) sequence r, the first decoder 401 finds the K MRIPs, which depend on both r and G. Hard decisions are made for the soft values in these K positions to give a binary vector u'', which is passed to the second decoder 402. It performs re-encoding using the matrix G_sys, which is the systematic generator matrix of the overall code based on the MRIPs, on the sums of the binary vector u'' with test patterns. According to the principles of OSD the second decoder 402 eventually outputs the most promising candidate vector v_opt, from which the best estimate û of the original bit sequence can be obtained. If the dimension of the LDPC code used as in fig. 4 is large, like a hundred bits for example, the processing capacity required to perform OSD of adequate order (to achieve optimum performance of the coding scheme) may become a bottleneck. Fig. 5 illustrates a decoding arrangement for concatenated codes with reasonable performance at a lower level of processing complexity.

In fig. 5 blocks 501 to 504 are parts of an apparatus for decoding a received transmission that was encoded with a concatenated code. In particular, the concatenated code includes an inner code used in the second encoder 102 of the transmitting device and an outer code used in the first encoder 101 of the transmitting device. As an example, the inner code may be a low density parity check, LDPC, code, and the outer code may be a cyclic redundancy check, CRC, code.

We may assume that the apparatus, parts of which are shown in fig. 5, comprises at least one processor that is configured to perform the tasks explained in the following. A processor, or a plurality of processors, may be configured to perform such tasks by providing it with one or more sets of machine-executable instructions that are stored in a memory and/or conveyed to the availability of the processor or plurality of processors through a communications connection.

In particular, said at least one processor is configured to apply iterative belief propagation decoding, also known as BP decoding, to a sequence r of soft bit values representative of a received signal. An i:th iteration round of said iterative BP decoding produces an i:th ordered sequence of soft bit values F_n^(i) indicative of so-called most reliable bits, which may be called the i:th set of most reliable bits. In other words, the soft values (L-values) F_n^(i), where n ∈ {1, 2, ..., N}, of VN n at the i:th iteration are updated in every iteration of the BP decoding. This functionality is represented by block 501 in fig. 5.

Block 502 illustrates how the at least one processor is configured to find an i:th set of most reliable independent positions, MRIPs, based on a generator matrix and said i:th set of most reliable bits. There are K MRIPs. They can be marked as B_K, and in the embodiment of fig. 5 they are found based on the soft values (L-values) F_n^(i) and the overall generator matrix G = G_CRC · G_LDPC that generates the concatenated code. The hard decision values for the F_n^(i) in the positions B_K constitute a vector u(B_K), for which there is the corresponding permuted vector u''(B_K).

Block 503 illustrates how the at least one processor is configured to re-encode bits in said i:th set of MRIPs, thus producing an i:th set of candidate vectors. The re-encoding is performed using the matrix G_sys on the sums of the permuted vector u''(B_K) with test patterns, and the candidate vectors can also be called candidate codewords. From these, the optimal candidate vector, or optimal candidate codeword, can be found for example by looking for the smallest Euclidean distance to r, as has been explained above.

Whether the most recently found optimal candidate vector or optimal candidate codeword is good enough is decided in the functionality represented by block 504 in fig. 5. For generality it can be said that the at least one processor is configured to compare not only the optimal candidate vector but said i:th set of candidate vectors to one or more stopping criteria. Comparing only the optimal candidate vector to said one or more stopping criteria is a specific case of this more general definition, and is covered thereby. Alternatively, finding the optimal candidate vector among the i:th set of candidate vectors may be considered to constitute a part of comparing said i:th set of candidate vectors to one or more stopping criteria. One stopping criterion could be to stop the algorithm, prevent any further attempts at decoding this particular sequence, and declare a decoding failure if the Euclidean distance to r of all candidate codewords is larger than a rejection limit, because this would show that the attempted decoding diverges and is not likely to produce any meaningful result with further iterations.

The at least one processor may be configured to either accept one of said candidate vectors if at least one of said stopping criteria is met, as illustrated by the lower output from block 504 in fig. 5, or to begin a new iteration of said iterative BP decoding if none of said stopping criteria is met, as illustrated by the upper output from block 504 in fig. 5. If the first alternative is taken, it may be said that the candidate vector in question is accepted as a received bit sequence, because it represents the best guess of the receiving device of what the transmitting device originally transmitted as the sequence v. There can be one or more stopping criteria. If there is more than one, the at least one processor may be configured to apply only one of them at a time, or to apply two or more stopping criteria jointly. Which of these alternatives is selected may depend on how the processor is programmed to operate. Examples of possible stopping criteria are discussed in the following, without implying that they would be in any particular order, such as an order of preference.

As one example, the at least one processor may be configured to use a stopping criterion according to which further iterations are prevented if a maximum number of iteration rounds per sequence of said iterative BP decoding has been reached. Such a stopping criterion is effective in preventing the at least one processor from using excessive time in the decoding of a particular received sequence, for example if the receiving device is taking part in a real time or near-real-time communications connection that only allows certain maximum latency in processing.

As another example, the at least one processor may be configured to use a stopping criterion according to which further iterations are prevented if one of said candidate vectors is a codeword of the code generated by said generator matrix. Such a stopping criterion is reasonable, because the fact that a fully valid codeword has been found indicates a reasonable probability that this is indeed the codeword originally transmitted by the transmitting device, so no further iterations are needed.

As another example, the at least one processor may be configured to use a stopping criterion according to which further iterations are prevented if an optimum candidate vector of said i:th set of candidate vectors is the same as an optimum candidate vector of a previous iteration round. Such a stopping criterion is reasonable, because, given the nature of the decoding algorithm, it is reasonable to assume that further iteration rounds would be unlikely to produce any more variation in the optimum candidate vector.
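
The three example stopping criteria above can be combined into a single test, sketched below; the function name, the argument list, and the default round budget are illustrative choices, and codeword membership is tested here via the parity check matrix (vH^T = 0), which is equivalent to being a codeword of the code generated by the generator matrix:

```python
# Illustrative sketch only: the three example stopping criteria combined into one test.
import numpy as np

def should_stop(i, candidates, v_opt, prev_v_opt, H, max_rounds=50):
    if i + 1 >= max_rounds:                       # criterion 1: iteration budget reached
        return True
    for v in candidates:                          # criterion 2: a candidate is a valid codeword
        if not np.any((H @ v) % 2):               # checked via v H^T = 0
            return True
    if prev_v_opt is not None and np.array_equal(v_opt, prev_v_opt):
        return True                               # criterion 3: optimum candidate unchanged
    return False

H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
cands = [np.array([1, 0, 1, 1], dtype=np.uint8)]  # satisfies both parity checks
print(should_stop(0, cands, cands[0], None, H))   # -> True
```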

Fig. 6 illustrates an embodiment in which the re-encoding step is not performed for every iteration of the BP decoding, but only after every L:th iteration, where L is a positive integer. In other words, the at least one processor is configured to perform said re-encoding of bits in said i:th set of MRIPs only every L:th iteration round. Otherwise the at least one processor is configured to go back to said iterative BP decoding for the i+1:th iteration round, as illustrated by the arrow from block 601 back to block 501. This embodiment is based on the insight that the reliability represented by the L-values that the BP decoder iterates upon can be improved on every iteration round, without having to go through the processing-intensive re-encoding in between.

In yet another embodiment the at least one processor is configured to perform said re-encoding of bits in said i:th set of MRIPs only at the first S iterations per sequence, where S is a positive integer, and go back to said iterative BP decoding for the i+1:th iteration round otherwise. This embodiment can be combined with the embodiment described above, for example so that for the first S iteration rounds the re-encoding is done on every round, but after that only at intervals of L rounds.
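
The combined schedule described in this and the preceding embodiment (re-encoding on each of the first S rounds, and thereafter only every L:th round) can be expressed as a simple predicate; the values of S and L below are arbitrary examples:

```python
# Illustrative sketch only: re-encode on each of the first S rounds, and after
# that only on every L:th round; S = 4 and L = 5 are arbitrary example values.
def reencode_this_round(i, S=4, L=5):
    return i <= S or i % L == 0                   # round index i counted from 1

print([i for i in range(1, 21) if reencode_this_round(i)])
# -> [1, 2, 3, 4, 5, 10, 15, 20]
```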

Fig. 7 illustrates an embodiment in which the generator matrix applied for finding the MRIPs in block 701 is a generator matrix for generating the inner code. Given that the inner code was assumed to be an LDPC code, the generator matrix applied in block 701 is the matrix G_LDPC. Consequently the matrix applied in re-encoding, represented by block 702, is the matrix G_sys,LDPC. The at least one processor is further configured to perform a cyclic redundancy check for the i:th set of candidate vectors before said comparing of said i:th set of candidate vectors to said one or more stopping criteria. This CRC checking is represented by block 703 in fig. 7. The at least one processor is configured to compare only those of said i:th set of candidate vectors to said one or more stopping criteria that pass said cyclic redundancy check. This embodiment involves the advantage that the matrices involved in blocks 701 and 702 have smaller dimensions than if the overall generator matrix G = G_CRC · G_LDPC of the concatenated code were used, which may simplify processing.

Fig. 8 illustrates a comparison of performance between three different decoding schemes. The densely dashed, leftmost graph 801 represents the performance of decoding an LDPC code concatenated with a CRC code, using an apparatus like that in fig. 5 for said decoding. The solid graph 802 represents the performance of decoding an LDPC code concatenated with a CRC code when an OSD decoding scheme of order 3 was used. The loosely dashed, rightmost graph 803 represents decoding an LDPC code with IRB-OSD of order 1.

A single (80, 40) LDPC code is used in the case of the rightmost graph 803, i.e. decoding with an IRB-OSD algorithm of order 1. In the concatenated coding scheme, a bit sequence of length 40 is first encoded by a CRC code of length 5 with the generator polynomial 1 + x^2 + x^5, and then encoded by an (80, 45) LDPC code that is proposed for use in 5G wireless communications. The overall code rate for the concatenated code is thus 1/2, which is equal to that of the single LDPC code. For the IRB-OSD in the concatenated coding scheme, a maximum of 50 iterations is simulated here without a stopping criterion, in order to obtain the full potential of the scheme. We note that compared to the single LDPC code under IRB-OSD with 50 iterations (graph 803), the IRB-OSD in the concatenated scheme (graph 801) can achieve a coding gain of 0.8 dB around BLER = 10^-3. With respect to the performance of concatenated decoding under OSD of order 3 (graph 802) it can achieve a coding gain of 0.2 dB. Note that the number of re-encodings for OSD of order 3 is 10701, which is much more than 64, the number of re-encodings for IRB-OSD of order 1.
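
As an aside, appending a 5-bit CRC derived from the generator polynomial 1 + x^2 + x^5 mentioned above can be sketched as follows; the bit-ordering convention (leading bit as the highest-degree coefficient) and the 4-bit toy message are illustrative choices of this sketch, not taken from the simulated scheme:

```python
# Illustrative sketch only: appending a 5-bit CRC for generator polynomial
# 1 + x^2 + x^5; the leading bit is treated as the highest-degree coefficient.
def crc_append(bits, poly=(1, 0, 0, 1, 0, 1)):    # x^5 + x^2 + 1, highest degree first
    m = len(poly) - 1                             # CRC length, here 5
    reg = list(bits) + [0] * m                    # message followed by m zero bits
    for i in range(len(bits)):                    # polynomial long division over GF(2)
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return list(bits) + reg[-m:]                  # message plus CRC remainder

print(crc_append([1, 0, 1, 1]))                   # -> [1, 0, 1, 1, 0, 0, 0, 1, 0]
```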

Fig. 9 illustrates an example of an apparatus 901 for decoding a received transmission that was encoded with a concatenated code including an inner code and an outer code. The apparatus 901 comprises a processor 902 that is configured to perform operations of the kind that have been described earlier in this text. An example of configuring the processor 902 for such operation is to provide a storage 903 that is available to the processor 902, for example in the form of one or more memory circuits and/or one or more memory mediums, on which there are stored one or more sets of one or more machine-executable instructions 904. The provision of such instructions may be called programming the processor 902. The storage 903 and the instructions 904 are each shown in fig. 9 as a single block, but this is just a graphical notation and does not exclude providing two or more storages 903, among which the instructions 904 can be distributed in various ways. The apparatus may comprise a communications interface 905 for exchanging signals, data, and information with other devices. The apparatus 901 may use the communications interface 905 to receive the signals for decoding and/or to output the decoding results, for example. The apparatus 901 may also use the storage 903 for storing at least a part of the signals for decoding and/or the decoding results.

Those skilled in the art should understand that each block or step of the methods disclosed herein, or any combinations of the blocks or steps, can be implemented by various means, such as hardware, firmware, and/or software. As an example, one or more of the blocks or steps described above can be embodied by computer executable instructions, data structures, program modules, and other suitable data representations. Furthermore, the computer executable instructions which embody the blocks or steps described above can be stored on a corresponding data carrier and executed by at least one processor like the processor 902 of the apparatus 901. This data carrier can be implemented as any computer-readable storage medium configured to be readable by said at least one processor to execute the computer executable instructions. Such computer-readable storage media can include both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, the computer-readable media comprise media implemented in any method or technology suitable for storing information. In more detail, the practical examples of the computer-readable media include, but are not limited to information-delivery media, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD), holographic media or other optical disc storage, magnetic tape, magnetic cassettes, magnetic disk storage, and other magnetic storage devices.

Although the exemplary embodiments of the present disclosure are described herein, it should be noted that various changes and modifications could be made in the embodiments of the present disclosure, without departing from the scope of legal protection which is defined by the appended claims. In the appended claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.