

Title:
DECODING OF LOW-DENSITY PARITY-CHECK CONVOLUTIONAL TURBO CODES
Document Type and Number:
WIPO Patent Application WO/2019/042543
Kind Code:
A1
Abstract:
Provided is a procedure for decoding low-density parity-check convolutional turbo codes (LDPC-CTCs), the LDPC-CTC being constructed from parallel LDPC-CCs. The slot- or window-based decoding procedure (sliding-window decoding) involves determining, based on a received signal, statistical values, e.g. LLRs, corresponding to information and parity bits of an encoded information bit sequence. If the number of determined statistical values equals or exceeds a threshold, a new decoding iteration is started, wherein a decoding iteration includes updating the determined statistical values based on actual determined statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder. A statistical value stored in the memory is replaced with an updated statistical value, wherein the updated statistical value is kept in the memory for a number of succeeding decoding iterations. The information bits to which the updated stored statistical values correspond are decoded. If the latency/memory requirements are to be adapted to the circumstances/transmission requirements, the decoder can modify the memory depth.

Inventors:
MANFREDI, Marco (Riesstr. 25, Munich, 80992, DE)
CANNALIRE, Giacomo (Riesstr. 25, Munich, 80992, DE)
MAZZUCCO, Christian (Riesstr. 25, Munich, 80992, DE)
Application Number:
EP2017/071793
Publication Date:
March 07, 2019
Filing Date:
August 30, 2017
Assignee:
HUAWEI TECHNOLOGIES CO., LTD. (Huawei Administration Building Bantian Longgang District, Shenzhen, Guangdong 9, 518129, CN)
MANFREDI, Marco (Riesstr. 25, Munich, 80992, DE)
International Classes:
H03M13/11; H03M13/29
Foreign References:
US20130254633A12013-09-26
Other References:
CORAZZA G E ET AL: "Latency constrained protograph-based LDPC convolutional codes", PROC., 6TH IEEE INTERNATIONAL SYMPOSIUM ON TURBO CODES AND ITERATIVE INFORMATION PROCESSING, ISTC 2010, 6 September 2010 (2010-09-06), pages 6 - 10, XP031783822, ISBN: 978-1-4244-6744-0
PUSANE A E ET AL: "Implementation aspects of LDPC convolutional codes", IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, NJ. USA, vol. 56, no. 7, 1 July 2008 (2008-07-01), pages 1060 - 1069, XP011231533, ISSN: 0090-6778, DOI: 10.1109/TCOMM.2008.050519
ZHENGANG CHEN ET AL: "High Throughput Parallel Decoder Design for LDPC Convolutional Codes", PROC., 4TH IEEE INTERNATIONAL CONFERENCE ON CIRCUITS AND SYSTEMS FOR COMMUNICATIONS, ICCSC 2008, 26 May 2008 (2008-05-26), pages 35 - 39, XP031268649, ISBN: 978-1-4244-1707-0
ERAN PISEK ET AL: "Capacity-Approaching TQC-LDPC Convolutional Codes Enabling Power-Efficient Decoders", IEEE TRANSACTIONS ON COMMUNICATIONS, 10 October 2016 (2016-10-10), pages 1 - 13, XP055449345, Retrieved from the Internet [retrieved on 20180208], DOI: 10.1109/TCOMM.2016.2616142
PISEK ERAN ET AL: "Trellis-Based QC-LDPC Convolutional Codes Enabling Low Power Decoders", IEEE TRANSACTIONS ON COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, NJ. USA, vol. 63, no. 6, 1 June 2015 (2015-06-01), pages 1939 - 1951, XP011584363, ISSN: 0090-6778, [retrieved on 20150612], DOI: 10.1109/TCOMM.2015.2424434
A. J. FELSTROM; K. ZIGANGIROV: "Time-varying periodic convolutional codes with low density parity-check matrix", IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 45, no. 6, September 1999 (1999-09-01), pages 2181 - 2191, XP011027437
D. J. C. MACKAY: "Good error-correcting codes based on very sparse matrices", IEEE TRANSACTIONS ON INFORMATION THEORY, vol. 45, no. 2, March 1999 (1999-03-01), pages 399 - 431, XP002143042, DOI: doi:10.1109/18.748992
M. MANSOUR; N. SHANBHAG: "Memory-efficient turbo decoder architectures for LDPC codes", IEEE WORKSHOP ON SIGNAL PROCESSING SYSTEMS, October 2002 (2002-10-01), pages 159 - 164, XP010616594, DOI: 10.1109/SIPS.2002.1049702
T. KISHIGAMI, Y. MURAKAMI, AND I. YOSHII,: "LDPC Convolutional Codes for IEEE 802.16m FEC Scheme", IEEE 802.16
Attorney, Agent or Firm:
KREUZ, Georg (Huawei Technologies Duesseldorf GmbH, Riesstr. 8, Munich, 80992, DE)
Claims:
CLAIMS

1. A decoder for decoding a low-density parity-check convolutional turbo code, LDPC-CTC, the decoder being configured to: determine, based on a received signal, statistical values corresponding to information and parity bits of an encoded information bit sequence; start a new decoding iteration if a number of determined statistical values equals or exceeds a threshold, wherein a decoding iteration includes updating the determined statistical values based on actual determined statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder; replace a stored statistical value stored in the memory with an updated statistical value, wherein the updated statistical value is kept in the memory for a number of succeeding decoding iterations; and decode the information bits to which the updated stored statistical values correspond.

2. The decoder of claim 1, wherein the decoder is further configured to: remove/overwrite the updated stored statistical values from/in the memory of the decoder after the succeeding decoding iterations have been performed and the information bits corresponding to the updated stored statistical value have been decoded.

3. The decoder of claim 1 or 2, wherein the decoder is further configured to: adapt the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder.

4. The decoder of claim 3, wherein the decoder is further configured to: decrease the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder in response to a request to reduce a latency of the decoding operation.

5. The decoder of claim 3, wherein the decoder is further configured to: increase the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder in response to a request to reduce an error rate of the decoding operation.

6. The decoder of any one of claims 1 to 5, wherein the threshold equals the number of information and parity bits of one layer of the LDPC-CTC.

7. The decoder of any one of claims 1 to 6, wherein updating the statistical values based on the statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder includes parallelly updating different sets of statistical values assigned to different parity bits.

8. A method of decoding a low-density parity-check convolutional turbo code, LDPC-CTC, the method comprising: determining, based on a received signal, statistical values corresponding to information and parity bits of an encoded information bit sequence; starting a new decoding iteration if a number of determined statistical values equals or exceeds a threshold, wherein a decoding iteration includes updating the determined statistical values based on actual determined statistical values and statistical values from a previous decoding iteration; replacing a stored statistical value stored in the memory with an updated statistical value, wherein the updated statistical value is kept in the memory for a number of succeeding decoding iterations; and decoding the information bits to which the updated stored statistical values correspond.

9. The method of claim 8, further comprising: removing/overwriting the updated stored statistical values from/in the memory after the succeeding decoding iterations have been performed and the information bits corresponding to the updated stored statistical value have been decoded.

10. The method of claim 8 or 9, further comprising: adapting the number of succeeding decoding operations for which the updated statistical values are kept in the memory.

11. The method of claim 10, further comprising: decreasing the number of succeeding decoding operations for which the updated statistical values are kept in the memory in response to a request to reduce a latency of the decoding operation.

12. The method of claim 10, further comprising: increasing the number of succeeding decoding operations for which the updated statistical values are kept in the memory in response to a request to reduce an error rate of the decoding operation.

13. The method of any one of claims 8 to 12, wherein the threshold equals the number of information and parity bits of one layer of the LDPC-CTC.

14. The method of any one of claims 8 to 13, wherein updating the statistical values based on the statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder includes parallelly updating different sets of statistical values assigned to different parity bits.

15. A machine-readable medium comprising machine-readable instructions which, when carried out by the machine, cause the machine to perform the method of any one of claims 8 to 14.

Description:
DECODING OF LOW-DENSITY PARITY-CHECK CONVOLUTIONAL TURBO CODES

FIELD

The present disclosure relates to low-density parity-check convolutional turbo codes (LDPC-CTCs). In particular, the present disclosure relates to a procedure for decoding LDPC-CTCs.

BACKGROUND

LDPC-CCs suggested by A. J. Felstrom and K. Zigangirov, "Time-varying periodic convolutional codes with low density parity-check matrix", IEEE Transactions on Information Theory, vol. 45, no. 6, pp. 2181-2191, Sept. 1999, are based on a binary parity-check matrix whose column weight is not equal to '1'. For this reason, it may be unfeasible to employ the parallel layered decoding algorithm already used for LDPC block codes (LDPC-BCs), as there can be conflicts regarding the reading and writing of the received signal vector. Rather, decoding of the LDPC-CCs suggested by A. J. Felstrom and K. Zigangirov may be carried out based on:

• serial layered decoding of the same information sequence, combined with parallel layered decoding of different information sequences (this provides good performance and fast convergence of the belief propagation algorithm, but suffers from high decoding latency); or

• parallel layered decoding of the same information sequence (this provides low decoding latency but does not achieve good performance, owing to slow convergence of the belief propagation algorithm).

Accordingly, there is a demand for more advanced decoding procedures.

SUMMARY

In the following, an improved procedure for decoding LDPC-CTCs is described. To exploit the small structural latency of LDPC-CTC, the inner core of the decoder may be based on a simplified version of the standard belief propagation algorithm (BPA) (which is usually adopted in LDPC-BC), but the management of the IN-OUT memories may be modified.

E.g., if a time slot (TS) is the period in which a block of w·c bits is received (with code rate R = b/c, where w is the number of "parallel" LDPC-CCs on which the LDPC-CTC is based), a single decoding iteration may be performed, starting from the last new log-likelihood ratios (LLRs) acquired in the current TS and updating all LLRs up to the last LLRs memorized (e.g., the LLRs of several code periods, depending on the memory depth). As the number of rows to iterate is fixed, the processing time is fixed. Older LLRs in the memory are iterated more times and may be dropped from the memory according to the latency/performance required.

In this decoding procedure, which may be referred to as a "sliding-window" decoding procedure, the latency of the decoding process can be modified by the decoder in a way that is transparent to the encoder, and the size of the memory used during decoding can be adapted by changing the number of iterations for which LLRs are kept in the memory.
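The memory management just described can be sketched in a few lines of Python. All names here (`SlidingWindowDecoder`, `update_fn`, `depth`) are illustrative and not part of the disclosure, and the hard decision is a plain LLR sign test:

```python
from collections import deque

class SlidingWindowDecoder:
    """Illustrative sketch of sliding-window LLR memory management.

    `depth` is the number of succeeding decoding iterations for which a
    slot's (updated) LLRs are kept in memory; `update_fn` stands in for
    one layered belief-propagation iteration over the stored LLRs.
    """

    def __init__(self, depth, update_fn):
        self.depth = depth
        self.window = deque()   # one entry per time slot (list of LLRs)
        self.update_fn = update_fn
        self.decoded = []

    def set_depth(self, depth):
        # Latency/memory can be adapted at run time, transparently
        # to the encoder, by changing the window depth.
        self.depth = depth

    def receive_slot(self, llrs):
        # A full slot of new LLRs (the threshold) triggers an iteration.
        self.window.append(list(llrs))
        for slot in self.window:
            self.update_fn(slot)            # update stored LLRs in place
        # Slots that have been iterated `depth` times drop out of memory;
        # their information bits are decided by the sign of the LLR.
        while len(self.window) > self.depth:
            oldest = self.window.popleft()
            self.decoded.extend(1 if llr < 0 else 0 for llr in oldest)
```

Increasing `depth` trades latency and memory for more iterations per LLR (lower error rate); decreasing it does the opposite, matching the adaptation of the implementation forms below.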

According to a first aspect of the present invention, there is provided a decoder for decoding a LDPC-CTC, the decoder being configured to determine, based on a received signal, statistical values corresponding to information and parity bits of an encoded information bit sequence, start a new decoding iteration if a number of determined statistical values equals or exceeds a threshold, wherein a decoding iteration includes updating the determined statistical values based on actual determined statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder, replace a stored statistical value stored in the memory with an updated statistical value, wherein the updated statistical value is kept in the memory for a number of succeeding decoding iterations, and decode the information bits to which the updated stored statistical values correspond.

In this regard, it is noted that the term "decoder" as used throughout the description and claims in particular refers to hardware, or a combination of hardware and software, which implements a decoding procedure. In particular, the decoder may include a processor or a processing circuit configured to perform the decoding operations or steps. Moreover, the term "received signal" as used throughout the description and claims in particular refers to a channel output, e.g., an electromagnetic signal such as a modulated carrier wave. Furthermore, the term "statistical value" as used throughout the description and claims in particular refers to reliability values corresponding to the received information and parity bits, wherein a reliability value (e.g., a likelihood, likelihood ratio, or log-likelihood ratio) indicates the reliability of making a correct decision when mapping (a part of) the received signal to a particular bit value.

This "sliding-window" decoding procedure provides for low latency and reduces memory requirements. In fact, the latency of the decoding procedure can be instantly/independently modified by the decoder (during decoding) based on performance considerations, without a need to provide feedback/communicate a change of the encoding matrix to the encoder. Moreover, unlike known decoding procedures for LDPC-BCs, the size of the parity check matrix of the LDPC-CTC is substantially independent of code-word length. Furthermore, these benefits can be achieved without (substantially) increasing the hardware complexity of the decoder as compared to a decoder for LDPC BCs.

In a first possible implementation form of the decoder according to the first aspect, the decoder is further configured to remove/overwrite the updated stored statistical values from/in the memory of the decoder after the succeeding decoding iterations have been performed and the information bits corresponding to the updated stored statistical value have been decoded.

Hence, the overall need for memory space can be reduced by re-using memory space freed from updated stored statistical values that drop out of memory.

In a second possible implementation form of the decoder according to the first aspect, the decoder is further configured to adapt the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder.

Hence, the usage of memory space can be reduced by reducing the number of TSs for which statistical values are kept in the memory.

In a third possible implementation form of the decoder according to the first aspect, the decoder is further configured to decrease the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder in response to a request to reduce a latency of the decoding operation. Thus, the latency of the decoding procedure can be reduced.

In a fourth possible implementation form of the decoder according to the first aspect, the decoder is further configured to increase the number of succeeding decoding operations for which the updated statistical values are kept in the memory of the decoder in response to a request to reduce an error rate of the decoding operation. Thus, the performance of the decoding procedure can be improved.

In a fifth possible implementation form of the decoder according to the first aspect, the threshold equals the number of information and parity bits of one layer of the LDPC-CTC.

In this regard, it is noted that the term "layer" as used throughout the description and claims in particular refers to one row for each LDPC-CC matrix on which the LDPC-CTC is based.

In a sixth possible implementation form of the decoder according to the first aspect, updating the statistical values based on the statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder includes parallelly updating different sets of statistical values assigned to different parity bits.

Hence, performance of the decoding procedure can be further improved.

Furthermore, the implementation forms of the decoder may include, but are not limited to, one or more processors, one or more application-specific integrated circuits (ASICs) and/or one or more field-programmable gate arrays (FPGAs). Implementation forms of the decoder may also include other conventional and/or customized hardware such as software-programmable processors.

According to a second aspect of the present invention, there is provided a method of decoding an LDPC-CTC, the method comprising: determining, based on a received signal, statistical values corresponding to information and parity bits of an encoded information bit sequence; starting a new decoding iteration if a number of determined statistical values equals or exceeds a threshold, wherein a decoding iteration includes updating the determined statistical values based on actual determined statistical values and statistical values from a previous decoding iteration; replacing a stored statistical value stored in the memory with an updated statistical value, wherein the updated statistical value is kept in the memory for a number of succeeding decoding iterations; and decoding the information bits to which the updated stored statistical values correspond.

Thus, as indicated above, the latency of the decoding procedure can be instantly/independently modified (during decoding) based on performance considerations, without a need to adapt the encoding procedure. Moreover, unlike known decoding procedures for LDPC-BCs, the size of the parity-check matrix of the LDPC-CTC is substantially independent of the code-word length.

In a first possible implementation form of the method according to the second aspect, the method further comprises removing/overwriting the updated stored statistical values from/in the memory after the succeeding decoding iterations have been performed and the information bits corresponding to the updated stored statistical value have been decoded.

Hence, the overall need for memory space can be reduced by re-using memory space freed from updated stored statistical values that drop out of memory.

In a second possible implementation form of the method according to the second aspect, the method further comprises adapting the number of succeeding decoding operations for which the updated statistical values are kept in the memory.

Hence, the usage of memory space can be reduced by reducing the number of decoding iterations for which statistical values are kept in the memory.

In a third possible implementation form of the method according to the second aspect, the method further comprises decreasing the number of succeeding decoding operations for which the updated statistical values are kept in the memory in response to a request to reduce a latency of the decoding operation.

Thus, the latency introduced by decoding can be reduced.

In a fourth possible implementation form of the method according to the second aspect, the method further comprises increasing the number of succeeding decoding operations for which the updated statistical values are kept in the memory in response to a request to reduce an error rate of the decoding operation.

Thus, decoding performance can be improved.

In a fifth possible implementation form of the method according to the second aspect, the threshold equals the number of information and parity bits of one layer of the LDPC-CTC.

Thus, a decoding operation starts as soon as all bits of the first layer are received.

In a sixth possible implementation form of the method according to the second aspect, updating the statistical values based on the statistical values and statistical values from a previous decoding iteration stored in a memory of the decoder includes parallelly updating different sets of statistical values assigned to different parity bits.

Hence, decoding performance can be further improved.

Moreover, it will be appreciated that method steps and decoder features may be interchanged in many ways. In particular, the features of the disclosed decoder can be part of the method, and the method steps can be implemented by the decoder.

According to a third aspect of the present invention, there is provided a machine-readable medium comprising machine-readable instructions which, when carried out by the machine, cause the machine to perform the method according to the second aspect or any one of the implementation forms of the method according to the second aspect.

BRIEF DESCRIPTION OF THE DRAWINGS

Fig. 1 illustrates horizontal scheduling on a simplex matrix.

Fig. 2 shows a block diagram of a calculus or decoding unit for a turbo decoding message passing procedure based on the normalized min-sum algorithm.

Fig. 3 shows a block diagram of a digital communication system in which the decoding procedure of the present invention may be implemented.

Fig. 4 illustrates an exemplary encoding procedure which may be performed by an LDPC-CTC encoder of the digital communication system of Fig. 3.

Fig. 5 illustrates an example of the encoding procedure for an LDPC-CTC with rate R = k_i/n_c = 3/4.

Fig. 6 shows an LDPC-CTC seed matrix portion according to the example of Fig. 5.

Fig. 7 illustrates a block-based decoding procedure.

Fig. 8 illustrates a slot-based decoding procedure.

Fig. 9 illustrates a "sliding-window" decoding procedure which may be performed by an LDPC- CTC decoder of the digital communication system of Fig. 3.

Fig. 10 shows a flow chart of steps of the decoding procedure.

DETAILED DESCRIPTION

The following provides a non-limiting example of an encoding/decoding procedure for LDPC-CTC codes, which is described with reference to Fig. 3 to Fig. 10. To set the stage for the following disclosure and introduce a notation coherent with the notation used in the prior art, the "sum-product" algorithm (SPA) proposed by D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, no. 2, pp. 399-431, Mar. 1999, and its low-complexity version, the "normalized min-sum" (NMS) algorithm, are described in the following with reference to Fig. 1 and Fig. 2.

In the notation introduced by MacKay (which is based on Tanner graphs), Lq_ji denotes the message of the j-th bit node to the i-th check node and Lr_ij denotes the message of the i-th check node to the j-th bit node, wherein the number of check nodes is equal to the number r of rows of the parity matrix H, while the number of bit nodes is equal to the number n of columns.

In the logarithmic domain, the quantities of interest may be the following probability ratios:

• reliability information/values derived from the channel output: LQ_j = log[ P(x_j = 0 | y_j) / P(x_j = 1 | y_j) ];

• check node value: Lr_ij = log[ r_ij(0) / r_ij(1) ];

• bit node value: Lq_ji = log[ q_ji(0) / q_ji(1) ].

With R_i denoting the set of indexes of the columns of the parity-check matrix H that, in row i, have the value '1', and with C_j denoting the set of indexes of the rows that, in column j, have the value '1', the SPA algorithm may be summarized as in the following lines (the updates are reconstructed here in their standard log-domain form):

Lr_ij = 2 atanh( Π_{j' ∈ R_i\j} tanh(Lq_j'i / 2) ),

Lq_ji = LQ_j + Σ_{i' ∈ C_j\i} Lr_i'j,

with x̂ H^T = 0 being the parity-check equation of the code. If a decoded word x̂ does not satisfy the parity-check equation, it can be assumed that the decoded word contains errors. With

• wr as the check node degree, which is equivalent to the weight of the rows of the matrix H, i.e., the number of '1's in a row of H, and

• wc as the bit node degree, which is equivalent to the weight of the columns of the matrix H, i.e., the number of '1's in a column of H, the SPA requires the following memory structures:

• LQ of dimension 1xn (soft bit word);

• Lq of dimension wc x n (soft bit word); and

• Lr of dimension r x wr (soft bit word).

The soft bit words are words of n_b bits and correspond to the likelihood messages, i.e., the probability ratios, and may, for example, be up to 8 bits wide. There may be three different types of scheduling (wherein each type of scheduling defines an order in which the messages memorized in the memories are processed): flooding, horizontal and vertical.
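A quick way to see the footprint of these three structures is to count soft words; the following helper is ours, for illustration only:

```python
def spa_memory_bits(n, r, wc, wr, nb=8):
    """Memory (in bits) of the SPA structures for an r x n matrix H with
    bit node degree wc and check node degree wr, using nb-bit soft words."""
    mem = {
        "LQ": 1 * n * nb,    # a-posteriori soft bits, dimension 1 x n
        "Lq": wc * n * nb,   # bit-to-check messages, dimension wc x n
        "Lr": r * wr * nb,   # check-to-bit messages, dimension r x wr
    }
    mem["total"] = mem["LQ"] + mem["Lq"] + mem["Lr"]
    return mem
```

For instance, a toy 5 x 10 matrix with wc = 3 and wr = 6 needs 560 bits in total; the Lq structure is what horizontal scheduling, discussed next, avoids storing.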

Flooding scheduling updates all check nodes first and then proceeds to update all bit nodes. The phases, for each iteration, are shown in the following two-phase pseudo-code:

Phase 1 - for every row i:
a) update the probability equation related to the check node (reading from the memory Lq);
b) marginalization of the probability equation related to the check node (writing to the memory Lr).

Phase 2 - for every column j:
a) update the a-posteriori messages (updating the memory structure storing LQ);
b) update the probability related to the bit node (updating the memory structure storing Lq).
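For concreteness, one flooding iteration can be sketched as follows for a toy parity-check matrix. The tanh-domain check-node rule is the standard SPA form; the matrix and LLR values are invented for the example:

```python
import math

def spa_iteration(H, LQ_ch, Lq, Lr):
    """One flooding iteration of the sum-product algorithm (sketch).

    H:     parity-check matrix as a list of 0/1 rows
    LQ_ch: channel LLRs, one per column
    Lq:    bit-to-check messages, Lq[i][j] for row i, column j
    Lr:    check-to-bit messages, Lr[i][j]
    Returns the updated a-posteriori LLRs LQ.
    """
    r, n = len(H), len(H[0])
    # Phase 1: check-node update with marginalization (writes Lr)
    for i in range(r):
        cols = [j for j in range(n) if H[i][j]]
        for j in cols:
            prod = 1.0
            for j2 in cols:
                if j2 != j:                  # extrinsic: exclude own message
                    prod *= math.tanh(Lq[i][j2] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)  # numerical guard
            Lr[i][j] = 2.0 * math.atanh(prod)
    # Phase 2: bit-node update (writes LQ and Lq)
    LQ = list(LQ_ch)
    for j in range(n):
        rows = [i for i in range(r) if H[i][j]]
        for i in rows:
            LQ[j] += Lr[i][j]
        for i in rows:
            Lq[i][j] = LQ[j] - Lr[i][j]      # marginalization
    return LQ
```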

The above equations are equivalent to the original SPA ones, but make explicit the calculation of the extrinsic information that was implicit in the previous formulation over R_i\j (R_i\j indicating the set of indexes in which the message related to the element under calculation is not considered). This is an important aspect of the decoding algorithm used and is usually called marginalization. Also important is that this way of calculating the extrinsic information requires fewer operations.

The complexity of the SPA can be reduced by using the following approximation:

Lr_ij ≈ ( Π_{j' ∈ R_i\j} sign(Lq_j'i) ) · min_{j' ∈ R_i\j} |Lq_j'i| .

I.e., as the result is dominated by the smallest value in the sum, it can be approximated by calculating the two absolute minimums of the moduli of the Lq messages. The Lq messages are those related to the parity equation of the row of the parity-check matrix H that is processed. The necessity to calculate two minimums (instead of only one) comes from the fact that the information is always extrinsic. If, for example, the sequence of Lq magnitudes to process is {4.0, 5.1, 5.2, 3.1, 2.1, 3.3, 1.4, 6.0, 6.0}, where the absolute minimum is in the seventh position, the output sequence will be {1.4, 1.4, 1.4, 1.4, 1.4, 1.4, 2.1, 1.4, 1.4}. The value in the seventh position is now the second absolute minimum. This simplification reduces performance to an extent that can be (sometimes completely) recovered by multiplying the output sequence by a scaling factor γ of about 0.7.
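The two-minimum trick from the example above can be sketched as follows (the function name and the separate handling of signs are ours):

```python
def nms_magnitudes(mags, gamma=1.0):
    """Extrinsic magnitudes of the normalized min-sum approximation:
    every position gets the absolute minimum over all *other* positions,
    optionally scaled by gamma (signs are handled separately)."""
    m1 = min(mags)                           # absolute minimum
    pos = mags.index(m1)
    m2 = min(mags[:pos] + mags[pos + 1:])    # second absolute minimum
    return [gamma * (m2 if k == pos else m1) for k in range(len(mags))]
```

With the magnitude sequence from the text, `nms_magnitudes([4.0, 5.1, 5.2, 3.1, 2.1, 3.3, 1.4, 6.0, 6.0])` reproduces the stated output, with 2.1 appearing only in the seventh position.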

In this regard, it is important to note that the variance of the Gaussian noise related to each LLR coming from the demapper need not be known.

Another important choice is the scheduling type. Reasons for using horizontal scheduling are:

• the memory saving, since the Lq memory is not required;

• a faster convergence, as for a given performance, the iterations are halved; and

• that it is suitable for a parallel architecture.

In this case, the decoding procedure has only one phase, which combines the two phases (Phase 1 and Phase 2) for calculating the values Lq. As described by M. Mansour and N. Shanbhag, "Memory-efficient turbo decoder architectures for LDPC codes", IEEE Workshop on Signal Processing Systems, pp. 159-164, Oct. 2002, the "two-phase message passing" (TPMP) can be replaced by "turbo decoding message passing" (TDMP). In brief, the TDMP is characterized in that: a) the bit node phase is comprised in the check node phase;

b) during the processing of the current check-node all the connected bit-nodes are updated; and

c) the propagation of "always up-to-date" messages speeds up convergence by a factor of two.

For example, looking at the matrix in Fig. 1, the processing of the third parity equation (cn2) requires four bit-node messages, 3 of which have already been updated in the processing of the two previous equations (i.e., q0 and q1 in the updating of row 1, q6 in the updating of row 2). With I_i denoting the subset of the column indexes with value '1' in row i, the pseudo-code then becomes:

for each iteration
    for each row i of H
        Lq = LQ(I_i) - Lr(i)
        Lr(i) = NMS(Lq)
        LQ(I_i) = Lq + Lr(i)
    end
end

The terms LQ and Lr are vectors that are read from the memory structures LQ and Lr. If, for example, the weight of the i-th row is 22, LQ and Lr each comprise 22 elements (wherein each element is 8 bits wide), and LQ(I_i) indicates that 22 (not adjacent) values have to be selected from the memory LQ based on I_i as index (or address), which can be derived from row i of the parity-check matrix H. With these 22 values, the Lq values are calculated which, after being processed by an NMS function, give the new values to update the memory Lr. Adding the previously obtained Lq to these Lr, the locations of LQ indicated by I_i can be updated. In the function that calculates Lr = NMS(Lq), for efficiency reasons and for the architecture chosen, the product of all the signs may be calculated and finally multiplied by the own sign to extract the extrinsic information (marginalization). This is the analogue of the calculus of the extrinsic information using two minimums. At the end, the modulus is scaled by the factor gamma (γ). To give an overview, the functional blocks of a calculus or "decoding unit" (DU) for TDMP based on the NMS algorithm are depicted in Fig. 2.
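The single-phase row update just described (read LQ and Lr, form Lq, apply the NMS function, write back) can be sketched as follows. The helper name and the in-place update convention are ours, and a row weight of at least two is assumed:

```python
def tdmp_row_update(LQ, Lr_row, idx, gamma=0.7):
    """One TDMP check-node update for row i (sketch).

    LQ:     a-posteriori LLR memory (updated in place)
    Lr_row: stored check-to-bit messages of this row (updated in place)
    idx:    column indexes I_i of the '1' entries in row i of H
    """
    # Subtract the old check messages to obtain bit-to-check messages Lq
    Lq = [LQ[j] - Lr_row[k] for k, j in enumerate(idx)]
    # NMS: overall sign product and the two smallest magnitudes
    sign_prod = 1
    for v in Lq:
        sign_prod *= -1 if v < 0 else 1
    mags = [abs(v) for v in Lq]
    m1 = min(mags)
    p = mags.index(m1)
    m2 = min(mags[:p] + mags[p + 1:])
    for k, j in enumerate(idx):
        own_sign = -1 if Lq[k] < 0 else 1
        sign = sign_prod * own_sign          # marginalize out the own sign
        mag = m2 if k == p else m1           # marginalize out the own minimum
        Lr_row[k] = gamma * sign * mag
        LQ[j] = Lq[k] + Lr_row[k]            # always-up-to-date a-posteriori
```

Because LQ is updated immediately, the next row processed already sees the refreshed messages, which is exactly what halves the iteration count compared with flooding.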

Fig. 3 shows a block diagram of a digital communication system in which the decoding procedure of the present invention may be implemented (e.g., at the decoder CH_DEC of the system). As shown in Fig. 3, a binary word A1 (information sequence) of k_i bits may be encoded by a channel encoder CH_ENC that adds r = n_c - k_i parity bits (redundancy sequence) to the information sequence to obtain a binary word A2 (encoded sequence) of n_c bits. The channel code-rate R is defined as the ratio between the number of information bits k_i and the number of encoded bits n_c, thus R = k_i/n_c.

The modulator MOD may transform the encoded vector A2 into a modulated signal vector CH_IN, which is in turn transmitted through a channel CH which may, for example, correspond to a wired and/or wireless transmission path. The encoder CH_ENC and the modulator MOD may be comprised in an electronic device that performs the encoding/modulation procedure using customized hardware and/or a processor executing instructions stored in a machine-readable medium. Since the channel CH is usually subject to noisy disturbance NS, the channel output CH_OUT may differ from the channel input CH_IN. On the receiver side, the channel output CH_OUT may be processed by the demodulator DEM, which may perform the inverse operation of MOD (demodulation) and produce likelihood ratios (soft bits). The channel decoder CH_DEC may use the redundancy in the received sequence A3 to correct errors in the information part of the received sequence A3 and produce a decoded signal A4 which is an information sequence estimate.
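As an illustration of the soft bits produced by DEM, a BPSK demapper over an AWGN channel computes the LLR 2y/σ². The mapping 0 -> +1, 1 -> -1 is an assumption made here for the example (and, as noted earlier, the normalized min-sum decoder does not actually require knowledge of the noise variance):

```python
def bpsk_demap_llr(y, noise_var=1.0):
    """LLRs (soft bits) for BPSK (0 -> +1, 1 -> -1) over an AWGN channel:
    LLR_k = log[ P(x_k = 0 | y_k) / P(x_k = 1 | y_k) ] = 2 * y_k / noise_var."""
    return [2.0 * yk / noise_var for yk in y]
```

A positive LLR then favors bit 0, a negative LLR favors bit 1, and the magnitude expresses the reliability of that decision.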

The encoder/decoder CH_ENC, CH_DEC may perform the encoding/decoding procedure based on a convolutional code with an LDPC matrix as proposed by A. J. Felstrom and K. Zigangirov (see above). As a consequence, the n_c encoded bits output by the encoder CH_ENC depend not only on the k_i input bits received at the current iteration, but also on the M previous input blocks (of k_i input bits each).

The transposed parity-check matrix, called syndrome matrix, of a periodical binary convolutional code with memory M, code-rate R = k_i/n_c, code-period T = M - 1 and code-period number p is:

$$H^T = \begin{bmatrix} H_0^T(0) & H_1^T(1) & \cdots & H_M^T(M) & & \\ & H_0^T(1) & H_1^T(2) & \cdots & H_M^T(M+1) & \\ & & \ddots & \ddots & & \ddots \end{bmatrix} \qquad (4)$$

where each element H_m^T(t) with m = 0, 1, 2, ..., M and t = 0, 1, 2, ... is a binary sub-matrix of size n_c x (n_c - k_i).

For example, for a code-rate R = k_i/n_c = 2/3, a code-period T = M - 1 = 7 - 1 = 6 and a code-period number p = 2, each element of the transposed parity-check matrix is a parity matrix of size n_c x (n_c - k_i) = 3 x 1. Then, the syndrome matrix H^T of equation (4) becomes:

The low-density parity-check matrix may be characterized by the following parameters:

• M determines the code period T = M - 1;

• J_u is the row weight (number of elements equal to '1' in the sub-row) corresponding to the information sequence;

• J_v is the row weight (number of elements equal to '1' in the sub-row) corresponding to the parity bits; and

• K is the column weight (number of elements equal to '1' in the column) starting from column

The information sequence and the encoded sequence can be expressed by equation (6) and equation (7) as:

The received sequence will be correct if equation (8) is satisfied:

$$v \cdot H^T = 0 \qquad (8)$$

where v denotes the encoded sequence of equation (7).

For all information bits to be protected by corresponding parity bits, it is necessary to impose the following condition:

where n = 0,1,2,...

A systematic LDPC-CC encoder may carry out a procedure corresponding to the following equations:

As shown by T. Kishigami, Y. Murakami, and I. Yoshii, "LDPC Convolutional Codes for IEEE 802.16m FEC Scheme", IEEE 802.16 Broadband Wireless Access Working Group, LDPC-CCs have advantages in terms of encoder complexity and decoder latency as compared to conventional FEC classes such as Convolutional Turbo Codes (CTC) and LDPC Block Codes (LDPC-BC). A binary parity-check matrix built as suggested by A. J. Felstrom and K. Zigangirov (see above) for code-rate R_1 = k_i1/n_c1, code-period T = 5, and code-period number p = 3 may be composed of three code periods, wherein the first two code periods are complete and the last code period is incomplete. The first code period and the second code period can be repeated several times (to allow for variable code-word lengths), which is why the parity-check matrix may be called semi-infinite.

In order to improve the LDPC-CC, it is possible to apply a "tail-biting" operation that masks the parity-check matrix columns with lower weight, which correspond to the last code period. In this last code period, information bits equal to '0' are encoded (and not transmitted) in order to generate the corresponding parity bits, which are transmitted.

The binary parity-check matrix construction for LDPC-CC as suggested by A. J. Felstrom and K. Zigangirov (see above):

• has no column weight equal to '1'; and

• the row layered decoding, already used in LDPC-BC, cannot be used. Thus:

• a serial layered decoding, which decodes the same information sequence, achieves good performance (fast convergence of the belief propagation algorithm) but leads to high latency;

• a parallel layered decoding, which decodes the same information sequence, leads to weak performance (slow convergence of the belief propagation algorithm) but achieves low latency; and

• a parallel layered decoding, which decodes a different information sequence, achieves good performance (fast convergence of the belief propagation algorithm) but leads to high latency.

Besides, the binary parity-check matrix construction for LDPC-CC as suggested by A. J. Felstrom and K. Zigangirov (see above) may suffer from cycles of length 4 which lead to performance degradation of the LDPC-CC. The proposed encoding/decoding procedure for LDPC-CTC codes overcomes the aforesaid drawbacks as it constructs the seed matrix for LDPC-CTC by interlacing (two or more) binary parity-check matrixes of LDPC-CCs, wherein each LDPC-CC may be based on the construction procedure proposed by A. J. Felstrom and K. Zigangirov (see above), with (or without) concatenated parity part.

A seed matrix for a specific rate R, masked with a full exponent matrix E for the rate R and expanded with a spreading matrix (cyclic permutation matrix), may produce the binary parity-check matrix H_rxn for the LDPC-CTC, which leads to significant performance gains and allows carrying out parallel layered decoding with good performance (fast convergence of the BPA) and low decoding latency. For example, the seed matrix may be of the form:

In this regard, it is noted that the LDPC-CTC does not require an interleaver circuit, because this operation can be performed by spreading matrixes with a different exponent value for each parallel path (two or more). Moreover, there exists a tradeoff between the desired performance and the number of parallel concatenated LDPC-CCs (hardware complexity). The spreading matrix is a square binary matrix (circulant permutation matrix) of weight equal to '1' and size Z_f x Z_f, Z_f being the spreading factor, which is tied to the code-word length of the resulting LDPC code. For example, if the spreading factor is Z_f = 48, then the full exponent matrix E will have elements (exponents) between 0 and (Z_f - 1) = 47.

The exponent matrix elements with values between 0 and (Z_f - 1) may thus represent cyclic permutation matrices of the identity matrix (with weight equal to '1' for each row and for each column) of size Z_f x Z_f. The value of the exponent matrix element may indicate the cyclic shift to the right of the identity matrix, as shown in the matrixes below. All non-negative exponent matrix elements may correspond to circulant permutation matrices; for example, the zero exponent permutation matrix may correspond to the identity matrix:

The same benefits may be obtained if the cyclic shift is a cyclic shift to the left instead of to the right. Each element of the exponent matrix of value equal to may represent a null square matrix of size Z_f x Z_f, as shown in the matrix

For example, the exponent matrix E may be of the form:

Table 2
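The expansion of an exponent matrix into the binary parity-check matrix can be sketched as follows. This is an illustrative reconstruction with hypothetical function names: every non-negative exponent is replaced by a right cyclic shift of the Z_f x Z_f identity, and null entries (assumed here to be marked by a negative exponent, which the text leaves unspecified) by the all-zero block:

```python
def circulant(exponent, zf):
    """Z_f x Z_f cyclic right-shift of the identity by `exponent`.
    A negative exponent (assumed null marker) gives the all-zero matrix."""
    if exponent < 0:
        return [[0] * zf for _ in range(zf)]
    # Row r has its single '1' in column (r + exponent) mod zf.
    return [[1 if c == (r + exponent) % zf else 0 for c in range(zf)]
            for r in range(zf)]

def expand(exp_matrix, zf):
    """Expand an exponent matrix into the binary parity-check matrix by
    replacing every exponent with its circulant block."""
    h = []
    for exp_row in exp_matrix:
        blocks = [circulant(e, zf) for e in exp_row]
        for r in range(zf):
            h.append([val for blk in blocks for val in blk[r]])
    return h
```

For example, `expand([[0, 2, -1]], 4)` yields 4 rows of 12 bits: an identity block, a shift-by-2 block, and a null block side by side.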

Fig. 4 illustrates an example of an encoding procedure which may be performed by the encoder CH_ENC. The rate of the LDPC-CTC is given by:

The LDPC-CTC with rate R = k_i/n_c is constituted by w parallel LDPC-CC codes. The LDPC-CCs may be designed with the following rates:

Considering, for example, a LDPC-CTC with rate R = k_i/n_c = 3/4 constituted by w = 4 parallel LDPC-CCs, the single LDPC-CCs have rates R_1, R_2, R_3, R_4 which may be computed using equation (12) as:

In the implementation example of Fig. 5, a LDPC-CTC with rate R = k_i/n_c = 3/4 is used, which is based on two (w = 2) parallel LDPC-CCs, a first LDPC-CC with rate R_1 = k_i1/n_c1 = 6/7 and a second LDPC-CC with rate R_2 = k_i2/n_c2 = 7/8. As indicated above, the number of parallel LDPC-CCs may be more than two and may be chosen while taking into account the tradeoff between desired performance and hardware complexity. To build a LDPC-CTC seed matrix as shown in Table 1, the first LDPC-CC seed matrix with rate R_1 = 6/7 may be arranged to interlace with the second LDPC-CC seed matrix with rate R_2 = 7/8. Arranging the first LDPC-CC seed matrix may comprise adding a (zero) column between consecutive sets of 7 columns of the matrix, wherein the added columns are dedicated to (and will be filled with one or more entries of) the parity sequences of the second LDPC-CC seed matrix with rate R_2 = 7/8.
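From the two examples above (R = (6/7)·(7/8) = 3/4 for w = 2, and four component rates for w = 4), the component rates appear to telescope. The helper below is a hypothetical reconstruction of this pattern for component codes with a single parity element (r = n_c - k_i = 1); the function name and the exact form of equation (12) are assumptions:

```python
from fractions import Fraction

def component_rates(k_i, w):
    """Rates of the w parallel LDPC-CCs of an overall rate R = k_i/(k_i + 1)
    LDPC-CTC, reconstructed so that the product R_1 * R_2 * ... * R_w
    telescopes back to the overall rate."""
    return [Fraction(w * k_i + j - 1, w * k_i + j) for j in range(1, w + 1)]

rates = component_rates(k_i=3, w=2)        # the Fig. 5 example: [6/7, 7/8]
overall = Fraction(1)
for r in rates:
    overall *= r                           # telescopes to 3/4
```

With w = 4 the same rule gives 12/13, 13/14, 14/15, 15/16, whose product is again 3/4, matching the w = 4 example.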

The entries of the rows of the first "arranged" LDPC-CC seed matrix with rate R_1 = 6/7 may be copied to the odd rows of the LDPC-CTC seed matrix of Table 1, while the entries of the rows of the second LDPC-CC seed matrix with rate R_2 = 7/8 may be copied to the even rows of the LDPC-CTC seed matrix of Table 1.
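The two arranging steps described above (column padding of the first seed matrix, then row interlacing of the two matrices) can be sketched as follows, with hypothetical function names:

```python
def widen_first_seed(rows, group=7):
    """Insert a zero column after every `group` columns of the first seed
    matrix; the inserted columns are reserved for the parity entries of
    the second (rate 7/8) LDPC-CC seed matrix."""
    widened = []
    for row in rows:
        out = []
        for i, v in enumerate(row):
            out.append(v)
            if (i + 1) % group == 0:
                out.append(0)
        widened.append(out)
    return widened

def interlace_seed_matrices(rows_a, rows_b):
    """Copy rows of the first (widened) seed matrix to the odd rows of the
    LDPC-CTC seed matrix and rows of the second to the even rows."""
    out = []
    for ra, rb in zip(rows_a, rows_b):
        out.extend([ra, rb])
    return out
```

For a 14-column first-code row with `group=7`, two zero columns are inserted (one after each set of 7), giving 16 columns ready to receive the second code's parity entries.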

Fig. 6 shows a portion (consisting of 18 rows and 24 columns) of the LDPC-CTC seed matrix of Table 1, wherein the rows filled with entries of the first LDPC-CC seed matrix with rate R_1 = 6/7 and the rows filled with entries of the second LDPC-CC seed matrix with rate R_2 = 7/8 have been highlighted using different shades of gray. The following correspondence exists between the rows of the first/second LDPC-CC seed matrix and the rows of the LDPC-CTC seed matrix:

and so on.

Therefore, although the LDPC-CTC may be constructed from several LDPC-CCs placed in parallel form, the LDPC-CTC can be represented by a single seed matrix, a single exponent parity-check matrix and a single binary parity-check matrix, which may be used in the encoding/decoding procedures.

As for LDPC-BCs constructed with a semi-random technique, the exponent parity-check matrix of the LDPC-CTC may be used in the encoding procedure because it takes advantage of the cyclic permutation matrix features. In the decoding process, the binary parity-check matrix of the LDPC-CTC may be used.

For the first LDPC-CC, equation (7), which governs the encoding procedure, remains unchanged and, for a code-rate R_1 = 6/7, can be expressed as:

For the second LDPC-CC with code-rate R_2 = 7/8, equation (7) becomes:

To carry out the LDPC-CTC encoding procedure, equation (15) and equation (16) may be rearranged as:

Equation (17) allows carrying out the LDPC-CTC encoding procedure with a code-rate R = R_1 · R_2 = (6/7) · (7/8) = 3/4.

Thus, the LDPC-CTC seed matrix in Table 1 may be masked with the full exponent matrix E in Table 2 for the rate R and expanded with a spreading matrix (cyclic permutation matrix) to produce a binary parity-check matrix H_rxn of the LDPC-CTC which allows parallel layered decoding with good performance (fast convergence of the belief propagation algorithm) and low decoding latency.

In this regard, it is noted that the code-rate of a LDPC-CTC for tail-biting (TB-LDPC-CTC) is tied to the rate of the code R = k_i/n_c, the parallel LDPC-CC number w, the code-period length T, the code-period number p and the parity element number r = n_c - k_i.

The LDPC-CTC seed matrix of Table 1 masked with the full exponent matrix E of Table 2 has the following characteristics:

• exponent parity-check matrix column number

• information column number for tail-biting

• exponent parity-check matrix row number

• exponent parity-check matrix size

The rate for the TB-LDPC-CTC becomes:

The LDPC-CTC seed matrix of Table 1 masked with the full exponent matrix E of Table 2 is periodic in its rows and in its columns. The periodicity of the rows is equal to the product w · r · T = 10, while the periodicity of the columns is given by a corresponding product. Thus, it is not necessary to store the exponent parity-check matrix H_exp and the binary parity-check matrix H_rxn in their real size; it suffices to store w · r · T = 10 rows for the exponent matrix and w · r · T · Z_f = 10 · 48 = 480 rows for the binary matrix.

In order to construct the exponent matrix using well-known construction rules, it may be operated on columns, because this allows using 40 incremental steps (unlike the same operation on rows, which would only allow using 10 incremental steps). By operating on columns, it may hence be possible to reduce the number of cycles of length 4 in the exponent parity matrix and consequently to improve the code performance. That is, the increase of the number of incremental steps in the exponent parity matrix construction may allow achieving an overlap factor (the scalar product between any two columns of the binary parity-check matrix H_rxn) less than or equal to 1 (≤ 1).

An overlap factor ≤ 1 makes the LDPC-CC robust, i.e. it avoids cycles of length 4 which may degrade the performance of the iterative decoding algorithm and thus reduce code performance.
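The overlap factor defined above (the maximum scalar product between any two distinct columns of the binary parity-check matrix) can be checked directly; a brute-force sketch with a hypothetical function name:

```python
def overlap_factor(h):
    """Maximum scalar product between any two distinct columns of a binary
    parity-check matrix. A value <= 1 means no pair of columns shares more
    than one row, i.e. the Tanner graph has no cycles of length 4."""
    n = len(h[0])
    cols = [[row[c] for row in h] for c in range(n)]
    worst = 0
    for i in range(n):
        for j in range(i + 1, n):
            dot = sum(a & b for a, b in zip(cols[i], cols[j]))
            worst = max(worst, dot)
    return worst
```

A 2x2 all-ones matrix has overlap factor 2 (a length-4 cycle), while a matrix whose column pairs share at most one '1' has overlap factor 1.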

In the encoding procedure, the exponent parity-check matrix H_exp may be used to take advantage of the following properties:

• the product of two cyclic permutation matrices of the identity matrix with exponents α and β is always a cyclic permutation matrix of the identity matrix with exponent (α + β) (modulo Z_f); and

• the product of a cyclic permutation matrix of the identity matrix with exponent α and a vector of size Z_f produces the same vector of size Z_f but with a cyclic shift of its elements equal to α.
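Both properties can be verified numerically for a small Z_f; a sketch with hypothetical helper names:

```python
def circulant(alpha, zf):
    """Cyclic right-shift of the Z_f x Z_f identity matrix by alpha:
    row r has its single '1' in column (r + alpha) mod zf."""
    return [[1 if c == (r + alpha) % zf else 0 for c in range(zf)]
            for r in range(zf)]

def matmul(x, y):
    """Binary matrix product."""
    n = len(x)
    return [[sum(x[i][k] * y[k][j] for k in range(n)) % 2 for j in range(n)]
            for i in range(n)]

def matvec(m, v):
    """Matrix-vector product."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in m]

zf = 8
a, b = 3, 7
# Property 1: P^a * P^b = P^((a + b) mod Z_f)
prod = matmul(circulant(a, zf), circulant(b, zf))
# Property 2: P^a applied to a vector cyclically shifts its elements by a
shifted = matvec(circulant(a, zf), list(range(zf)))
```

Here `prod` equals the circulant with exponent (3 + 7) mod 8 = 2, and `shifted` is the input vector with every element moved by 3 positions, which is why the encoder can work on exponents instead of full matrices.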

In the decoding procedure, the binary parity-check matrix may be used, because the iterative algorithm updates the individual reliability values, each corresponding to a single bit.

With the block structured information sequence being denoted by and the block structured encoded sequence being denoted by , the single elements are vectors of size equal to the spreading factor Z_f = 48 and the received sequence will be correct if the following equation is satisfied:

The systematic block structured LDPC-CTC encoding procedure derived from equation (17) may be carried out based on:

In the example, the exponents and of the LDPC-CTC parity-check matrix are cyclic permutation matrices of size Z_f = 48. The coefficients and are tied to the information sequence, while the coefficients and are tied to the parity sequence of, respectively, the first LDPC-CC and the second LDPC-CC. Each time slot n = 0, 1, 2, 3, ... corresponds to a block of bits (with information bits and w · r · Z_f = 96 parity bits).

The coefficients are always comprised between '0' and (Z_f - 1) = 47, but the coefficients which correspond to the parity columns of the exponent matrix may be set equal to '0' to minimize the hardware complexity of the LDPC-CTC encoding process. Hence, the coefficient may correspond to the identity matrix.

In other words, to reduce the number of arithmetic operations in the LDPC-CTC encoding procedure, the exponent parity-check matrix may be constructed by setting to '0' (cyclic permutation matrix equal to the identity matrix) the last exponent (rightmost element) of each row, for all time slots n = 0, 1, 2, 3, ..., corresponding to the encoded parity sequence.

Communication services may require different degrees of latency according to circumstances/transmission requirements, such as real-time broadcasting, time-sensitive services and fast responsiveness of networks for the tactile internet, the Internet of Things (IoT), etc. Since the encoding/decoding procedures may represent the predominant source of latency introduced to the system, a possibility to adapt the latency introduced by the encoding/decoding procedures may be beneficial.

Current systems usually support different requirements regarding latency and a dynamic change of latency by exchanging information between the decoder CH_DEC and the encoder CH_ENC. This exchange of control messages can be time consuming and produce overhead in the network which leads to low performance of the procedure.

The control messages may include information regarding a modified encoding matrix, i.e., an encoding matrix of another size that permits the encoder to reduce the length n' of the code-word, where n' ≠ n, to obtain a (substantial) change in the latency. In the case of block codes, consecutive blocks are encoded independently of each other and decoding cannot start before the complete frame of n bits is available at the input of the decoder. This approach, which is illustrated in Fig. 7, may be referred to as block-based decoding.

With LDPC-CCs, in contrast, the encoding procedure depends on several input blocks in accordance with the memory of the code. The input block length for convolutional codes may be, for example, in the order of a few information bits. The decoding can be performed as soon as the first parity bit is available at the decoder CH_DEC. This approach, which is illustrated in Fig. 8, may be referred to as slot-based decoding.

To perform a slot-based decoding procedure and to exploit the small structural latency of LDPC-CC, a modified decoding procedure is used. The inner core of the decoding procedure is based on a simplified version of the standard BPA but the IN-OUT memories are managed in a different manner.

As illustrated in Fig. 9, a single iteration is performed starting from the last new LLRs acquired in the current time slot (TS), and decoding involves updating the statistical values up to the last memorized LLRs (wherein the memorized LLRs may correspond to several code periods, depending on the memory depth). Old LLRs in the memory of the decoder CH_DEC are iterated several times and can be dropped from the memory according to the desired latency/performance. The latency involved in carrying out the decoding procedure can be modified by the decoder CH_DEC without a need to inform the encoder CH_ENC, as no change of the encoding matrix is required and the size of the parity-check matrix of the LDPC-CTC is independent of the codeword length.

Fig. 10 shows a flow chart of steps of the decoding procedure. After having received the channel output CH_OUT, the demodulator DEM (which may be integrated into the decoder CH_DEC or into a separate device) may demodulate the channel output CH_OUT, and the decoding procedure may start at the decoder CH_DEC by determining LLRs corresponding to the information and parity bits of the binary word A2, as indicated at step 10 of the flowchart shown in Fig. 10. The length of the binary word A2 may be equal to the length of a TS.

Once one or more LLRs for the parity bits of the binary word A2 have been determined, a new decoding iteration may be started as indicated in step 12 of the flowchart shown in Fig. 10. As shown in Fig. 9, the LLRs which are currently kept in the memory of the decoder CH_DEC may be updated (e.g., based on a turbo decoding procedure) and once the decoding operation has been completed, the sliding window may be advanced by one layer and the LLRs corresponding to the last layer that is dropped may be removed from the memory to provide space for LLRs of the new layer.
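The window management described above (new LLRs pushed in, the oldest layer hard-decided and dropped, with a decoder-adjustable memory depth) might be sketched as follows; the class and method names are hypothetical and the belief-propagation update is only a placeholder:

```python
from collections import deque

class SlidingWindowDecoder:
    """Sketch of the decoder's LLR window memory management."""

    def __init__(self, depth):
        # `depth` layers of LLRs are kept; a deeper window improves
        # performance, a shallower one reduces latency and memory.
        self.window = deque(maxlen=depth)

    def set_depth(self, depth):
        # The decoder can adapt the memory depth on its own; no control
        # message to the encoder is required.
        self.window = deque(self.window, maxlen=depth)

    def push_slot(self, llrs):
        """Add the LLRs of a new time slot. Returns the hard decisions of
        the layer that drops out of the window (or None while filling)."""
        dropped = None
        if len(self.window) == self.window.maxlen:
            dropped = [0 if l >= 0 else 1 for l in self.window[0]]
        self.window.append(llrs)   # deque(maxlen=...) evicts the oldest layer
        # A real decoder would now run one (simplified) BPA iteration over
        # all LLRs still in the window; omitted in this sketch.
        return dropped
```

With depth 3, the first three pushed slots return nothing; the fourth returns the hard decisions for the first slot, which is exactly when its layer leaves the sliding window.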

The LLRs of the to-be-dropped columns may be used to make a hard decision on the received information bits as indicated at step 16. If the latency/memory requirements are to be adapted, the memory depth may be modified as indicated in Fig. 9.