

Title:
COMMUNICATION UNIT AND METHOD OF DECODING
Document Type and Number:
WIPO Patent Application WO/2004/038928
Kind Code:
A1
Abstract:
A method (600, 800) for turbo decoding one or more data blocks. The method includes the steps of receiving (602, 802) one or more data blocks in a plurality of time slots at a communication unit. At least one Backward Processor computes (625) backward path metrics for a plurality of data slots and stores the backward path metrics in a storage element. A Forward Processor computes (645, 835) forward path metrics for the plurality of data slots. A data block determination function calculates and outputs (648, 838) decoded data for the data blocks based on the forward path metrics and the stored backward path metrics. By storing backward path metrics in a turbo decoding operation, the data block determination function, for example an a-posteriori probabilities module, calculates and outputs decoded data using reduced storage space when compared to known techniques of storing forward path metrics. There is an advantage in delay over known 'sliding window' decoding techniques.

Inventors:
KESSELMAN KEREN (ES)
BEN-ZUR AVI (IL)
Application Number:
PCT/EP2003/011570
Publication Date:
May 06, 2004
Filing Date:
October 17, 2003
Assignee:
MODEM ART LTD (IL)
KESSELMAN KEREN (ES)
BEN-ZUR AVI (IL)
International Classes:
H03M13/29; (IPC1-7): H03M13/29
Foreign References:
GB 2365291 A, 2002-02-13
US 2002/0174401 A1, 2002-11-21
EP 1115209 A1, 2001-07-11
US 5933462 A, 1999-08-03
Other References:
VITERBI A J: "AN INTUITIVE JUSTIFICATION AND A SIMPLIFIED IMPLEMENTATION OF THE MAP DECODER FOR CONVOLUTIONAL CODES", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE INC. NEW YORK, US, vol. 16, no. 2, 1 February 1998 (1998-02-01), pages 260-264, XP000741780, ISSN: 0733-8716
Attorney, Agent or Firm:
Wray, Antony John (Grove House Lutyens Close, Chineham Cour, Basingstoke Hampshire RG24 8AG, GB)
Claims:
Claims
1. A method (600, 800) for turbo decoding one or more data blocks in a communication unit, the method comprising the steps of: receiving (602, 802) one or more data blocks in a plurality of time slots at said communication unit; computing (625), by at least one Backward Processor in said communication unit, backward path metrics for said plurality of time slots; computing (645, 835), by a Forward Processor, forward path metrics for said plurality of time slots; and determining, by a data block determination function, decoded data for said plurality of time slots based on said computed forward path metrics and said computed backward path metrics; the method characterised by the steps of: storing only said backward path metrics in a storage element (516); retrieving said stored backward path metrics, by said data block determination function; and outputting (648, 838) decoded data blocks based on said computed forward path metrics and said stored backward path metrics substantially as said forward path metrics are computed.
2. The method (600, 800) for turbo decoding one or more data blocks according to Claim 1, the method further characterised by the step of: deleting (658, 848), by said Backward Processor, a previously stored backward path metric, prior to the steps of: computing a new backward path metric; and storing said new backward path metric in said storage element (516), thereby reducing the memory required to store said backward path metrics in said storage element (516).
3. The method (600, 800) for turbo decoding one or more data blocks according to Claim 1, wherein said step of computing backward path metrics is performed by a first Backward Processor operating in a first mode of operation and a second Backward Processor operating in a second mode of operation.
4. The method (600, 800) for turbo decoding one or more data blocks according to Claim 3, wherein said step of computing backward path metrics in a first mode of operation by said first Backward Processor comprises calculating backward path metrics to initialise said second Backward Processor in a subsequent time slot.
5. The method (600,800) for turbo decoding one or more data blocks according to Claim 3, wherein said step of computing backward path metrics in a second mode of operation, by said second Backward Processor, includes said steps of computing backward path metrics and storing said computed backward path metrics.
6. The method (600) for turbo decoding one or more data blocks according to Claim 4, further characterised by the steps of: calculating said backward path metrics used for initialisation in a first sequence of time slots; and computing and storing said backward path metrics in a second sequence of time slots.
7. The method (600) for turbo decoding one or more data blocks according to Claim 6, further characterised in that the first and second sequence of time slots are odd and even time slots, respectively.
8. The method (600) for turbo decoding one or more data blocks according to Claim 6, the method further characterised by the step of: alternating, by said first Backward Processor and second Backward Processor, between said step of calculating backward path metrics used for initialisation and said steps of computing and storing backward path metrics.
9. The method (800) for turbo decoding one or more data blocks according to Claim 3, further characterised by the steps of: configuring said first Backward Processor to continuously calculate said backward path metrics used for initialisation in said first mode of operation ; and configuring said second Backward Processor to continuously compute and store said backward path metrics in said second mode of operation.
10. The method (800) for turbo decoding one or more data blocks according to Claim 6 or Claim 9, wherein said step of calculating backward path metrics used for initialisation includes: providing, by said first Backward Processor, correct initialisation to enable said second Backward Processor to compute backward path metrics for a whole data slot (Sm), for example in a sliding window manner.
11. The method (600,800) for turbo decoding one or more data blocks according to Claim 1, wherein said steps are performed in a base transceiver station, such as a NodeB, or a subscriber unit, such as a user equipment, for example configured for operation on a UMTS or a 3GPP wireless communication system.
12. A data communications system having one or more elements adapted to support the method of Claim 1.
13. The data communication system according to Claim 12, wherein the data communication system is a 3GPP or CDMA2000 or UMTS wireless communication system.
14. A communication unit (500) adapted to support the method of Claim 1.
15. The communications unit (500) according to Claim 14, wherein the communications unit (500) is a NodeB or user equipment configured for operation on a UMTS or a 3GPP or a CDMA 2000 wireless communication system.
16. A storage medium storing processor-implementable instructions for controlling a processor to carry out the method of Claim 1.
17. A communication unit (500) for turbo decoding one or more data blocks, the communication unit (500) comprising: a receiver receiving one or more data blocks in a plurality of time slots; a storage element (516), operably coupled to said receiver; a turbo decoder (509), operably coupled to said receiver and said storage element (516), said turbo decoder (509) comprising: at least one Backward Processor computing backward path metrics for said one or more data blocks; a Forward Processor computing forward path metrics for said one or more data blocks; and a data block determination function determining and outputting decoded data based on said forward path metrics and said backward path metrics; wherein the communication unit (500) is characterised in that: only said at least one Backward Processor stores metrics in said storage element (516), such that said data block determination function decodes and outputs data based on said computed forward path metrics and said stored backward path metrics substantially as said forward path metrics are computed.
18. The communication unit (500) according to Claim 17, the communication unit (500) further characterised by said data block determination function determining decoded data based on said forward path metrics and said backward path metrics stored in a most recent time slot.
19. The communication unit (500) according to Claim 17, the communication unit (500) further characterised by said Backward Processor deleting a previously stored backward path metric from said storage element (516), prior to computing a new backward path metric, and storing said new backward path metric in said storage element (516) thereby minimising an amount of memory required to store said backward path metrics in said storage element (516).
20. The communication unit (500) according to Claim 17, further characterised by said turbo decoder (509) comprising a first Backward Processor operating in a first mode of operation and a second Backward Processor, operably coupled to said first Backward Processor, operating in a second mode of operation.
21. The communication unit (500) according to Claim 20, further characterised by said first mode of operation being a warm-up mode of operation and said second mode of operation being an active mode of operation, whereby said first Backward Processor provides initialisation of said second Backward Processor to enable said second Backward Processor to compute and store backward path metrics in said storage element (516) in a subsequent time slot.
22. The communication unit (500) according to Claim 20, further characterised in that said first mode of operation is performed in a first sequence of time slots, and said second mode of operation is performed in a second sequence of time slots.
23. The communication unit (500) according to Claim 22, further characterised by said first and second sequence of time slots being odd and even time slots, respectively.
24. The communication unit (500) according to Claim 20, wherein said first Backward Processor and said second Backward Processor alternate between said first mode of operation and said second mode of operation.
25. The communication unit (500) according to Claim 20, wherein said first Backward Processor is configured to continuously prepare said metrics to initialise said second Backward Processor in a first warm-up mode of operation and said second Backward Processor is configured to continuously compute and store said backward path metrics in said second mode of operation, such that only said second Backward Processor stores backward path metrics.
26. The communication unit (500) according to Claim 17, wherein said data block determination function is an a-posteriori probability Calculator.
27. The communication unit (500) according to Claim 17, wherein said communication unit (500) is a base transceiver station, such as a NodeB, or a subscriber unit, such as a user equipment, for example configured for operation on a UMTS or a 3GPP or a CDMA 2000 communication system.
28. An integrated circuit performing the decoding method steps of Claim 1.
29. An integrated circuit used in the communication unit of Claim 17.
Description:
COMMUNICATION UNIT AND METHOD OF DECODING

Field of the Invention

This invention relates to communication units employing coding techniques. The invention is applicable to, but not limited to, a third generation communication unit utilising a turbo coding technique.

Background of the Invention

Third generation (3G) wireless communication systems, for example the third generation partnership project (3GPP), have as a requirement the support of speech and data. In encoding and decoding data signals, a 3G subscriber unit, termed user equipment (UE) in universal mobile telecommunication system (UMTS) parlance, needs to store "soft" samples of data. In this context, the expression "soft samples" may be understood to encompass samples that are stored and contain information about the perturbations met by the data symbols during the signal transmission, such as thermal noise, fading, etc. For accurate representation of the received signal, the soft samples need to be quantised with a large number of bits per sample. Thus, the UE receiver's memory requirements are typically substantial. Of course, the same consideration applies to any element in the communication system that decodes data, for example a base station (or Node B in UMTS parlance).

Within 3G systems, different types of data services will be available with different Quality of Service (QoS) requirements, particularly in terms of error rate and delay. Typically, data services can be divided into two groups, real-time and non-real-time services. The real-time services typically require a low average data block delay, arranged such that they can cope with a certain error rate. In contrast, non-real-time services typically tolerate some delay, but often need a very low error rate.

Real-time services have been traditionally used only with forward error correction (FEC) coding mechanisms that guarantee a bounded delay, but deliver data with an error rate dependent upon channel conditions (which in wireless communication channels is highly variable). However, although an FEC mechanism guarantees a low delay and low delay variation, it cannot guarantee a low error rate. When the error rate is high, even with a low delay, the service is not delivered.

Conversely, non-real-time services usually tolerate variable and high delays but cannot cope with a high error rate. However, when delays become too high, the risk of the communication link being broken is increased.

In the field of this invention, coding techniques are commonly applied in communication systems to overcome various impairments induced by the channel. In particular, convolutional codes are well known and are widely used. Recently, an improved coding technique has appeared, referred to as 'turbo-coding'. This coding technique provides very good performance for acceptable complexity and delay and therefore is superior to classical coding schemes.

FIG. 1 illustrates a generic block diagram of a "turbo encoder" 100. The turbo encoder 100 comprises two constituent encoders 120, 125 and an interleaver 115.

The constituent encoders 120, 125 are commonly convolutional encoders. The input to the first encoder is a block of 'K' information bits 105. In principle, more than two constituent encoders may be used, with each additional encoder preceded by an interleaver.

The turbo encoder outputs three types of bits: (i) The block of information bits, which are transmitted as "non-coded" bits. These bits are commonly referred to as "systematic bits" 110.

(ii) The block of bits 130 generated by the first constituent encoder 120.

(iii) The block of bits 135 generated by the second constituent encoder 125.

The 2*K bits (from 'B' and 'C') generated by the two constituent encoders 120, 125 are commonly referred to as "parity bits" 130, 135.

The fundamental feature of the turbo encoder is that the second constituent encoder 125 operates on the bits presented at the input to the turbo encoder after the interleaver 115 has processed the data block. Thus, the information bits 105 are encoded twice: once in the original order and once after permutation by the interleaver 115. The design of the interleaver 115 is of paramount significance, as the features of the interleaver 115 have a large effect on the performance of the turbo-coding scheme.

A parallel-to-serial device may multiplex the systematic bits 110 and the parity bits 130, 135 at the output of the turbo encoder before they are transmitted over the channel. The coding rate of the generic scheme in FIG. 1 is, for example, 1/3, whereby three code bits are transmitted for every information bit. To increase the coding rate, a known technique of puncturing can be applied. By transmitting only one parity bit for every information bit, the coding rate can be increased to 1/2.

The puncturing may be carried out by the optional switch 140 in FIG. 1, which alternates between the two streams of parity bits 130, 135. In this manner, only half of the 2*K parity bits are transmitted.
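The rate-1/3 structure and its punctured rate-1/2 variant described above can be sketched as follows. This is a toy illustration only: the constituent encoder below is a hypothetical memory-1 recursive encoder, not the 3GPP polynomials, and the interleaver is a random permutation chosen purely for demonstration.

```python
import random

def rsc_parity(bits):
    """Toy recursive convolutional encoder (memory-1, hypothetical polynomial,
    not the 3GPP constituent code): one parity bit per input bit."""
    state = 0
    out = []
    for b in bits:
        state ^= b            # feedback into the single memory element
        out.append(state)     # parity output
    return out

def turbo_encode(info, perm):
    """Rate-1/3 output: systematic bits plus two parity streams, the second
    computed on the interleaved block (cf. 105, 110, 130, 135 in FIG. 1)."""
    p1 = rsc_parity(info)
    p2 = rsc_parity([info[i] for i in perm])
    return info, p1, p2

def puncture(p1, p2):
    """Optional switch 140: keep alternate bits from each parity stream,
    raising the coding rate from 1/3 to 1/2."""
    return [p1[k] if k % 2 == 0 else p2[k] for k in range(len(p1))]

K = 8
info = [random.randint(0, 1) for _ in range(K)]
perm = random.sample(range(K), K)          # block interleaver permutation
sys_bits, p1, p2 = turbo_encode(info, perm)
punctured = puncture(p1, p2)
print(len(sys_bits) + len(p1) + len(p2))   # 3*K code bits at rate 1/3
print(len(sys_bits) + len(punctured))      # 2*K code bits at rate 1/2
```

With puncturing enabled, 2*K code bits are transmitted instead of 3*K, matching the rate increase from 1/3 to 1/2 described in the text.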

The use of turbo codes requires a decoder with higher complexity than that of a conventional decoder used for convolutional codes. One of the factors that affect the complexity of decoding turbo codes is the memory required for storing metrics during the decoding process. The generic block diagram of a "turbo decoder" 200 is illustrated in FIG. 2. Systematic bits 110, together with parity bits 130, are received from the first encoder over a noisy channel, and are input to a first a-posteriori probability (APP) module 240.

The extrinsic information 215 that is output from the first APP module (APP module-1) 240 is input to an interleaver 245. Systematic bits 110 are also input directly to a second interleaver 220. The outputs from the first interleaver 245 and second interleaver 220, together with parity bits 135 from the second encoder, are input to a second APP module (APP module-2) 225.

Upon each iteration of the process, the extrinsic information 230, output from APP module-2 225, is input into a de-interleaver 235. The output of the de-interleaver 235 is input to APP module-1 240, as shown.

However, for the first iteration, zeros are input into APP module-1 240 instead of the de-interleaver output.

In a straightforward implementation of the known iterative forward-backward decoding algorithm, as performed by the APP Module-1 240 of the generic turbo decoder 200 of FIG. 2, the following procedures are carried out in each iteration: (i) Computation of the branch metrics for the entire received sequence of channel symbols.

(ii) Computation of forward path metrics, by processing the received sequence from start to end, i.e. (y1, y2, ..., yk, ..., yK).

(iii) Computation of backward path metrics, by processing the received sequence from end to start, i.e. (yK, yK-1, yK-2, ..., yk, ..., y1), once the data block has been received.

(iv) Computation of the a-posteriori probabilities (APPs) using the branch metrics, the forward path metrics and the backward path metrics.

For example, let us assume that the turbo decoder 200 operates on a vector of received data Y that is a noisy version of the signal generated by the turbo encoder.

Furthermore, let us assume that the vector Y is of length K and is defined as: Y = [y1, y2, ..., yk, ..., yK] [1]

An entry yk, where 1 ≤ k ≤ K, of the above vector is itself a vector comprising the systematic bit and the parity bits related to the k-th information bit. Thus, if a turbo encoder of coding rate 1/3 is used, then: yk = [yk,sys, yk,p1, yk,p2] [2] where yk,sys is the systematic bit 110 and yk,p1, yk,p2 are the two parity bit streams 130, 135 generated by the two respective constituent encoders 120, 125.

It is convenient to express the a-posteriori probability as a logarithm of a ratio:

L(k) = ln [ P(bk = 1 | Y) / P(bk = 0 | Y) ] [3]

where bk denotes the k-th information bit. The sign of the above expression indicates the most probable value of the bit. The magnitude of the expression is a measure of the reliability of the decision based on the sign: the larger the magnitude, the higher the reliability. If the magnitude is zero, then the two values of the bit are equally probable, i.e. there is no information on the value of this bit.
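The sign/magnitude decision rule described above can be illustrated with a short sketch. The probabilities used below are made-up numbers for illustration, not the output of any decoder.

```python
import math

def llr(p1):
    """Log-ratio form of the a-posteriori probability for one bit, where p1 is
    P(bit = 1 | Y). The sign gives the hard decision; the magnitude, the
    reliability of that decision."""
    return math.log(p1 / (1.0 - p1))

def decide(l):
    """Hard decision from the sign of the log-ratio."""
    return 1 if l > 0 else 0

# Made-up bit probabilities for illustration only:
for p in (0.9, 0.55, 0.5, 0.1):
    l = llr(p)
    print(p, decide(l), round(abs(l), 3))
```

Note that at p = 0.5 the magnitude is exactly zero: the two bit values are equally probable and the decision carries no information, as the text states.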

Turbo codes are commonly used with long blocks of data since, for a given constituent code and signal-to-noise ratio, the achieved performance improves with increasing size of the data block. The computation of the a-posteriori probabilities requires the availability of the branch metrics and forward and backward path metrics.

Therefore, if the four procedures outlined above are carried out sequentially, the decoder needs several buffers. Each buffer needs to be the length of the received block of data, for storing intermediate results.

It is noteworthy that, in order to achieve good results, such data blocks are of a considerable word-length.

Thus, the straightforward implementation of the iterative procedure requires a large amount of storage memory.

Notably, in a turbo decoding operation, there are two types of input to each APP Module: (i) The signal received from the channel. This comprises the systematic bits 110 and the parity bits 130 pertinent to APP Module-1 240.

(ii) A-priori information about the information bits. APP Module-2 225, as described below, provides the a-priori information.

The APP Module-1 240 calculates the a-posteriori probabilities for all information bits in the data block.

The most important ingredient of this information is the extrinsic information 215. The extrinsic information 215 for a particular information bit is the novel information derived by APP Module-1 240. The extrinsic information 215 generated by APP Module-1 240 is provided to APP Module-2 225 as a-priori information.

The salient feature of the turbo decoding algorithm is the application of iterative decoding. Once APP Module-2 225 has completed its operation, the extrinsic information 230 generated by APP Module-2 225 is fed back to APP Module-1 240, which processes the data again in order to generate new extrinsic information that can be provided to APP Module-2 225 for further processing. A single round of processing of the data by both APP Modules 225, 240 is referred to as one iteration. The performance of the coding scheme usually improves as the number of iterations increases. In a practical scheme, the total number of iterations is generally constrained by considerations of complexity and tolerable delay in the decoder design.

In the encoder 100 of FIG. 1, the information has been interleaved 115 before being encoded by the second constituent encoder 125. Consequently, in FIG. 2, the extrinsic information 215 at the output of APP Module-1 240 has to be interleaved 245 before being fed to APP Module-2 225. For the same reason, the received systematic bits 110 have to be interleaved 220.

Evidently, the extrinsic information 230 at the output of APP Module-2 225 has to be de-interleaved 235 before it is provided to APP Module-1 240 as a-priori information.

After completing several iterations, the final decision 255 of the turbo decoder is derived by taking the de-interleaved (in de-interleaver 250) a-posteriori probabilities output from APP Module-2 225. As is evident from equation [3], all that is needed for the final decision on a particular bit is to determine the sign of the computation made by APP Module-2 225.

Note that FIG. 2 is a generic block diagram and alternative implementations of a turbo decoder may be used, depending upon the prevalent engineering considerations. For example, one decoder implementation may have only one APP Module that operates intermittently as APP Module-1 and APP Module-2. Likewise, the same modules may be used for carrying out all iterations.

Alternatively, it is possible to design a turbo decoder by concatenating several decoding blocks in a pipeline configuration, so that each block performs one iteration.

The straightforward forward-backward algorithm for calculating the a-posteriori probabilities in the APP Module is illustrated in the flowchart 300 of FIG. 3.

The main feature of the straightforward approach, from the point of view of memory requirements, is that the branch and path metrics from the computation of the a-posteriori probabilities for all 'K' bits in the data block need to be stored in the three vectors G, A, B. These vectors are of length 'K' and they store the branch metrics, the forward path metrics and the backward path metrics, respectively.

In the straightforward approach, the branch metrics of the received vector Y are computed in step 310 (resulting in Vector G). Both APP Modules, in each iteration, then use the data in Vector G.

The calculation of the forward path metrics (Vector A), in step 315, and the calculation of the backward path metrics (Vector B), in step 320, use the data stored in Vector G. The a-posteriori probabilities (Vector P) are then computed using the data stored in Vectors A, B and G, as shown in step 325.

Thus, it can be clearly seen that the decoder needs a large amount of memory in order to store intermediate results of the forward path metrics and of the backward path metrics. This is a significant problem in the implementation of practical turbo-decoders.
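The buffer requirement of the straightforward approach can be sketched schematically. The sizes below are hypothetical, and the recursions are simplified placeholders that only mimic the data flow of FIG. 3, not a full MAP/BCJR computation; the point is that three block-length vectors G, A and B must be resident at once.

```python
# Schematic of the straightforward approach of FIG. 3 (toy sizes; the
# recursions below are simplified placeholders, not a real MAP computation).
K, N_STATES = 1000, 8                      # hypothetical block length, trellis size

G = [[0.1] * N_STATES for _ in range(K)]   # step 310: branch metrics
A = [[0.0] * N_STATES for _ in range(K)]   # step 315: forward path metrics
for k in range(1, K):
    A[k] = [a + g for a, g in zip(A[k - 1], G[k])]
B = [[0.0] * N_STATES for _ in range(K)]   # step 320: backward path metrics
for k in range(K - 2, -1, -1):
    B[k] = [b + g for b, g in zip(B[k + 1], G[k + 1])]
P = [A[k][0] + B[k][0] + G[k][0] for k in range(K)]  # step 325: APPs (schematic)

metric_words = 3 * K * N_STATES            # G, A and B all span the whole block
print(metric_words)
```

Even at these toy sizes the decoder holds 3*K*N_STATES metric words of intermediate results, which is the storage problem the sliding-window technique discussed next tries to avoid.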

US Patent 5,933,462, entitled "Soft decision output decoder for decoding convolutionally encoded codewords", with inventors A. J. Viterbi and N. T. Sindhushayana, proposes an alternative 'sliding window' technique for implementing a forward-backward algorithm. For completeness, it is noteworthy that such a forward-backward algorithm is of no practical interest for decoding convolutional codes, as it provides minimal advantage over the standard Viterbi algorithm for convolutional decoding, whilst being significantly more complex. Furthermore, the inventors of the present invention have recognised significant shortfalls in any possible application of US Patent 5,933,462 to turbo-codes, as detailed later.

The purpose of the "sliding window" technique is to perform the calculation of the a-posteriori probabilities, whilst avoiding the necessity for allocating large memory buffers for storing intermediate results in the communication unit's receiver. However, a consequence of using a "sliding window" approach is a slight deterioration in performance.

The timing diagram 400 of FIG. 4 illustrates the "sliding window" approach as described in US Patent 5,933,462, whereby the decoder uses one "Forward Processor" and two "Backward Processors". Notably, both Backward Processors access the storage memory.

For simplicity, let us assume that the number of time slots 'M' in the received data stream is, say, an even number. The notation Sm indicates the data in the m-th time slot. The block of data {yk} of length 'K' is partitioned into data slots of length 'L'. A preferred assumption is that there is an integer number of data slots in a block, so that: M = K/L.

However, when there is not an integer number of data slots in a data block, the last data slot would be configured to be shorter or longer. In the 3GPP system, an opportunity for the last data slot to be a shorter one exists, since there is a tail at the end (i.e. the output state is known).

The integer 'M' is assumed to be even, but all results can be readily adapted to an odd 'M'. The length of the data slot should be large enough to facilitate the proper operation of the Backward Processor(s). Thus, the length of a data slot needs to be several times the size of the constraint length of the constituent encoder.

Data slot Sm is defined as:

Sm = {yj}, mL ≤ j ≤ (m+1)L - 1, 0 ≤ m ≤ M-1 [4]

A block of data is given by:

{S0, S1, S2, ..., SM-1} [5]

so that a data block is a concatenation of M data slots.
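The partitioning of equations [4] and [5] can be sketched directly. The block contents and sizes below are toy values chosen for illustration.

```python
def partition(block, L):
    """Partition a received block of length K into M = K/L data slots Sm, as in
    equations [4] and [5]. Assumes an integer number of slots per block."""
    K = len(block)
    assert K % L == 0, "a non-integer number of slots needs special handling (see text)"
    return [block[m * L:(m + 1) * L] for m in range(K // L)]

block = list(range(24))       # toy received block, K = 24
slots = partition(block, 4)   # M = K/L = 6 data slots of length L = 4
print(len(slots), slots[0])
```

Concatenating the M slots in order recovers the original block, matching equation [5].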

The operation of the proposed forward-backward algorithm of US Patent 5,933,462 is next analysed for applicability in decoding turbo codes. Several significant and critical features of this algorithm, which are pertinent to its applicability to decoding of turbo codes, are not addressed in US Patent 5,933,462, as emphasised in the following discussion.

Let us assume, as is typically the case, that a turbo coding scheme with a block interleaver is used. The operation of the decoder starts after the entire block of data has been received. The clock rate for all processors should be much higher than the clock rate of the transmitted data. This is because the iterative decoding procedure for turbo codes should be capable of completing several iterations for each transmitted block of data in order to achieve a good performance.

The first timing diagram 405 presents the operation of the Forward Processor. Since the trellis encoder in the transmitter is initialised at a known trellis state, the Forward Processor in the receiver can be properly initialised. In this manner, the receiver is able to operate on the received block of data in a time slot-by-time slot fashion, i.e. processing data from slot S0 to slot SM-1.

In time slot 'm' the Forward Processor carries out the following operations:

(i) If m > 1, discard the forward path metrics computed for data slot Sm-2.

(ii) Compute the forward path metrics for data slot Sm.

(iii) Store the forward path metrics in the memory that was cleared in (i).
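The three per-slot operations of the Forward Processor can be sketched as follows. The metric computation here is a placeholder; the point is the bookkeeping, namely that at most two slots of forward metrics are ever resident in memory.

```python
def forward_processor(slots):
    """Sketch of the Forward Processor rule (timing diagram 405): in time slot
    m, discard the metrics of data slot Sm-2, compute metrics for Sm, and store
    them in the freed space. The metric computation is a dummy placeholder."""
    resident = {}                            # slot index -> stored forward metrics
    for m, slot in enumerate(slots):
        if m > 1:
            del resident[m - 2]              # step (i): discard Sm-2 metrics
        resident[m] = [x + 1 for x in slot]  # steps (ii)-(iii): compute and store
        assert len(resident) <= 2            # at most two slots ever resident
    return resident

slots = [[0] * 4 for _ in range(6)]          # toy block: M = 6 slots of length 4
resident = forward_processor(slots)
print(sorted(resident))                      # only the last two slots remain
```

This bounded two-slot buffer is what allows the APP Calculator, running one slot behind, to read the forward metrics of the previous slot while the current one is being computed.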

Notably, the Backward Processors in timing diagrams 410, 415 do not start their operation at the end of the data block. Thus, even if the turbo code has been properly terminated in the transmitter at a known trellis state, the Backward Processors 410,415 cannot be properly initialised. Therefore, the two Backward Processors always operate on pairs of adjacent time slots in the following manner.

The first Backward Processor is initialised at the end of the second time slot (time slot m=1) of the pair with equal weights to all trellis states. The first Backward Processor then proceeds backwards from the end of the second time slot computing backward path metrics that will not be used for decoding. This operation may be termed a "warm-up" mode of operation, as its sole purpose is to provide correct initialisation for computing path metrics in the first time slot of the pair.

The second Backward Processor then computes backward path metrics for the first time slot of the pair. This operation may be termed "active" mode.

The second time diagram 410 in FIG. 4 presents the operation of the first Backward Processor 410. The first Backward Processor starts operating at the same time as the Forward Processor 405 and operates according to the following rule in time slot m (where 0 ≤ m ≤ M-1):

(i) If m is an even number, then the first Backward Processor 410 processes the data in a "warm-up" mode up to data slot Sm+1 (i.e. where 0 ≤ m ≤ M-2).

(ii) If m is an odd number, then the first Backward Processor 410 processes the data in an "active" mode up to data slot Sm-1 (i.e. where 1 ≤ m ≤ M-1).

The backward path metrics calculated by the first Backward Processor 410 are never stored in memory. These metrics are used as follows:

(i) The metrics calculated in time slot m, when m is even, are calculated in a "warm-up" mode.

(ii) The metrics calculated in time slot m, when m is odd, are calculated in "active" mode and are used immediately upon calculation by the APP Calculator, as described below.

The operation of the second Backward Processor 415 is carried out as presented in the third time diagram in FIG. 4. The operation of the second Backward Processor 415 starts at a delay of one time slot relative to the operation of the Forward Processor 405 and therefore ends one time slot later. The second Backward Processor 415 operates according to the following rule in time slot m (where 1 ≤ m ≤ M):

(i) If m is an odd number, then the second Backward Processor 415 processes the data in a "warm-up" mode up to data slot Sm+1 (where 1 ≤ m ≤ M-3).

(ii) If m is an even number, then the second Backward Processor 415 processes the data in an "active" mode up to data slot Sm-1 (where 2 ≤ m ≤ M).

It is noteworthy that, in the operation of the second Backward Processor 415 at the end of the data block, the odd time slot (M-1) fails to provide data for the second Backward Processor 415 to operate on in the "warm-up" mode. Therefore, the second Backward Processor 415 is just kept idle. In the even time slot M, the second Backward Processor 415 operates on the data in the last time slot, SM-1. There is no "warm-up" result for this time slot. Therefore, if the data block is properly terminated, the second Backward Processor 415 uses the known initialisation states. Otherwise, the second Backward Processor 415 just assigns equal weights to all states. In the latter case, the performance for the last data slot is poorer than for other data slots.

Nevertheless, the backward path metrics calculated by the second Backward Processor 415 are also never stored in memory. These metrics are used as follows: (i) The metrics calculated in time slot m, when m is odd, are calculated in "warm-up" mode and can be discarded as the first Backward Processor 410 moves to the next symbol.

(ii) The metrics calculated in time slot m, when m is even, are calculated in "active" mode and are used immediately upon calculation by the APP Calculator, as described below.

The operation of the APP Calculator is illustrated in the fourth timing diagram 420 in FIG. 4. The APP Calculator computes the a-posteriori probabilities. Notably, its operation is delayed by one time slot relative to the operation of the Forward Processor. The APP Calculator uses the output of the Forward Processor 405 and either of the two Backward Processors 410 and 415 as follows: (i) The forward path metrics that are computed by the Forward Processor are stored in memory. These metrics are used in all time slots.

(ii) The output of the first Backward Processor 410 is used for data slots Sm when m is even. These metrics are used in odd time slots.

(iii) The output of the second Backward Processor 415 is used for data slots Sm when m is odd. These metrics are used in even time slots.
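The parity rule in (i)-(iii) can be restated as a small selection function (Python; an illustrative sketch only — the function name and return convention are not from the patent):

```python
def backward_source_for(m):
    """For time slot m (the APP Calculator lags the Forward Processor
    by one slot, so it works on data slot S(m-1)), return which Backward
    Processor supplies the backward path metrics.
    Even-indexed data slots come from the first processor (consumed in
    odd time slots); odd-indexed data slots come from the second
    processor (consumed in even time slots)."""
    data_slot = m - 1
    processor = "first" if data_slot % 2 == 0 else "second"
    return processor, data_slot
```

For example, in time slot 1 the APP Calculator works on data slot S0 using the first Backward Processor's output, and in time slot 2 on S1 using the second's.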

The APP Calculator reads the forward path metrics computed by the Forward Processor 405 from memory.

However, the backward path metrics are used on a symbol-by-symbol basis, immediately after the first and second Backward Processors 410, 415 compute them. As explained above, backward path metrics are provided by the first Backward Processor 410 for the even data slots and by the second Backward Processor 415 for the odd data slots.

Notably, the following observation can be made: (i) The APP Calculator operates on the data slots in the correct sequential order, S0, S1, S2, S3 ... SM-1.

(ii) The operation on data slots is delayed by one time slot relative to the operation of the Forward Processor 405. However, whilst processing any given time slot of data, the APP Calculator operates backwards on the symbols in this time slot. This is because the APP Calculator uses the output of Backward Processors 410, 415 on a symbol-by-symbol basis, and these processors operate backwards.

The above observation has been identified by the inventors of the present invention as particularly problematic in the following aspects: (i) The data at the output 425 of the APP Calculator cannot be stored on a symbol-by-symbol basis sequentially, as the data is being output in a reversed order. Thus, a mechanism to reverse the addresses is needed, and therefore the proper address has to be calculated for the a-posteriori probability of each symbol. This is a laborious task when processing power and time are critical factors. Alternatively, one might store the data in a temporary memory and then send the memory content out in the correct order. However, this will require additional memory and cause further delay.

(ii) When the data at the output 425 of the APP Calculator has to be sent out serially, for example at the end of the last iteration, there is a delay of one time slot. This implies that the data slot Sm can be sent out at time slot m+2, i.e. with a delay of two time slots. This is demonstrated in the fifth timing diagram 425 of FIG. 4. A delay of one time slot is induced by the operation of the second Backward Processor 415. The additional delay, however, is induced by the reverse operation of the APP Calculator.

EP-1261139-A2 discloses a known mechanism and associated architecture to perform standard MAP decoding. The known mechanism calculates beta metrics in a reverse order, stores them in a beta state metrics random access memory (RAM) and then calculates alpha state metrics. Notably, the technique disclosed in EP-1261139-A2 combines alpha state metrics and beta state metrics in an extrinsic block.

The technique disclosed in EP-1261139-A2 subsequently computes beta state metrics for a next sliding window.

EP-1261139-A2 uses a prolog section for the alpha calculation, which is basically a warm-up mode for the Forward Processor. Additionally, in EP-1261139-A2 the addressing scheme for the alpha metrics block is extremely complicated, i.e. it uses interleaved forward-reverse addressing. Thus, EP-1261139-A2 describes a known technique that suffers from many of the complexity and inefficiency problems of other known mechanisms.

Thus, a need exists for an improved turbo decoding mechanism, which alleviates the aforementioned problems of excessive storage requirements.

Statement of Invention

In accordance with a first aspect of the present invention, there is provided a method for turbo decoding one or more data blocks in a data communication system, as claimed in Claim 1.

In accordance with a second aspect of the present invention, there is provided a data communication system, as claimed in Claim 12.

In accordance with a third aspect of the present invention, there is provided a communication unit, as claimed in Claim 14.

In accordance with a fourth aspect of the present invention, there is provided a storage medium storing processor-implementable instructions, as claimed in Claim 16.

In accordance with a fifth aspect of the present invention, there is provided a communication unit as claimed in Claim 17.

In accordance with a sixth aspect of the present invention, there is provided an integrated circuit as claimed in Claim 28.

Further aspects of the present invention are as claimed in the dependent Claims.

In summary, the inventive concepts hereinafter described propose an efficient implementation of a sliding window technique that can be used in a turbo decoder. In particular, the inventive concepts present a method for implementing a forward-backward decoding algorithm for decoding turbo codes that is more memory and processing efficient than the known prior art. In this regard, backward path metrics are stored in storage memory, thereby enabling the APP Calculator to immediately use computed forward path metrics.

Brief Description of the Drawings

FIG. 1 illustrates a block diagram of a known turbo encoder;

FIG. 2 illustrates a block diagram of a known turbo decoder;

FIG. 3 is a flow chart illustrating a computation operation of the a-posteriori probabilities using a known straightforward forward-backward approach in a turbo decoder; and

FIG. 4 shows a series of timing diagrams illustrating deficiencies in applying a sliding window approach to a turbo decoder according to US Patent 5,933,462.

Exemplary embodiments of the present invention will now be described, with reference to the accompanying drawings, in which:

FIG. 5 illustrates a data communication unit adapted in accordance with the preferred embodiment of the present invention;

FIG. 6 illustrates a flowchart of a communication unit's receiver function with two Backward Processors having access to a storage memory, in accordance with the preferred embodiment of the present invention;

FIG. 7 is a timing diagram illustrating the timing associated with the flowchart of FIG. 6;

FIG. 8 illustrates a flowchart of the communication unit's receiver function with a single Backward Processor having access to a storage memory, in accordance with an enhanced embodiment of the present invention; and

FIG. 9 is a timing diagram illustrating the timing associated with the flowchart of FIG. 8.

Description of Preferred Embodiment

The preferred embodiment of the present invention will be described with reference to a subscriber unit capable of turbo decoding, for example, a subscriber unit operating on a universal mobile telecommunication standard (UMTS) wireless communication system. In particular, the invention is applicable to the Third Generation Partnership Project (3GPP) specification for the wide-band code-division multiple access (WCDMA) standard relating to the UTRAN radio interface (described in the 3G TS 25.xxx series of specifications). However, it is within the contemplation of the invention that the inventive concepts hereinafter described are equally applicable to other wireless or wired communication systems, for example, a CDMA-2000 system that supports turbo-decoding techniques.

A skilled artisan would further appreciate that the inventive concepts hereinafter described could be used within any element in the UMTS infrastructure that is capable of performing decoding operations, for example, a base transceiver station (termed a Node-B in UMTS parlance).

It is also within the contemplation of the invention that the decoding operations may be alternatively controlled, implemented in full or implemented in part by adapting any suitable part of the communication system. For example, equivalent elements such as intermediate fixed communication units in other types of systems may, in appropriate circumstances, be adapted to provide or facilitate the turbo decoding mechanism as described herein.

In the preferred and enhanced embodiments of the present invention, as described later, the turbo decoding operation includes computation of the a-posteriori probabilities (APPs) using an APP Calculator. The APP Calculator uses branch metrics, forward path metrics and backward path metrics generated by one or more processors. In the context of the present invention, the APP Calculator is a preferred example of a processor function that processes received data blocks in order to produce a data decoded output. In this regard, it may be viewed as an example of a data block determination function.

Turning now to FIG. 5, there is shown a block diagram of a wireless subscriber unit 500, generally termed a user equipment (UE) or user terminal (UT) in UMTS parlance, adapted to support the inventive concepts of the preferred embodiments of the present invention. As known in the art, and described here for completeness, the UE 500 contains an antenna 502 preferably coupled to a duplex filter or antenna switch 504 that provides isolation between receive and transmit chains within the UE 500. The receiver chain 510, as known in the art, includes receiver front-end circuitry 506 (effectively providing reception, filtering and intermediate or base-band frequency conversion). The front-end circuitry is serially coupled to a signal processing function 508. In the preferred embodiment of the present invention, a portion of the signal processing function 508 is configured as a turbo decoder 509 to provide turbo-decoding of received signals. In practice, the signal processing function 508, in the context of the present invention, will perform numerous other signal processing tasks (not shown), such as: symbol timing recovery, demodulation, de-multiplexing, de-interleaving, re-ordering, etc.

An output from the signal processing function is provided to a suitable output device 511, such as a screen or flat panel display. The receiver chain 510 also includes received signal strength indicator (RSSI) circuitry 512, which in turn is coupled to a controller 514 for maintaining overall UE control. The controller 514 is also coupled to the receiver front-end circuitry 506 and the signal processing function 508 (generally realised by a digital signal processor (DSP) or dedicated hardware such as an application specific integrated circuit (ASIC)). The controller is also coupled to a memory device 516 that stores operating regimes, such as decoding/encoding functions, synchronisation patterns, code sequences and the like. It is within the contemplation of the invention that the memory device, and/or a further memory element, may be included within the signal processing function 508 and/or turbo decoder 509.

For completeness, as regards the transmit chain 520, this essentially includes an input device 520, such as a keypad, coupled in series through transmitter/modulation circuitry 522 and a power amplifier 524 to the antenna 502. The transmitter/modulation circuitry 522 and the power amplifier 524 are operationally responsive to the controller. Furthermore, a timer 518 is operably coupled to the controller 514 to control the timing of operations (transmission or reception of time-dependent signals) within the UE 500. Of course, the various components within the UE 500 can be realised in discrete or integrated component form, with an ultimate structure based on design, size and cost considerations.

In accordance with the preferred embodiment of the invention, the signal processing function 508, which may be a baseband (back-end) signal processing receiver integrated circuit (IC) in some embodiments, and particularly the turbo decoder 509, has been adapted to incorporate the inventive concepts described below.

In particular, the turbo decoder 509 has been adapted to compute backward path metrics and store the backward path metrics in a more time, processing and memory efficient manner than known prior art turbo decoders. In this regard, the use of memory device 516 has been adapted such that it temporarily stores backward path metrics, as provided by a Backward Processor function.

Preferably, the turbo decoder 509 has been further adapted to compute forward path metrics and to calculate and output a-posteriori probabilities of the received data blocks based on the stored backward path metrics and the substantially immediately computed forward path metrics.

Advantageously, the Forward Processor is delayed to operate at a substantially concurrent time as the APP Calculator, thereby enabling the turbo decoder 509 to determine the a-posteriori probabilities of the received bits immediately, i.e. without a need to store forward path metrics. In this manner, a significant saving on the memory storage of metrics is achieved. Furthermore, the turbo decoder 509 is able to determine the a-posteriori probabilities of the received symbols in a sequentially ordered manner. The operation of the adapted turbo decoder 509 is described further with respect to the flowcharts of FIG. 6 and FIG. 8.

The various adapted components within the UE 500 can be realised in discrete or integrated component form. More generally, the functionality associated with turbo decoding of received data blocks may be implemented in a respective communication unit in any suitable manner.

For example, new signal processing functionality may be added to a conventional UE (or Node B), or alternatively existing processing functions of a conventional communication unit may be adapted, for example by reprogramming one or more processors therein. As such, the adaptation of, for example, the signal processing function 508 and/or turbo decoder 509 in the preferred embodiment of the present invention, may be implemented in the form of processor-implementable instructions stored on a storage medium, such as a floppy disk, hard disk, PROM, RAM or any combination of these or other storage media.

Notably, the timing of the APP Calculator operations is performed in a much-improved manner using a significantly reduced amount of memory. The preferred timing of operations is presented in the timing diagrams of FIG. 7 and FIG. 9.

It is convenient to start the description of the improved decoding procedure with the Backward Processors. As is evident from comparing the timing diagrams in FIG. 7 to those in FIG. 4, the Backward Processors operate with the same timing for computation of backward path metrics of respective data slots (S0 to SM-1). However, the inventors of the present invention have recognised the benefits to be gained from storing backward path metrics, rather than storing forward path metrics, in storage memory.

In accordance with the preferred embodiment of the present invention, the procedures related to the Backward Processors are as illustrated in the flowchart 600 of FIG. 6. In this embodiment, it is assumed that both Backward Processors have access to a memory, say memory device 516 in FIG. 5. In the preferred embodiment of the present invention, the two Backward Processors alternate between "active" mode and "warm-up" mode. Thus, the first Backward Processor is arranged to be in its "active" mode, whilst the second Backward Processor is arranged to be in a "warm-up" mode, for example, during odd time slots. In the alternate even time slots, the first Backward Processor is arranged to be in a "warm-up" mode, whilst the second Backward Processor is arranged to be in an "active" mode. Clearly, a skilled artisan would appreciate that the reverse arrangement of the Backward Processors in even/odd time slots could be applied.

Initially, a data block is received in step 602 and the time slot counter is set to 'm=0', as shown in step 605.

A determination is then made, in step 610, as to whether 'm=0' or 'm' is even and less than or equal to 'M-2'.

If it is determined, in step 610, that 'm' is '0' or even and less than or equal to 'M-2', then on data slot Sm+1, the first Backward Processor is set to function in a "warm-up" mode, in step 615. The process then moves on to step 620. If it is determined, in step 610, that 'm' is either odd or greater than 'M-2', then the process also moves to step 620.

If 'm' is even, greater than '0' and less than or equal to 'M', in step 620, then on data slot Sm-1, the second Backward Processor is set to function in an "active" mode, in step 625. The process then moves to step 630.

If it is determined, in step 620, that 'm' is either odd or greater than 'M', then the process also moves to step 630.

Thus, in accordance with the preferred embodiment of the present invention, in an even time slot m, where 2 < m < M or when 'm=0', the second Backward Processor computes the backward path metrics and stores the computed metrics in the storage memory.

The process then moves to step 630, where a further determination is made as to whether 'm' is odd and 'm' is less than or equal to 'M-1'. If it is determined, in step 630, that 'm' is odd and less than or equal to 'M-1', then the first Backward Processor is set to function in an "active" mode on data slot Sm-1. The process then moves to step 636. If it is determined, in step 630, that 'm' is even or greater than 'M-1', then the process also moves on to step 636.

In step 636, a determination is made as to whether 'm' is odd and less than or equal to 'M-3'. If it is determined in step 636 that 'm' is odd and less than or equal to 'M-3', then the second Backward Processor functions in a "warm-up" mode on data slot Sm+1, as shown in step 638.

The process then moves on to step 640. If it is determined in step 636 that 'm' is even or greater than 'M-3', then the process also moves on to step 640.

In step 640, a further determination is made as to whether 'm' is within the range '1&lt;m≤M+1'. If it is determined, in step 640, that 'm' is within the range '1&lt;m≤M+1', then the Forward Processor is activated on data slot Sm-2. If it is determined, in step 640, that 'm' is greater than 'M+1' or 'm' is less than '2', then the process moves on to step 650.

Advantageously, the Forward Processor operates on the data in sequential time slots and starts its operation at a delay of two time slots relative to the start of the operation of the first Backward Processor. In this manner, the Forward Processor is able to compute forward path metrics immediately after, and independently of, the first or second Backward Processor has computed backward path metrics.

Furthermore, in step 645, the a-posteriori probability (APP) Calculator is activated. Advantageously, the APP Calculator now immediately uses the computed forward path metrics on a symbol-by-symbol basis, without incurring any delay. Thus, there is no need to store forward path metrics. In this manner, the received data blocks are decoded and output, as shown in step 648. The process then moves on to step 650.

In step 650, a determination is made as to whether the current time slot being processed is the last time slot in the data block, i.e. is m&lt;M. If m&lt;M in step 650, the data being processed in the time slot is not the last data in the data block, and m is incremented so that the next time slot is selected for processing, in step 655.

When m>2, the previously stored backward metrics in data slot Sm-3 (in the respective odd or even time slot) are then preferably deleted in step 658, after they have been used in the calculation of APPs. In this manner, there is a minimum use of the storage memory. The process then moves to step 610, and repeats. If the data slot just processed by the Forward Processor was the last data slot, i.e. m=M+1 in step 650 and no more data remains in the data block, then the calculation of APPs for this data block ends in step 660.
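The per-slot decisions of flowchart 600 can be gathered into a short scheduling sketch (Python; hypothetical and simplified — the inclusive bounds, partly garbled in the source text, and the event-string format are assumptions; metric storage and deletion are noted but not modelled):

```python
def schedule_preferred(M):
    """Simulate the per-time-slot schedule of flowchart 600 / FIG. 7
    for time slots m = 0 .. M+1. BP1/BP2 are the two Backward
    Processors, FP the Forward Processor; both active Backward
    Processors write ("store") their metrics to memory, while the
    forward metrics are consumed immediately by the APP Calculator."""
    events = []
    for m in range(M + 2):
        ev = []
        if m == 0 or (m % 2 == 0 and m <= M - 2):
            ev.append(f"BP1 warm-up S{m+1}")        # steps 610/615
        if m % 2 == 0 and 0 < m <= M:
            ev.append(f"BP2 active S{m-1} (store)")  # steps 620/625
        if m % 2 == 1 and m <= M - 1:
            ev.append(f"BP1 active S{m-1} (store)")  # step 630
        if m % 2 == 1 and m <= M - 3:
            ev.append(f"BP2 warm-up S{m+1}")         # steps 636/638
        if 1 < m <= M + 1:
            ev.append(f"FP + APP S{m-2}")            # steps 640/645
        events.append(ev)
    return events
```

Running this for a small even M shows the Forward Processor and APP Calculator sweeping S0 .. SM-1 two slots behind the first Backward Processor's initial warm-up, as in FIG. 7.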

In particular, the respective timing of the operations of the preferred embodiment of the present invention is shown in the timing diagram 700 of FIG. 7. The timing of Forward Processor operations 705 is illustrated in the first timing diagram. As shown, the Forward Processor delays the processing of data by two time slots with respect to the first Backward Processor commencing in a 'warm-up' mode.

The second timing diagram illustrates the timing and mode of operation of the first Backward Processor 710. The third timing diagram shows the timing and mode of operation of the second Backward Processor 715. Notably, when the first Backward Processor is in an 'active' mode generating backward path metrics, the second Backward Processor is operating in a 'warm-up' mode. Similarly, when the second Backward Processor is in an 'active' mode, the first Backward Processor is operating in a 'warm-up' mode.

In particular, the respective Backward Processor that is operating in a 'warm-up' mode calculates backward path metrics so that it is able to provide proper initialisation for computing and storing backward path metrics by the alternate Backward Processor operating in an 'active' mode.

Of particular advantage is the timing of the APP Calculator 720, in that the APP Calculator 720 operates, in any given time slot, on the same data slot that the Forward Processor operates on. The APP Calculator uses the forward path metrics generated by the Forward Processor 705, on a symbol-by-symbol basis, as soon as the Forward Processor computes them.

As is evident from the above processes, the APP Calculator operates on the symbols within each data slot in the correct (forward) sequence. Thus, the APP Calculator provides the a-posteriori probabilities for all symbols in the data block in their correct sequential order. Advantageously, there is no need, in this embodiment, to store the output of the APP Calculator in a buffer, before the output of the APP Calculator is able to be delivered to the next stage of the decoding procedure.
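The ordering advantage over the FIG. 4 scheme can be illustrated with a toy function (Python; purely illustrative, not from the patent — names and the flat indexing convention are assumptions):

```python
def app_output_order(slot_index, symbols_per_slot, forward=True):
    """Global symbol indices in the order the APP Calculator emits
    them for one data slot. The described scheme emits them in natural
    (forward) order; the prior-art scheme of FIG. 4 emits them
    reversed, forcing address reversal or a reorder buffer."""
    base = slot_index * symbols_per_slot
    order = (range(symbols_per_slot) if forward
             else range(symbols_per_slot - 1, -1, -1))
    return [base + k for k in order]
```

With forward ordering the outputs can be written to (or streamed from) sequential addresses directly, which is the buffering saving described above.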

In the improved decoding procedure, as described above, the two Backward Processors are configured to alternate their operations in "active" mode. This implies that both Backward Processors also take turns in storing the computed backward path metrics in memory.

In accordance with an enhanced embodiment of the present invention, the inventors of the present invention have recognised that, for some hardware architectures, it might be advantageous to have only one Backward Processor accessing the memory device. A single Backward Processor can be provided access to the memory device, in the manner illustrated in the flowchart 800 of FIG. 8, and operating in accordance with the timing diagrams presented in FIG. 9. The Forward Processor and the APP Calculator operate in the same manner as the improved procedure described with reference to FIG. 6 and FIG. 7.

Referring now to FIG. 8, a flowchart of an enhanced embodiment of the present invention is shown. In this embodiment, it is assumed that only one, say the second, Backward Processor operates in an "active" mode, having access to the storage memory. The other (first) Backward Processor continuously operates in a "warm-up" mode.

Clearly, a skilled artisan would appreciate that the reverse arrangement of the Backward Processors could be applied, i.e. the first Backward Processor could operate continuously in an "active" mode, whilst the second Backward Processor operates continuously in a "warm-up" mode.

Initially, a data block is received in step 802 and the time slot counter is set to m=0, as shown in step 805. A determination is then made, in step 810, as to whether 'm' is less than 'M-1'. If it is determined, in step 810, that 'm' is less than 'M-1', then on data slot Sm+1, the first Backward Processor is set to function in a "warm-up" mode, in step 815. The process then moves to step 820, where a further determination is made as to whether 'm' is greater than '0' and less than or equal to 'M'. If it is determined, in step 810, that 'm' is greater than or equal to 'M-1', then the process also moves to step 820.

If it is determined, in step 820, that 'm' is greater than '0' and less than or equal to 'M', then on data slot Sm-1, the second Backward Processor is set to function in an "active" mode, as shown in step 825. Thus, in accordance with the enhanced embodiment of the present invention, only the second Backward Processor computes the backward path metrics and stores the computed metrics in the memory device.

The process then moves to step 830, where a further determination is made as to whether 'm' is in the range '1&lt;m≤M+1'. If it is determined, in step 820, that 'm' is greater than 'M', then the process also moves to step 830. If it is determined, in step 830, that 'm' is in the range '1&lt;m≤M+1', then on data slot Sm-2, the Forward Processor is activated.

Again, and advantageously, the Forward Processor operates on the data sequentially. The Forward Processor also starts its operation at a delay of one time slot relative to the start of operation of the second Backward Processor, as indicated in FIG. 9. In this manner, the Forward Processor is able to compute forward path metrics immediately after the second Backward Processor computes and stores backward path metrics. Thus, there is again no need to store forward path metrics.

Furthermore, in step 835, the APP Calculator is activated to compute the a-posteriori probabilities (APPs) for the received sequence. Again, and advantageously, the APP Calculator now immediately uses the computed forward path metrics on a symbol-by-symbol basis. Concurrently, the APP module reads the backward path metrics from storage memory. In this manner, data blocks are decoded and output, as shown in step 838. The process then moves on to step 840. If it is determined, in step 830, that 'm' is greater than 'M+1' or less than '2', then the process also moves on to step 840.

In step 840, a determination is made as to whether the current time slot being processed is the last time slot in the data block, i.e. is m&lt;M. If m&lt;M in step 840, the data just processed is not the last data from the data block, and m is incremented so that the data in the next time slot can be processed, in step 845. The previously stored backward metrics in data slot Sm-3 are then preferably deleted in step 848, after the metrics are used in the calculation of APPs. The process then moves to step 810, and repeats.

If the data just processed is the last data from the data block, i.e. m=M+1 in step 840, then the process is completed for that data block in step 850.

Again, in accordance with the enhanced embodiment of the present invention, at the beginning of each odd or even time slot, the backward path metrics for the previous respective odd or even time slot are deleted before starting the new backward path metric computations in the "active" mode.

Referring now to FIG. 9, a series of timing diagrams 900 illustrate the timing of the Forward Processor 905, Backward Processors 910, 915 and APP Calculator 920 of the enhanced embodiment of the present invention. Notably, the primary difference between the timing diagrams of the preferred embodiment of FIG. 7 and the enhanced embodiment of FIG. 9 is in the division of tasks between the two Backward Processors 910 and 915.

In the enhanced embodiment's timing diagram of FIG. 9, the first Backward Processor 910 operates continuously in the "warm-up" mode whilst the second Backward Processor 915 operates continuously in the "active" mode.

In each time slot, the first Backward Processor 910 therefore provides proper initialisation for the backward path metrics to be subsequently computed by the second Backward Processor 915 in a subsequent time slot. Since only the second Backward Processor 915 operates in an "active"mode, only the second Backward Processor 915 writes the backward path metrics to the storage memory.
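The enhanced single-writer schedule can be sketched analogously (Python; a hypothetical simplification — bounds reconstructed from the OCR'd inequalities and the event-string format are assumptions):

```python
def schedule_enhanced(M):
    """Per-time-slot schedule of flowchart 800 / FIG. 9 for
    m = 0 .. M+1: a single Backward Processor (BP2) computes and
    writes backward metrics to memory; BP1 only warms up, providing
    initialisation for BP2's next slot."""
    events = []
    for m in range(M + 2):
        ev = []
        if m < M - 1:
            ev.append(f"BP1 warm-up S{m+1}")         # steps 810/815
        if 0 < m <= M:
            ev.append(f"BP2 active S{m-1} (store)")   # steps 820/825
        if 1 < m <= M + 1:
            ev.append(f"FP + APP S{m-2}")             # steps 830/835
        events.append(ev)
    return events
```

Comparing this with the preferred-embodiment schedule shows the same Forward Processor / APP timing, with only the division of backward-metric work (and memory access) changed.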

Advantageously, both Backward Processors process data slots in a sequential manner (... Sm-1, Sm, Sm+1 etc...).

It is within the contemplation of the present invention that the designer of the turbo-decoding scheme will select the preferred or enhanced embodiment configurations dependent upon the actual hardware architecture used in the communication unit.

The preferred embodiment of the present invention is described with reference to a turbo decoder used in a subscriber device. However, it is envisaged that any device or processor function in any element in the communication system that is capable of performing decoding operations, particularly turbo decoding, could be adapted to incorporate the inventive concepts described herein.

Furthermore, it is envisaged that integrated circuit manufacturers are able to manufacture integrated circuits to perform the decoding operations, for example turbo-decoding operations, as hereinbefore described. Thus, although the preferred and enhanced embodiments of the present invention have both been described with reference to an application where a 3G wireless communication unit employs turbo decoding, it is within the contemplation of the invention that other applications and devices may benefit from the inventive concepts described herein. In a similar vein, it is also within the contemplation of the invention that the decoding operation may be performed by any communication unit, for example a subscriber unit or a base transceiver station (BTS), in any suitable communication system, for example CDMA-2000.

In accordance with the preferred embodiment of the present invention, the improved decoding operation uses one Forward Processor and two Backward Processors.

However, it is envisaged that other decoding configurations may benefit from the inventive concepts of the present invention, for example, using a single Backward Processor alternately as the first Backward Processor and the second Backward Processor.

This configuration would save hardware and/or save processing time when it is implemented in software, at the expense of a longer delay. It is therefore envisaged that such a configuration may be employed when delay is less important than savings on processor functionality.

Taking this philosophy one step further, it is envisaged that a single processor could perform the first and second Backward Processor functions as well as the Forward Processor function, if desired. This configuration would provide further savings in hardware and/or processing time when it is implemented in software, at the expense of an even longer delay.

Furthermore, although the present invention has been described with reference to turbo decoding, it is within the contemplation of the invention that it may be applied to other decoding techniques.

In conclusion, it will be understood that the above-described inventive concepts, or at least embodiments thereof, tend to provide at least some of the following advantages, singly or in combination: (i) The turbo decoding process of the preferred and enhanced embodiments described above provides a superior sliding window decoding technique with reduced storage memory of path metrics, compared to US 5,933,462.

This has been achieved by changing the timing of the Forward Processor and of the two Backward Processors.

Furthermore, in this manner, the APP Calculator is able to compute the a-posteriori probabilities for the received data in the correct (forward) sequence.

(ii) When there is an advantage in having only one Backward Processor accessing the storage memory, the turbo decoding process of the enhanced embodiment can operate according to the timing diagram of FIG. 9. In this manner, the first Backward Processor is configured to be always in the "warm-up" mode and the second Backward Processor is always in the "active" mode.

Hence, only one Backward Processor is configured to store the calculated backward path metrics in the storage memory.

Whilst the specific and preferred implementations of the embodiments of the present invention are described above, it is clear that one skilled in the art could readily apply variations and modifications of such inventive concepts.

Thus, a method for turbo decoding one or more data blocks, a communication system and a communication unit have been described, whereby the disadvantages associated with known prior art arrangements have been substantially alleviated.