

Title:
STRUCTURED LDPC DESIGN WITH VECTOR ROW GROUPING
Document Type and Number:
WIPO Patent Application WO/2006/065286
Kind Code:
A1
Abstract:
A structured parity-check matrix H is proposed, wherein H is an expansion of a base matrix Hb. Base matrix Hb comprises a section Hb1 and a section Hb2. Section Hb2 comprises a column hb having weight wh >= 3 and a section H'b2 having a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere. The 1's of hb and Hb1 are arranged such that mb/q groups can be formed so that the q rows of Hb within each group do not intersect. Furthermore, the rows of base matrix Hb can be permuted such that every two consecutive rows do not intersect.

Inventors:
BLANKENSHIP YUFEI W (US)
BLANKENSHIP T KEITH (US)
CLASSON BRIAN K (US)
Application Number:
PCT/US2005/025277
Publication Date:
June 22, 2006
Filing Date:
July 18, 2005
Assignee:
MOTOROLA INC (US)
BLANKENSHIP YUFEI W (US)
BLANKENSHIP T KEITH (US)
CLASSON BRIAN K (US)
International Classes:
H03M13/05
Domestic Patent References:
WO2004102810A12004-11-25
Foreign References:
US 6718508 B2 (2004-04-06)
US 10/839,995 (2004-05-06)
Other References:
ROBERT XU ET AL., HIGH GIRTH LDPC CODING FOR OFDMA PHY, 3 November 2004 (2004-11-03), Retrieved from the Internet
See also references of EP 1829222A4
Attorney, Agent or Firm:
Haas, Kenneth A. (Schaumburg, IL, US)
Claims:
1. A method for operating a transmitter that generates parity-check bits p = (p0, ..., pm-1) based on a current symbol set s = (s0, ..., sk-1), the method comprising the steps of: receiving the current symbol set s = (s0, ..., sk-1); using a matrix H to determine the parity-check bits; and transmitting the parity-check bits along with the current symbol set; wherein H is an expansion of a base matrix Hb having mb rows, with Hb comprising a section Hb1 and a section Hb2, and Hb2 comprises a column hb having weight wh >= 3 and a section H'b2 having a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere; wherein the 1's of hb and Hb1 are arranged such that one or more groups of the rows of Hb can be formed so that the rows of Hb within each group do not intersect.
2. The method of claim 1 wherein the rows within the one or more groups are substantially the mb rows.
3. The method of claim 1 wherein there are mb/q groups of the rows of Hb, each group having q rows.
4. The method of claim 1 wherein the rows of base matrix Hb can be permuted such that every two consecutive rows do not intersect.
5. The method of claim 1 wherein the plurality of groups do not have uniform sizes.
6. The method of claim 1 wherein, when expanding the base matrix Hb to parity-check matrix H, identical submatrices are used for each of the 1's in each column of H'b2, and the expansion uses paired submatrices for an even number of 1's in hb.
7. The method of claim 1 wherein the submatrices are z×z shifted identity matrices.
8. An apparatus comprising: storage means for storing a matrix H; and digital logic receiving a signal vector and estimating the information block s = (s0, ..., sk-1) based on the received signal vector and the matrix H; wherein H is an expansion of a base matrix Hb with Hb comprising a section Hb1 and a section Hb2, and Hb2 comprises a column hb having weight wh >= 3 and a section H'b2 having a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere; wherein the 1's of hb and Hb1 are arranged such that one or more groups of the rows of Hb can be formed so that the rows of Hb within each group do not intersect.
9. The apparatus of claim 8 wherein the rows of base matrix Hb can be permuted such that every two consecutive rows do not intersect.
Description:
STRUCTURED LDPC DESIGN WITH VECTOR ROW GROUPING

The present invention relates generally to encoding and decoding data, and in particular to a method and apparatus for encoding and decoding data utilizing low-density parity-check (LDPC) codes.

Background of the Invention

As described in United States Patent Application Serial No. 10/839,995, which is incorporated by reference herein, a low-density parity-check (LDPC) code is a linear block code specified by a parity-check matrix H. In general, an LDPC code is defined over a Galois Field GF(q), q >= 2. If q=2, the code is a binary code. All linear block codes can be described as the product of a k-bit information vector s(1×k) with a code generator matrix G(k×n) to produce an n-bit codeword x(1×n), where the code rate is r=k/n. The codeword x is transmitted through a noisy channel, and the received signal vector y is passed to the decoder to estimate the information vector s.
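As a toy illustration of these relations (a hypothetical sketch, not code from this application): for a systematic parity-check matrix H = [A | I], taking G = [I | A^T] gives G·H^T = A^T + A^T = 0 over GF(2), so every codeword x = s·G satisfies the parity-check condition. The matrix A below is an arbitrary example chosen only for illustration.

```python
# Hypothetical toy code (illustration only): for H = [A | I_m] the
# generator is G = [I_k | A^T], since G.H^T = A^T + A^T = 0 over GF(2).
def mat_mul_gf2(A, B):
    """Binary matrix product over GF(2)."""
    return [[sum(a & b for a, b in zip(row, col)) % 2 for col in zip(*B)]
            for row in A]

def transpose(M):
    return [list(col) for col in zip(*M)]

k = m = 3                                        # rate r = k/n = 3/6 = 1/2
A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]
I = [[int(i == j) for j in range(3)] for i in range(3)]
H = [A[i] + I[i] for i in range(3)]              # H = [A | I]
G = [I[i] + transpose(A)[i] for i in range(3)]   # G = [I | A^T]

s = [[1, 0, 1]]                                  # information vector (1 x k)
x = mat_mul_gf2(s, G)                            # codeword x = s.G (1 x n)
syndrome = mat_mul_gf2(H, transpose(x))          # H.x^T, all-zero for a codeword
```

Here `syndrome` comes out all-zero, which is exactly the condition Hx^T = 0^T discussed below.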

Given an n-dimensional space, the rows of G span the k-dimensional codeword subspace C, and the rows of the m×n parity-check matrix H span the m-dimensional dual space C⊥, where m = n−k. Since x=sG and GH^T=0, it follows that xH^T=0 for all codewords in subspace C, where "T" denotes matrix transpose. In the discussion of LDPC codes, this is generally written as

Hx^T = 0^T, (1)

where 0 is a row vector of all zeros, and the codeword x = [s p] = [s0, s1, ..., sk-1, p0, p1, ..., pm-1], where s0, ..., sk-1 are the systematic bits, equal to the information bits within the information vector. For an LDPC code the density of non-zero entries in H is low, i.e., there is only a small percentage of 1's in H, allowing better error-correcting performance and simpler decoding than using a dense H. A parity-check matrix can also be described by a bipartite graph. The bipartite graph is not only a graphic description of the code but also a model for the decoder. In the bipartite graph, each codeword bit (therefore each column of H) is represented by a variable node on the left, and each parity-check equation (therefore each row of H) is represented by a check node on the right. Each variable node corresponds to a column of H and each check node corresponds to a row of H, with "variable node" and "column" of H referred to interchangeably, as are "check node" and "row" of H. The variable nodes are only connected to check nodes, and the check nodes are only connected to variable nodes. For a code with n codeword bits and m parity bits, variable node vi is connected to check node cj by an edge if codeword bit i participates in check equation j, i = 0, 1, ..., n-1, j = 0, 1, ..., m-1. In other words, variable node i is connected to check node j if entry hji of the parity-check matrix H is 1. Mirroring Equation (1), the variable nodes represent a valid codeword if all check nodes have even parity.
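The column/row adjacency just described can be built mechanically; the sketch below (a hypothetical helper, not from the application) connects variable node i to check node j exactly when H[j][i] = 1.

```python
# Sketch: build the bipartite-graph adjacency of a parity-check matrix.
# Variable node i (column i) is connected to check node j (row j)
# whenever entry H[j][i] is 1.
def tanner_graph(H):
    m, n = len(H), len(H[0])
    var_to_check = {i: [j for j in range(m) if H[j][i] == 1] for i in range(n)}
    check_to_var = {j: [i for i in range(n) if H[j][i] == 1] for j in range(m)}
    return var_to_check, check_to_var

# A small hypothetical H (m=3 checks, n=4 variables), for illustration:
H = [[1, 1, 0, 1],
     [0, 1, 1, 0],
     [1, 0, 1, 1]]
v2c, c2v = tanner_graph(H)   # e.g. variable node 0 joins checks 0 and 2
```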

An example is shown below to illustrate the relationship between the parity-check matrix, the parity-check equations, and the bipartite graph. Let an n = 12, rate-1/2 code be defined by

with the left side portion corresponding to k (=6) information bits s, the right side portion corresponding to m (=6) parity bits p. Applying (1), the H in (2) defines 6 parity-check equations as follows:

x0 + x2 + x6 + x7 = 0
x2 + x5 + x6 + x8 + x9 = 0
x1 + x4 + x10 + x11 = 0
x3 + x5 + x6 + x11 = 0    (3)

H also has the corresponding bipartite graph shown in FIG. 1.

The general LDPC code described above may not be easy to implement in practice. Structures are often introduced into the parity-check matrix to allow fast encoding and decoding without sacrificing the error-correcting performance. A structured LDPC code design starts with a small mb×nb binary base matrix Hb, makes z copies of Hb, and interconnects the z copies to form a large m×n H matrix, where m = mb×z, n = nb×z. Using the matrix representation, to build an H from Hb each 1 in Hb is replaced by a z×z permutation submatrix, and each 0 in Hb is replaced by a z×z

all-zero submatrix. The representation of the expansion of Hb is called the model matrix and is denoted by Hbm. Thus Hbm is simply a shorthand notation for H when z is known. This procedure essentially maps each edge of Hb to a vector edge of length z in H, each variable node of Hb to a vector variable node of length z in H, and each check node of Hb to a vector check node of length z in H. For a structured LDPC code, the z×z submatrix may be a permutation matrix, a sum of permutation matrices, or any type of binary matrix. Since a permutation matrix P has a single 1 in each row and a single 1 in each column, the weight distribution of the expanded matrix H is the same as that of the base matrix Hb if the permutation submatrix is used. Therefore, the weight distribution of Hb is chosen as close to the desired final weight distribution as possible. The permutation submatrices comprising H can be very simple without compromising performance, such as simple cyclic shifts and/or bit-reversals. In the case of cyclic shifts, Hbm can be written by replacing the 1's in Hb by non-negative integers that represent the shift size and the 0's in Hb by -1.

In the transmitter, a vector s of k information bits is encoded based on H (or equivalently Hbm) to produce a vector x of n code bits, where k = (nb-mb)×z. Vector x is sent through a noisy channel and a vector y of n contaminated signals is received. At the receiver, the LDPC decoder attempts to estimate x based on the received vector y and the parity-check matrix H. To decode y and estimate the original information sequence s, an iterative decoding algorithm, such as belief propagation, is usually applied based on the bipartite graph. Soft information in the format of log-likelihood ratios (LLRs) of the codeword bits is passed between the bank of variable nodes and the bank of check nodes.
The iteration is stopped either when all check equations are satisfied or a maximum allowed iteration limit is reached.
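As a simplified stand-in for the belief-propagation decoder described above (which passes soft LLRs), the sketch below uses hard-decision bit flipping; it only illustrates the iterate-until-all-checks-satisfied loop, on a hypothetical toy H.

```python
# Simplified illustration of iterative LDPC decoding (hard-decision
# bit flipping, a stand-in for belief propagation). The loop stops when
# all check equations are satisfied or an iteration limit is reached.
def syndrome(H, x):
    return [sum(h & b for h, b in zip(row, x)) % 2 for row in H]

def bit_flip_decode(H, y, max_iter=20):
    x = list(y)
    m, n = len(H), len(H[0])
    for _ in range(max_iter):
        s = syndrome(H, x)
        if not any(s):
            break                      # all parity checks satisfied: stop early
        # Flip the bit participating in the most unsatisfied checks.
        fails = [sum(s[j] for j in range(m) if H[j][i]) for i in range(n)]
        x[fails.index(max(fails))] ^= 1
    return x, syndrome(H, x)

# Hypothetical toy matrix; y is a codeword with its first bit flipped.
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
decoded, final_syndrome = bit_flip_decode(H, [0, 0, 1, 1, 1, 0])
```

For this toy case the single flipped bit is repaired on the first pass and the loop exits with an all-zero syndrome.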

Structured LDPC codes may also be decoded with a layered decoder. A layered decoder typically has hardware to process an entire row at one time. The layered decoder can potentially reduce the number of iterations required to achieve a given level of performance, and can potentially increase throughput if not enough hardware exists to process all block rows at one time. Layer grouping can also be used, where the base matrix Hb is constrained such that groups of base rows do not intersect, which means that the base rows within a group have at most a single 1 within a base column (or equivalently, within each group the rows of Hbm have at most a single non-negative entry within a column). Layer grouping can be used to further increase LDPC decoder speed since fewer iterations are needed to achieve a certain error-correcting performance.
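The grouping constraint can be checked mechanically. The sketch below uses hypothetical helper names and the model-matrix convention above (entry >= 0 means a non-zero block, -1 a zero block) to test whether the rows of a candidate group intersect.

```python
# Sketch: test whether a group of model-matrix rows "do not intersect",
# i.e. every base column holds at most one non-negative entry among the
# rows of the group (at most a single 1 per base column in Hb).
def rows_do_not_intersect(Hbm, group):
    for col in range(len(Hbm[0])):
        if sum(1 for r in group if Hbm[r][col] >= 0) > 1:
            return False
    return True

# Hypothetical 4x6 model matrix (-1 denotes an all-zero submatrix):
Hbm = [[ 3, -1,  0, -1, -1, -1],
       [-1,  5, -1,  0,  0, -1],
       [-1, -1,  7, -1,  0, -1],
       [ 2, -1, -1, -1, -1,  0]]
# Rows 0 and 1 may share a group; rows 0 and 3 collide in column 0.
```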

In addition, the base matrix and assignment of permutation matrices for a given target parity-check matrix H can be designed to provide an LDPC code that has good error-correcting performance and can be efficiently encoded and decoded. In United States Patent Application Serial No. 10/839,995, a structured parity-check matrix H is described, wherein H is an expansion of a base matrix Hb and wherein Hb comprises a section Hb1 and a section Hb2, and wherein Hb2 comprises a first part comprising a column hb having an odd weight greater than 2, and a second part comprising matrix elements for row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere. The expansion of the base matrix Hb uses identical submatrices for the 1's in each column of the second part H'b2, and the expansion uses paired submatrices for an even number of 1's in hb.

Although layered decoding with layer grouping can be used to potentially reduce the amount of processing and potentially increase throughput, a technique does not exist for designing the base matrix and assigning the permutation matrices for a given target H size which allows efficient encoding and layered decoding with layer grouping. Therefore, a need exists for building features into structured LDPC codes so that they can be encoded efficiently and decoded at high speed with a layered decoder.

Brief Description of the Drawings

FIG. 1 shows a parity-check processing flow.

FIG. 2 through FIG. 4 show FER performance for rates 1/2, 2/3, and 3/4.

Detailed Description of the Drawings

For efficient encoding and good error-correcting performance in a structured LDPC design, the parity portion Hb2 comprises groups of size q within Hb2, and the same grouping of Hb2 is extended to the information portion Hb1. Specifically, the parity-check matrix H is constructed as follows.

(1) The parity portion Hb2 has the format

Hb2 = [hb | H'b2]

The column hb has weight wh >= 3, and the 1's of hb are arranged such that mb/q groups can be formed, where the q rows of Hb2 within each group do not intersect.

Two or more base rows (of Hb or Hb2) are said to not intersect if the group of rows has at most one 1 entry within each base column. In other words, two rows do not intersect if the dot product of the two rows is zero.

In one example, group j contains the rows with indices g(j) = {j, j+mb/q, j+2×mb/q, ...}. One example of column hb contains three non-zero entries: hb(0) = 1, hb(mb-1) = 1, and hb(a) = 1, where a ∉ g(0) and a ∉ g(mb/q-1). For example, when q=2 and mb=24, the pairs of base rows j and j+12 have at most one non-zero entry within each base column, for j=0 to 11. For mb=12 and q=2, the hb column preferably has weight 3, with hb(0)=1, hb(mb-1)=1, and hb(a)=1.
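Assuming q divides mb, the index sets g(j) above can be sketched as follows (hypothetical helper names; mb=12 and q=2 as in the text, with a = 4 as one valid middle-row choice for hb).

```python
# Sketch of the grouping g(j) = {j, j+mb/q, j+2*mb/q, ...}, assuming
# q divides mb. With mb=12, q=2 this pairs base rows j and j+6.
def group_indices(j, mb, q):
    step = mb // q
    return [j + t * step for t in range(q)]

mb, q = 12, 2
groups = [group_indices(j, mb, q) for j in range(mb // q)]

# A weight-3 hb places 1's at rows 0, mb-1, and a middle row a chosen
# outside both g(0) and g(mb/q - 1); a = 4 is one valid choice here.
a = 4
g_first = group_indices(0, mb, q)           # contains row 0
g_last = group_indices(mb // q - 1, mb, q)  # contains row mb-1
```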

In another example, the groups are non-uniform, where at least one group has a different number of rows than another group. Note that by definition the first and second base rows cannot be in the same group because the rows intersect. Two adjacent rows intersect because the second part of Hb2 comprises matrix elements for row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere. However, in the decoder the order of processing rows can be changed without affecting performance. Therefore, base row or base column permutations and rearrangements of Hb may be performed while still maintaining desirable decoding properties. It is straightforward to show that rows and columns may be rearranged such that all rows within a group are adjacent, while still maintaining the non-intersection properties of the construction above.

(2) The information portion Hb1 is constructed such that all q rows within group g(j) intersect in <= l positions, where l is equal to the number of columns in Hb that have column weight greater than mb/q. If the column weights are all less than or equal to mb/q, condition (2) is covered by condition (1) by stating that the rows of Hb within a group do not intersect.

In some systems, different base matrices may be needed. For example, multiple base matrices may be used for a given code rate, or one base matrix may be used for each code rate. The non-intersecting construction described may be used for a subset of all base matrices. For example, if three base matrices with the same number of columns are needed for R = 1/2, 2/3, and 3/4, then the non-intersecting construction may be used for R = 1/2 and 2/3, but not used for R = 3/4 to maintain good performance. Alternatively, the non-intersecting construction may be used for R=1/2 and not for R=2/3 and R=3/4.

As an example of performance considerations, consider base matrices for code rates 1/2, 2/3, and 3/4 that have 24 columns and 12, 8, and 6 rows, respectively. The performance of the resulting code may suffer, especially when the size of the base matrix is relatively small (e.g., R=2/3 and 3/4 with 24 columns). This is because good performance requires a good column weight distribution (particularly for the columns associated with the information positions). For example, weight-4 columns may be required for good performance in a R=3/4 code, but a 6×24 base matrix may only have columns up to weight 3 under the constraint that pairs of base matrix rows have no overlapping entries.

Implementation Architecture

The processing of groups of bits in a structured code is examined in further detail. For a structured LDPC code with expansion factor z, the z parity checks within a vector check node (corresponding to a row of the base matrix) can be computed in parallel. This is because the code structure (which contains permutations) guarantees that the message from any given variable node within a vector variable node (corresponding to a column of the base matrix) is needed by at most one of the z parity check nodes within a vector check node. An exemplary block diagram of the parity-check processing flow is provided in Figure 1. The grouped messages μj from the vector variable nodes to vector check node i, 1 <= j <= dr(i), corresponding to the dr(i) non-zero entries of the i-th row of the base matrix, are cyclically permuted according to the permutation submatrix Pj, 1 <= j <= dr(i), and presented to the z parallel parity check circuits Cl within vector check node i, 1 <= l <= z. The parity check circuitry produces messages which are inverse permuted to obtain the updated messages μj(new), which can be utilized in subsequent decoding steps. Note that dr(i) is denoted k in the figure.

The digital logic in the processing blocks of Figure 1 can be entirely pipelined; that is, the intermediate results stored by any register are not needed to generate results for any previous register in the circuit. As described in the figure, once messages are passed into the circuit, updated messages are generated D cycles later.

Given this model, consider a base matrix where for any two rows, say r and s, the sets of columns with non-trivial (non-zero) entries do not intersect. Thus, the vector parity check nodes corresponding to these two rows use (and update) entirely different sets of messages, which are related to two different sets of vector variable nodes. In this case, since the circuitry of Figure 1 is pipelined, the vector parity checks for both row r and row s can be computed in D+1 cycles. This is done by feeding the messages for row s one cycle later than those for row r into a processing unit depicted in Figure 1. If the messages for row r are fed in at time t, they will be updated at time t+D, followed by the update of the row s messages at time t+D+1. This can be represented with Figure 1 and another copy of Figure 1 below the first copy, offset by a cycle.

In a fully pipelined approach, the base matrix is designed so that the rows of Hb can be divided into mb/2 groups, where for the two rows within a group, the sets of columns with non-trivial entries do not intersect. Note that the grouped rows do not have to be consecutive because the decoder could be controlled to process the parity-check matrix rows out of order. Alternatively, the vector rows of the parity-check matrix H can be permuted such that every two consecutive vector rows do not intersect, since row permutation has no effect on decoding performance. In the fully pipelined approach the throughput can be nearly doubled relative to the case where there are no paired rows (which requires 2D clock cycles to process 2 rows). This is because processing of any subsequent row must be delayed until all messages on the current row being processed have been updated. Therefore fully pipelined decoding allows a significant throughput increase without extra hardware cost. It can also be viewed that the fully pipelined design achieves almost the same throughput as a design that uses twice as much hardware, where two vector rows within a group are decoded simultaneously on two processing units. Processing two rows that are non-intersecting can be viewed as Figure 1 with another Figure 1 to the right, with a total delay of 2D.

On the other hand, a hybrid approach is possible if some rows of Hb do not have a non-intersecting row. For example, some base matrix rows could be paired with non-intersecting rows while some could remain unpaired. In this case the throughput could be increased, possibly without the performance penalty of the fully pipelined approach, since the maximum column weight is limited to mb/2 when fully pipelined decoding is performed. Another approach involves a modification of the decoding paradigm. In this case, the processing of a subsequent row is not delayed until all messages on the current row are updated, even if the current row and the subsequent row intersect.

Instead, after the messages for a first row are fed into the parity check circuits, the messages for a second row are introduced at a one-cycle delay. The performance of this scheme will suffer because the second row does not reap the benefit of any updated messages from the first row. It may be possible to mitigate the performance impact by reducing the intersection between pairs of rows (rather than eliminating the intersection entirely) while achieving the desired error-correcting capability. Thus, a compromise between the error-correcting performance and the decoding speed can be reached.

A further approach involves following the standard decoding paradigm (processing all rows fully before subsequent rows are begun) on early iterations and switching to the modified decoding paradigm discussed above on later iterations.

In the discussion above, a group size of 2 is assumed. In general, the base matrix may be designed such that the mb rows of Hb can be divided into mb/q groups, where the q vector rows within each group do not intersect (called "q-grouping" in the following). When fully pipelined, q vector rows can be started in the pipeline consecutively, with one cycle of separation between consecutive vector rows. Thus the q rows can be finished in D+q-1 cycles, and the throughput of the q-grouping design is nearly q times that of a design where no grouping exists, since without grouping q vector rows take D×q cycles to compute. A parity-check matrix with grouping size q has a maximum allowed base column weight of mb/q. Thus q should be selected properly so that good error-correcting performance can be achieved with the maximum allowed column weight.
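The cycle counts above can be put into a back-of-envelope helper (hypothetical names; D is the pipeline depth of Figure 1):

```python
# Back-of-envelope cycle counts from the text: q non-intersecting vector
# rows started one cycle apart finish in D+q-1 cycles, versus D*q cycles
# when rows intersect and each must wait for the previous to complete.
def cycles_grouped(D, q):
    return D + q - 1

def cycles_ungrouped(D, q):
    return D * q

D, q = 10, 4      # hypothetical pipeline depth and group size
speedup = cycles_ungrouped(D, q) / cycles_grouped(D, q)  # approaches q as D grows
```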

Although in the preferred embodiment all groups uniformly have q vector rows and the circuitry is fully utilized, it is possible to design an Hb where the groups do not have uniform sizes. In one example, floor(mb/q) groups contain q vector rows each, while one group contains rem(mb, q) vector rows.

Although the design is discussed under the consideration of layered decoding, a parity-check matrix with the q-grouping design can be decoded using any other decoding architecture. For example, belief propagation decoding with flooding scheduling is still applicable, where all check nodes are processed simultaneously, and then all variable nodes are processed simultaneously.

Code Description

Each of the LDPC codes is a systematic linear block code. Each LDPC code in the set of LDPC codes is defined by a matrix H of size m-by-n, where n is the length of the code and m is the number of parity check bits in the code. The number of systematic bits is k=n-m.

The matrix H is defined as an expansion of a base matrix and can be represented by l 0,0 0,1 l 0,« ft -2 0,« 4 -l

1,0 1,1 1 ,2 l l,n*-2

H = 2,0 2,1 L 2,« 4 -2 = p w *

P X m 4 -1.0 P X Bi 4 -1,1 Bi 4 -l,n 4 -2 where Pj j is one of a set of z-by-z right-shifted identity matrices or a z-by-z zero matrix. The matrix H is expanded from a binary base matrix H b of size /Wb-by-Wb, where n = z - n b and m = z - m b , and z is a positive integer. The base matrix is expanded by replacing each 1 in the base matrix with a z-by-z right-shifted identity matrix, and each 0 with a z-by-z zero matrix. Therefore the design accommodates various packet sizes by varying the submatrix size z.

Because each permutation matrix is specified by a single circular right shift, the binary base matrix information and the permutation replacement information can be combined into a single compact model matrix Hbm. The model matrix Hbm is the same size as the binary base matrix Hb, with each binary entry at (i,j) of the base matrix Hb replaced to create the model matrix Hbm. Each 0 in Hb is replaced by a negative value (e.g., by -1) to denote a z×z all-zero matrix, and each 1 in Hb is replaced by a circular shift size p(i,j) >= 0. The model matrix Hbm can then be directly expanded to H.
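The expansion of Hbm to H can be sketched as follows (hypothetical helper; the "right shift by p" convention assumed here places the 1 of row i at column (i+p) mod z, i.e. a circular right shift of the identity):

```python
# Sketch: expand a model matrix Hbm into the binary matrix H. An entry
# p >= 0 becomes a z x z identity circularly right-shifted by p (row i
# has its 1 at column (i+p) mod z); a negative entry becomes a z x z
# all-zero matrix.
def expand(Hbm, z):
    mb, nb = len(Hbm), len(Hbm[0])
    H = [[0] * (nb * z) for _ in range(mb * z)]
    for bi in range(mb):
        for bj in range(nb):
            p = Hbm[bi][bj]
            if p >= 0:
                for i in range(z):
                    H[bi * z + i][bj * z + (i + p) % z] = 1
    return H

# Tiny hypothetical model matrix: shift 1, zero block / zero block, shift 0.
H = expand([[1, -1],
            [-1, 0]], z=3)
```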

The base matrix Hb is partitioned into two sections, where Hb1 corresponds to the systematic bits and Hb2 corresponds to the parity-check bits, such that Hb = [(Hb1)mb×kb | (Hb2)mb×mb]. Section Hb2 is further partitioned into two sections, where vector hb has odd weight, and H'b2 has a dual-diagonal structure with matrix elements at row i, column j equal to 1 for i=j, 1 for i=j+1, and 0 elsewhere:

Hb2 = [hb | H'b2]

The base matrix has hb(0) and hb(mb-1) equal to 1. The base matrix structure avoids having multiple weight-1 columns in the expanded matrix.

In particular, the non-zero submatrices are circularly right-shifted identity matrices with particular circular shift values. Each 1 in H'b2 is assigned a shift size of 0, and is replaced by a z×z identity matrix when expanding to H. The two 1's located at the top and the bottom of hb are assigned equal shift sizes, and the third 1 in the middle of hb is given an unpaired shift size. The unpaired shift size is equal to 0.

Examples

As an example, code design for 19 code sizes from 576 to 2304 is described. Each base model matrix is designed for a shift size z0 = 96. A set of shift sizes {p(i,j)} is defined for the base model matrix and used for other code sizes of the same rate. For other code sizes, the shift sizes are derived from the base model matrix as follows. For a code size corresponding to expansion factor zf, its shift sizes {p(f,i,j)} are derived from {p(i,j)} by scaling p(i,j) proportionally,

p(f,i,j) = floor( p(i,j) / αf ), for p(i,j) > 0.

Note that αf = z0/zf, and floor(x) denotes the flooring function, which gives the nearest integer towards -∞.
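The scaling rule can be sketched as follows (hypothetical helper; it assumes, consistent with the model-matrix convention, that only positive shifts are scaled while 0 shifts and -1 zero-block entries pass through unchanged):

```python
# Sketch of shift-size scaling for expansion factor z_f:
# p(f,i,j) = floor(p(i,j) / alpha_f) with alpha_f = z0 / z_f, applied to
# positive shifts; 0 shifts and -1 (zero-block) entries pass through.
from math import floor

def scale_shift(p, z0, zf):
    if p <= 0:
        return p
    return floor(p * zf / z0)     # same as floor(p / (z0 / zf))

z0 = 96
scaled = scale_shift(34, z0, 48)  # base shift 34 at z_f = 48 -> 17
```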

The model matrices are tabulated below for three code rates of 1/2, 2/3 and 3/4. The design for rate 1/2 has 6 groups with group size q = 2. The design for rate 2/3 has 4 groups with group size q = 2. The design for rate 3/4 does not use the non-intersecting construction, in order to achieve good performance.

Rate 1/2:

The base model matrix has size nb = 24, mb = 12 and an expansion factor z0 = 96 (i.e., n = 24×96 = 2304). To achieve other code sizes n, the expansion factor zf is equal to n/24.

-1 -1 34 -1 -1 59 -1 -1 68 -1 25 -1  7  0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1
-1 89 -1 -1 -1 -1 -1 62 -1 -1 49 82 -1  0  0 -1 -1 -1 -1 -1 -1 -1 -1 -1
58 -1 -1 20 -1 -1 -1 11 -1 81 -1 16 -1 -1  0  0 -1 -1 -1 -1 -1 -1 -1 -1
-1 -1 86 -1 -1 -1 53 -1 -1 69 49 -1 -1 -1 -1  0  0 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 16 10 -1 -1  6  0 -1 -1 -1  0  0 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 49 66 -1 92 -1 -1 61 47 -1 -1 -1 -1 -1  0  0 -1 -1 -1 -1 -1
-1 13 -1 -1 -1 -1 15 33 -1 71 -1 65 -1 -1 -1 -1 -1 -1  0  0 -1 -1 -1 -1
34 -1 -1 -1 -1 48 -1 -1 19 28 -1 -1 -1 -1 -1 -1 -1 -1 -1  0  0 -1 -1 -1
-1 -1 58 -1 75 -1 -1 -1 64 -1 68 -1 -1 -1 -1 -1 -1 -1 -1 -1  0  0 -1 -1
-1 -1 -1 16 -1 -1 -1 92 47 -1 -1 64 -1 -1 -1 -1 -1 -1 -1 -1 -1  0  0 -1
-1 29 -1 -1  9 -1 -1 -1 -1 28 59 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1  0  0
-1 -1 -1 38 -1 -1 83 -1 50 86 -1 -1  7 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1  0

Rate 2/3:

The base model matrix has size nb = 24, mb = 8 and an expansion factor z0 = 96 (i.e., n = 24×96 = 2304). To achieve other code sizes n, the expansion factor zf is equal to n/24.

56 -1 -1 54 -1 75 -1 82 93 -1 -1 49 -1  3 83 -1  7  0 -1 -1 -1 -1 -1 -1
-1 47 36 -1 -1  4 62 -1 14 -1 -1 37 63 -1 -1 11 -1  0  0 -1 -1 -1 -1 -1
-1 61 -1 37 -1 -1 84 -1 54 -1  2 93 -1 23 -1 79  0 -1  0  0 -1 -1 -1 -1
84 -1 -1 77 -1 80 -1 31 78 -1  9 -1 65 -1 -1 58 -1 -1 -1  0  0 -1 -1 -1
-1 55 40 -1  8 -1 13 -1 -1 79 60 -1 95 -1 -1 30 -1 -1 -1 -1  0  0 -1 -1
11 -1 -1 45  0 -1 -1 10 -1 13 21 -1 -1 70 86 -1 -1 -1 -1 -1 -1  0  0 -1
35 -1  6 -1 16 40 -1 30 -1 57 -1 -1 89 -1 74 -1 -1 -1 -1 -1 -1 -1  0  0
-1 89 95 -1 77 -1 56 -1 -1 74 -1 14 -1 78 14 -1  7 -1 -1 -1 -1 -1 -1  0

Rate 3/4:

The base matrix has size nb = 24, mb = 6 and an expansion factor z0 = 96 (i.e., n = 24×96 = 2304). To achieve other code sizes n, the expansion factor zf is equal to n/24.

43 90 41 40 19 -1 -1 -1 -1 86 -1 83 26 74 50 -1 -1 62  7  0 -1 -1 -1 -1
-1 -1 95 61 84  2 16 -1 -1  0 -1 -1 -1 20 30 91 18 95 -1  0  0 -1 -1 -1
-1 -1 -1 87  0 -1 58 16 -1 87 16 -1 -1 93 -1 54 24 33  0 -1  0  0 -1 -1
-1 12 -1 -1 65 48 -1 10 10 95 -1 49 -1 52  6 -1 36 57 -1 -1 -1  0  0 -1
65 -1 31 -1 15 -1 12 -1  6 57  0 89  9 29 -1 -1 -1 75 -1 -1 -1 -1  0  0
-1 65 -1 -1 48 40 -1 83 18 45 29 -1 73 84 -1 77 -1 95  7 -1 -1 -1 -1  0

Performance

Performance of the updated Motorola design for 802.16e in an AWGN channel is shown in Figures 2, 3, and 4 for rates 1/2, 2/3, and 3/4. QPSK modulation was used. The block sizes n range from 576 to 2304 for all three code rates. The expansion factor z ranges from 24 to 96, as shown in the figures. The block size and the expansion factor are related by n = 24×z.

While the invention has been particularly shown and described with reference to a particular embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention. It is intended that such changes come within the scope of the following claims.