

Title:
COMBINED BLOCK-SYMBOL ERROR CORRECTION
Document Type and Number:
WIPO Patent Application WO/2014/070171
Kind Code:
A1
Abstract:
In a method for coding information using a coding scheme, a horizontal code is selected. Additionally, a matrix is selected. Encoding information symbols into an array based upon the selected horizontal code is performed. Moreover, encoding the columns of the array based upon the selected matrix is performed.

Inventors:
ROTH RON M (US)
VONTOBEL PASCAL OLIVIER (US)
Application Number:
PCT/US2012/062835
Publication Date:
May 08, 2014
Filing Date:
October 31, 2012
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
International Classes:
H03M13/05
Foreign References:
US20100218037A1 (2010-08-26)
US20100100790A1 (2010-04-22)
US20070277075A1 (2007-11-29)
US7634704B2 (2009-12-15)
US20050219070A1 (2005-10-06)
Other References:
BLAUM, M. et al., "New Array Codes for Multiple Phased Burst Correction", IEEE Transactions on Information Theory, vol. 39, 1 January 1993, IEEE Press, pp. 66-77.
METZNER, J. J. et al., "A General Decoding Technique Applicable to Replicated File Disagreement Location and Concatenated Code Decoding", IEEE Transactions on Information Theory, vol. 36, 1 July 1990, IEEE Press, pp. 911-917.
KOLAN, TOM et al., "Burst List Decoding of Interleaved Reed-Solomon Codes", Proc. 2012 IEEE International Symposium on Information Theory (ISIT), 1 July 2012, IEEE, pp. 81-85.
HASLACH, CHRISTOPH et al., "A Decoding Algorithm with Restrictions for Array Codes", IEEE Transactions on Information Theory, vol. 45, 1 November 1999, IEEE Press.
ZHANG, JIANWEN et al., "On Transformed Folded Shortened Reed-Solomon Codes for the Correction of Phased Bursts", 2005 Fifth International Conference on Information, Communications and Signal Processing, Bangkok, 6 December 2005, IEEE, pp. 1374-1378.
Attorney, Agent or Firm:
CHANG, Marcia Ramos et al. (Intellectual Property Administration, 3404 E. Harmony Road, MS 3, Fort Collins, Colorado, US)
Claims:
What is claimed is:

1. A method for encoding information symbols using a coding scheme, the method comprising:

selecting a horizontal code from a plurality of codes, wherein said horizontal code is a linear code over a field;

selecting a matrix, a prescribed length, and a prescribed height, wherein said matrix is selected from a plurality of matrices over said field, wherein said matrix comprises a number of rows equaling said prescribed height and a number of columns equaling said prescribed length multiplied by said prescribed height, wherein all column subsets of size less than a prescribed number within said matrix are linearly independent, and wherein a number of square sub-matrices formed by partitioning the column set of said matrix into a number of non-overlapping column subsets are invertible over said field;

encoding said information symbols into an array based upon said selected horizontal code; and

encoding said columns of said array based upon said selected matrix.

2. The method of Claim 1, wherein one encoding step constitutes a code which is a prescribed level of interleaving of said horizontal code and consists of arrays such that each row of said arrays belongs to said horizontal code.

3. The method of Claim 1, wherein said horizontal code has a prescribed minimum distance.

4. The method of Claim 1, wherein said method for encoding information using a coding scheme is operable to correct phased burst errors and symbol errors.

5. A method for communicating information reliably, the method comprising: transmitting a transmitted array of information symbols; receiving a received array of encoded symbols, wherein said received array is corrupted by a first type of error, a second type of error, a third type of error, and a fourth type of error, wherein said first type of error is a block error, said second type of error is a block erasure, said third type of error is a symbol error, and said fourth type of error is a symbol erasure; and

decoding said received array of encoded symbols based at least on said corrupted array.

6. A method for encoding and decoding a code, the method comprising:

selecting a horizontal code from a plurality of codes, wherein said horizontal code is a linear code over a field;

selecting a matrix, a prescribed length, and a prescribed height, wherein said matrix is selected from a plurality of matrices over said field, wherein said matrix comprises a number of rows equaling said prescribed height and a number of columns equaling said prescribed length multiplied by said prescribed height, wherein all column subsets of size less than a prescribed number within said matrix are linearly independent, and wherein a number of square sub-matrices formed by partitioning the column set of said matrix into a number of non-overlapping column subsets are invertible over said field;

encoding said information symbols into an array based upon said selected horizontal code; and

encoding said columns of said array based upon said selected matrix.

7. The method of Claim 6, wherein said horizontal code is a generalized Reed-Solomon code.

8. The method of Claim 6, further comprising:

computing a syndrome array;

computing a modified syndrome array;

applying a decoder for said horizontal code based at least on said syndrome array; and

decoding said received array of information symbols by applying said error array to said received array of encoded symbols.

9. The method of Claim 8, further comprising:

computing a row in a matrix;

applying a decoder to a horizontal array based at least on said syndrome array and a row in said matrix; and

updating said received array and said syndrome array.

10. The method of Claim 8, further comprising:

computing a syndrome array;

computing a second matrix;

computing a polynomial using a Feng-Tzeng operation;

provided said Feng-Tzeng operation is successful, computing an error array; and

decoding said received array of information symbols by applying said error array to said received array of encoded symbols.

11. The method of Claim 10, further comprising:

computing a greatest common divisor based at least on the left kernel of said second matrix;

computing a root sub-set and a polynomial; and

computing said second matrix.

12. The method of Claim 10, further comprising:

computing a shortest linear recurrence of any nonzero column in said second matrix;

computing said root sub-set; and updating said root sub-set.

13. The method of Claim 10, further comprising:

computing a modified syndrome array;

computing a polynomial using a Feng-Tzeng operation;

provided said Feng-Tzeng operation is successful, computing an error array;

applying a decoder for said horizontal code; and

provided applying said decoder for said horizontal code is successful, updating said corrupted array.

Description:
COMBINED BLOCK-SYMBOL ERROR CORRECTION

BACKGROUND

[0001] In coding theory, concatenated codes are a class of error-correcting codes that are derived by combining an inner code and an outer code. Concatenated codes allow for the handling of symbol errors and erasures, and phased burst errors and erasures. However, many applications require a reduced number of parity symbols compared to those provided by concatenated codes.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] The accompanying drawings, which are incorporated in and form a part of this specification, illustrate and serve to explain the principles of embodiments in conjunction with the description. Unless specifically noted, the drawings referred to in this description should be understood as not being drawn to scale.

[0003] Figure 1 shows a block diagram of ingredients of a coding scheme, in accordance with one embodiment.

[0004] Figure 2A shows a diagram of an example array of information symbols, in accordance with one embodiment.

[0005] Figure 2B shows a diagram of an example encoded array comprising codeword symbols, in accordance with one embodiment.

[0006] Figure 2C shows a diagram of a corrupted array of encoded information symbols, in accordance with one embodiment.

[0007] Figure 3 is a flowchart of a method of encoding information using a coding scheme, in accordance with one embodiment.

[0008] Figure 4 is a flowchart of a method of communicating information reliably, in accordance with one embodiment.

[0009] Figures 5A-5B are example block diagrams of a method of encoding and decoding using a code, in accordance with one embodiment.

[0010] Figure 6 is a block diagram of a system used in accordance with one embodiment.

DESCRIPTION OF EMBODIMENTS

[0011] Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. While the subject matter will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the subject matter to these embodiments. Furthermore, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. In other instances, conventional methods, procedures, objects, and circuits have not been described in detail as not to unnecessarily obscure aspects of the subject matter.

Notation and Nomenclature

[0012]Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities.

Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.

[0013] It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present discussions terms such as "selecting", "encoding", "transmitting", "receiving", "computing", "applying", "decoding", "updating", or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.

[0014] Furthermore, in some embodiments, methods described herein can be carried out by a computer-usable storage medium having instructions embodied therein that when executed cause a computer system to perform the methods described herein.

Overview of Discussion

[0015] Example techniques, devices, systems, and methods for implementing a coding scheme are described herein. Discussion begins with a brief overview of a coding scheme, and how it addresses phase burst errors and erasures and symbol burst errors and erasures. Next, encoding using the coding scheme is described. Discussion continues with various embodiments used to decode the coding scheme. Next, several example methods of use are described. Lastly, an example computer environment is described.

Coding Scheme

[0016] Transmission and storage systems suffer from different types of errors contemporaneously. For example, a memory cell in a data storage system may be altered by an alpha particle that hits the memory cell. In some cases entire blocks of memory cells may become unreliable due to the degradation of hardware. Such data transmission and data storage systems can be viewed as channels that introduce symbol errors and block errors, where block errors encompass a plurality of contiguous information symbols. It should be understood that, as discussed herein, the terms phased burst errors and block errors may be used interchangeably. Moreover, if additional information (e.g., side information) is available, for instance based on previously observed erroneous behavior of a memory cell or cells, a symbol erasure or block erasure is modeled. In an embodiment, erasures differ from errors in that the location of an erasure is known while the location of an error is not. In various embodiments described herein, a coding scheme is operable to perform the task of a concatenated code using fewer parity symbols than a concatenated coding scheme performing the same task.

[0017] Figure 1 shows an example coding scheme 100 comprising a horizontal code (C) and a matrix 130 (H_in). Matrix 130 comprises a plurality of sub-matrices 135 (i.e., 135-1, 135-2, ..., 135-n). An outer (horizontal) encoder is derived from the code C, and an inner (vertical) encoder is derived from the matrix H_in. In some examples, these ingredients (i.e., C and H_in) and the corresponding encoders are determined off-line and fixed. In an embodiment, code C comprises the parameters n, k, and d, wherein n is the block length of the code C, k is the dimension of C (namely, the number of information symbols, not including the parity symbols), and d is the minimum Hamming distance of code C.

[0018] Figure 2A shows example information symbols 210 comprised within small squares in an array 205. In the example arrays described herein, every small square corresponds to an information symbol in F = GF(q), where q is an arbitrary prime power and GF(q) is the Galois field with q elements. In various examples, q is a small power of 2. In an embodiment, the small squares are arranged in the shape of an m x k rectangular array 205.

[0019] Figure 2B shows an encoded array 206 (Γ) of size m x n. In an example, once a code and a matrix 130 have been selected, encoding information symbols may begin using an encoder. The resulting symbols are referred to as codeword symbols 211. The encoding procedure contains two steps, an outer (also referred to herein as horizontal) encoding step and an inner (also referred to herein as vertical) encoding step. In the horizontal encoding step, for every j = 1, ..., m, the k symbols in the j-th row of the first array 205 are encoded with the help of a horizontal encoder for code C; the resulting n symbols are placed in the j-th row of the encoded array 206. In the vertical encoding step, for every i = 1, ..., n, the m symbols in the i-th column are encoded by a bijective mapping derived from the i-th sub-block of H_in; the resulting m symbols are placed in the i-th column of the third array 207.
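The two-step procedure above can be sketched in a few lines. The following is a minimal illustration, not the patent's encoder: it assumes a [4, 3] single-parity-check code over GF(2) as the horizontal code C and hand-picked invertible 2 x 2 binary matrices as the sub-blocks H_i; all concrete values are made up for the demo.

```python
# Minimal two-step (horizontal, then vertical) encoder over GF(2).
# Illustrative assumptions: C is a [4, 3] single-parity-check code (d = 2)
# and the H_i below are arbitrary invertible 2 x 2 matrices over GF(2).

M, K, N = 2, 3, 4            # array height m, dimension k, block length n

def horiz_encode(row):       # outer step: append even parity -> codeword of C
    return row + [sum(row) % 2]

# one invertible 2 x 2 matrix over GF(2) per column of the encoded array
H = [[[1, 0], [0, 1]],
     [[1, 1], [0, 1]],
     [[1, 0], [1, 1]],
     [[0, 1], [1, 0]]]

def vert_encode(col, Hi):    # inner step: column -> Hi * column over GF(2)
    return [sum(Hi[r][c] * col[c] for c in range(M)) % 2 for r in range(M)]

def encode(info):            # info is an m x k bit array (list of rows)
    rows = [horiz_encode(r) for r in info]                     # horizontal step
    cols = [[rows[h][j] for h in range(M)] for j in range(N)]
    enc_cols = [vert_encode(cols[j], H[j]) for j in range(N)]  # vertical step
    return [[enc_cols[j][h] for j in range(N)] for h in range(M)]

codeword = encode([[1, 0, 1], [0, 1, 1]])
```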

[0020] Figure 2C shows a corrupted array 200 (also referred to as Y) of size m x n which is created when encoded array 208 has passed through a channel that introduced errors into encoded array 208. There are various types of errors. As an example, a symbol error 220 occurs when the content of a small square is altered. A block error 230 (also referred to as a phased burst error) occurs when a plurality of small squares in a column 260 of an array 200 are altered. As similar examples, a symbol erasure 240 occurs when the content of a small square is erased, and a block erasure occurs when a plurality of small squares in a column 280 of an array 200 are erased.
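For illustration, these error types can be mimicked on a toy array. The sketch below is not from the patent: a (T1) block error rewrites every symbol of a column, a (T3) symbol error alters a single entry, and the particular alterations ("+ h + 1", "+ 1") are arbitrary stand-ins for channel noise; erasures would additionally record the affected locations as side information.

```python
# Corrupt an m x n array over GF(q) with block errors and symbol errors.
def corrupt(arr, block_cols, symbol_positions, q):
    y = [row[:] for row in arr]
    for j in block_cols:                 # (T1) phased burst / block error
        for h in range(len(y)):
            y[h][j] = (y[h][j] + h + 1) % q
    for h, j in symbol_positions:        # (T3) isolated symbol error
        y[h][j] = (y[h][j] + 1) % q
    return y

Y = corrupt([[0, 1], [2, 3]], block_cols=[0], symbol_positions=[(0, 1)], q=4)
```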

[0021] Note that the terms horizontal and vertical (and columns and rows) are terms used to describe a visualization of an array or matrix and may be interchanged (i.e., the visualization of an array may be turned on its side). In various examples decoders may be selected for combinations of errors (220, 230, 240 and 250) that are more efficient than a corresponding decoder for a suitably chosen Reed-Solomon code of length mn over F.

Encoding

[0022] In various embodiments, encoding is performed on information symbols 210 by applying a coding scheme 100 to information symbols 210. In describing the coding scheme 100, it is necessary to describe the channel model and code definition. In an example channel model, an m x n stored (also referred to herein as transmitted or encoded) array 206 (Γ) over F is subject to symbol errors 220, block errors 230, symbol erasures 240, and block erasures 250.

[0023] In one example, block errors 230 (also referred to as error type (T1)) are a subset of columns 260 in array 200 that may be indexed by

J ⊆ ⟨n⟩, Equation 1

[0024] where ⟨n⟩ denotes the set of integers {0, 1, ..., n - 1}, and ⟨a, b⟩ denotes the set of integers {a, a + 1, a + 2, ..., b - 1}.

[0025] In one example, block erasures 250 (also referred to as error type (T2)) are a subset of columns 260 in array 200 that may be indexed by

K ⊆ ⟨n⟩ \ J. Equation 2

[0026] In one example, symbol errors 220 (also referred to as error type (T3)) are a subset of symbols 210 in array 200 that may be indexed by

L ⊆ ⟨m⟩ × (⟨n⟩ \ (J ∪ K)). Equation 3

[0027] In one example, symbol erasures 240 (also referred to as error type (T4)) are a subset of symbols 210 in array 200 that may be indexed by

R ⊆ (⟨m⟩ × (⟨n⟩ \ K)) \ L. Equation 4

[0028] An error matrix (ε) over F represents the alterations that have occurred on encoded array 208 (e.g., alterations that may have occurred during transmission). The received array 200 (referred to herein as Y, or the corrupted message) to be decoded is given by the m x n matrix:

Y = Γ + ε Equation 5

[0029] In such an example, erasures are seen as errors with the additional side information K and R indicating the location of these errors.

[0030] In an example:

τ = |J|, p = |K|, ϑ = |L|, and ρ = |R|. Equation 6

[0031] In other words,

Table 1 : Types of Errors and Erasures Under Consideration

Error Erasure

(T1 ) (T2) Block column 260 set J column 260 set K

\ J j = T \ K |= p

(T3) (T4)

Symbol location set I, location set R

\ L \= ΰ \ R \= ρ

[0032] The total number of symbol errors 220 (resulting from error types (T1) and (T3)) is at most mτ + ϑ, and the total number of symbol erasures (resulting from erasure types (T2) and (T4)) is at most mp + ρ. Thus, all error and erasure types (220, 230, 240 and 250, or (T1), (T2), (T3) and (T4)) can be corrected (while occurring simultaneously) using a code of length mn over F with a minimum distance of at least

m(2τ + p) + 2ϑ + ρ + 1. Equation 7
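Assuming Equation 7 reads d ≥ m(2τ + p) + 2ϑ + ρ + 1 (as far as it can be recovered from the text), the count transcribes directly into a quick arithmetic check:

```python
# Direct transcription of Equation 7 (as reconstructed): minimum distance a
# length-mn code needs to correct tau block errors, p block erasures,
# theta symbol errors, and rho symbol erasures simultaneously.
def required_min_distance(m, tau, p, theta, rho):
    return m * (2 * tau + p) + 2 * theta + rho + 1

# e.g. m = 8: one block error, one block erasure, two symbol errors,
# one symbol erasure
d_min = required_min_distance(8, tau=1, p=1, theta=2, rho=1)  # 24 + 4 + 1 + 1
```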

[0033] In an example, the code (C) is a linear code (with parameters [n, k, d]) over F. Matrix 130 (H_in) is an m x (mn) matrix over F that satisfies the following two properties for a positive integer (δ):

(a) every subset of δ - 1 columns in H_in is linearly independent (i.e., H_in is a parity-check matrix of a linear code over F of length mn and minimum distance at least δ); and

(b) H_in = ( H_0 | H_1 | ... | H_(n-1) ), Equation 8

with H_0, H_1, ..., H_(n-1) being m x m sub-matrices of H_in, wherein each H_i is invertible over F.

[0034] In an example, a codeword is defined to be an m x n encoded matrix (Γ)

Γ = ( Γ_0 | Γ_1 | ... | Γ_(n-1) ) Equation 9

[0035] over F (where Γ_j stands for column j of Γ) such that each row in

( H_0^(-1) Γ_0 | H_1^(-1) Γ_1 | ... | H_(n-1)^(-1) Γ_(n-1) ) Equation 10

[0036] is a codeword of C (horizontal code 120).

[0037] In an example, the code C̃ is an m-level interleaving of a horizontal code 120 (C), such that an m x n matrix

Z = ( Z_0 | Z_1 | ... | Z_(n-1) ) Equation 11

[0038] over F is a codeword if each row in Z belongs to C. Each column in Z then undergoes encoding by an inner encoder of rate one, wherein the encoder of column i is given by the bijective mapping Z_i → H_i Z_i.

Decoding

[0039] This section will address a plurality of decoders. First, a polynomial-time decoding process for all errors and erasures is presented. Next, specialized decoders are presented. The first specialized decoder corrects (T1), (T2), and (T4) errors and erasures but not (T3) (i.e., symbol error 220) errors. By defining the encoder with the help of C and H_in, and using the decoders described herein, the decoding complexity scales as n^3. In various examples, parameters such as m scale with n.

A. Polynomial-Time Decoding

[0040] In an embodiment, the horizontal code 120 (C) is a Generalized Reed-Solomon (GRS) code over F and H_in is an arbitrary m x (mn) matrix over F that satisfies two properties:

(a) every subset of δ - 1 columns in H_in is linearly independent (i.e., H_in is a parity-check matrix of a linear code over F of length mn and minimum distance at least δ); and

(b) H_in = ( H_0 | H_1 | ... | H_(n-1) ), Equation 12

[0041] with H_0, H_1, ..., H_(n-1) being m x m sub-matrices of H_in, wherein each H_i is invertible over F.

[0042] Columns of m x n arrays may be regarded as elements of the extension field GF(q^m) (according to some basis of GF(q^m) over F). In this example, the matrix Z is a codeword of a GRS code (referred to as C̃) over GF(q^m), where C̃ has the same code locators as the code C.

[0043] In an example, Γ is referred to as a codeword and is transmitted as an m x n array. In this example, Y is the received m x n array 200, which may have been corrupted by τ errors of type (T1) (block errors 230) and ϑ errors of type (T3) (symbol errors 220), wherein

τ ≤ (d - 2) / 2 Equation 13

[0044] (where d is the minimum distance of the horizontal code 120 (C) as discussed below) and

ϑ ≤ (δ - 1) / 2. Equation 14

[0045] First, an m x n array is computed:

Ỹ = ( H_0^(-1) Y_0 | H_1^(-1) Y_1 | ... | H_(n-1)^(-1) Y_(n-1) ), Equation 15

[0046] where Y 200 and Ỹ each contain τ + ϑ ≤ (d + δ - 3)/2 erroneous columns; in other words, Ỹ is a corrupted version of a codeword of C̃. In one example a list decoder for C̃ can be applied to Ỹ. In various examples, a list decoder returns a list of up to a prescribed number (herein referred to as ℓ) of codewords of C̃, and the returned list is guaranteed to contain the correct codeword, provided that the number of erroneous columns 280 in Ỹ does not exceed the decoding radius of C̃, which is ⌈n θ_ℓ(d/n)⌉ - 1, where θ_ℓ(d/n) is the maximum over s ∈ {1, 2, ..., ℓ} of the following expression:

Equation 16

[0047] Thus, if ℓ is such that

n θ_ℓ(d/n) ≥ (d + δ - 1) / 2, Equation 17

[0048] then the returned list will contain the correct codeword

Z = ( H_0^(-1) Γ_0 | H_1^(-1) Γ_1 | ... | H_(n-1)^(-1) Γ_(n-1) ) Equation 18

[0049] of C̃. For each array Z' in the list, the respective array 206 can be computed:

Γ' = ( H_0 Z'_0 | H_1 Z'_1 | ... | H_(n-1) Z'_(n-1) ). Equation 19

[0050] Only one, namely the transmitted array Γ, can correspond to an error pattern of up to (d/2) - 1 block errors and up to (δ - 1)/2 symbol errors. In other words, Γ can be recovered by checking each computed Γ' against the received array Y 200.

[0051] In some examples, the coding scheme 100 can be generalized to handle (T2) and (T4) errors (i.e., block erasures 250 and symbol erasures 240) by applying a list decoder for the GRS code obtained by puncturing C̃ at the columns 260 that are affected by erasures. To perform this, the minimum distance (d) is replaced with d - p - ρ.

B. Decoding (T1), (T2), and (T4) Errors and Erasures but not (T3) (e.g., block errors 230, block erasures 250, and symbol erasures 240 but not symbol errors 220)

[0052] In one example, a code C is selected for the case where there are no (T3) errors (i.e., there are no symbol errors 220), or

|L| = ϑ = 0. Equation 20

[0053] In an example, an m x n matrix Γ 206 is transmitted and an m x n matrix

Y = Γ + ε Equation 21

[0054] is received, where

ε = ( ε_(h,j) )_(h ∈ ⟨m⟩, j ∈ ⟨n⟩) Equation 22

[0055] is an m x n error matrix, with J (⊆ ⟨n⟩) (and thus K (⊆ ⟨n⟩ \ J)) indexing the columns in which block errors (respectively, block erasures) have occurred, and R (⊆ ⟨m⟩ × ⟨n⟩) is a nonempty set of positions where symbol erasures have occurred. In some examples it is assumed that d, τ (= |J|), and p (= |K|) satisfy

2τ + p < d - 2 Equation 23

[0056] and that ρ (= |R|) satisfies

0 < ρ < m. Equation 24

[0057] In this example, Ỹ 200 and E are defined as

Ỹ = ( H_0^(-1) Y_0 | H_1^(-1) Y_1 | ... | H_(n-1)^(-1) Y_(n-1) ) Equation 25

[0058] and

E = ( E_(h,j) )_(h ∈ ⟨m⟩, j ∈ ⟨n⟩) = ( H_0^(-1) ε_0 | H_1^(-1) ε_1 | ... | H_(n-1)^(-1) ε_(n-1) ). Equation 26

[0059] Thus,

Ỹ = Z + E, Equation 27

[0060] where Z is given by

Z = ( H_0^(-1) Γ_0 | H_1^(-1) Γ_1 | ... | H_(n-1)^(-1) Γ_(n-1) ) Equation 28

[0061] as discussed above. In particular, in this example, every row of Z is a codeword of a horizontal code 120 taken to be a Generalized Reed-Solomon (GRS) code C = C_GRS, the latter being a linear code over F which is defined by the parity-check matrix H_GRS = ( α_j^i )_(i ∈ ⟨d-1⟩, j ∈ ⟨n⟩), where α_0, α_1, ..., α_(n-1) are distinct elements of F.
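The matrix H_GRS above is Vandermonde-type. As a sketch over a small prime field (the patent's F = GF(q) is general, and any column multipliers a GRS code may carry are omitted; the prime P and the locators below are illustrative values only):

```python
# Vandermonde-type GRS parity-check matrix H_GRS = (alpha_j^i),
# i in <d-1>, j in <n>, over GF(P) for a small prime P.
P = 13

def grs_parity_check(alphas, d):
    """Return the (d-1) x n matrix whose row i holds alpha_j**i mod P."""
    return [[pow(a, i, P) for a in alphas] for i in range(d - 1)]

# six distinct nonzero locators, d = 4  ->  a 3 x 6 parity-check matrix
H = grs_parity_check([1, 2, 3, 4, 5, 6], d=4)
```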

[0062] Next, denote the elements of R by

R = { (κ_ℓ, j_ℓ) : ℓ ∈ ⟨ρ⟩ }. Equation 29

[0063] In an example, for some ℓ ∈ ⟨ρ⟩, the univariate polynomial (of degree ρ - 1) is defined by

B^(ℓ)(y) = ∏_(i ∈ ⟨ρ⟩ \ {ℓ}) ( 1 - β_(κ_i, j_i) y ), Equation 30

[0064] where the β_(κ,j) are distinct and nonzero in F for all κ ∈ ⟨m⟩ and j ∈ ⟨n⟩; the respective matrix H_in = ( β_(κ,j)^h )_(h ∈ ⟨m⟩) is then a parity-check matrix of an [mn, m(n - 1), m + 1] GRS code over F, and where

( E^(ℓ)_j )_(j ∈ ⟨n⟩) Equation 31

[0065] denotes row ρ - 1 of the (m + ρ - 1) x n matrix whose entries are given by the coefficients of the bivariate polynomial product B^(ℓ)(y) E(y, x), where E(y, x) is the bivariate polynomial in x and y with the coefficient of y^h x^j being the entry of E that is indexed by (h, j). As an example,

supp( E^(ℓ) ) ⊆ J ∪ K ∪ { j_ℓ }. Equation 32

[0066] The contribution of a symbol erasure at position (κ, j) in ε to the column E_j(y) of E(y, x) is an additive term of the form

ε_(κ,j) T_m(y; β_(κ,j)) = ε_(κ,j) · ( 1 - β_(κ,j)^m y^m ) / ( 1 - β_(κ,j) y ), Equation 33

[0067] where for an element ζ ∈ F the polynomial T_m(y; ζ) is defined as

T_m(y; ζ) = Σ_(i ∈ ⟨m⟩) ζ^i y^i = ( 1 - ζ^m y^m ) / ( 1 - ζ y ). Equation 34

[0068] So, if

(κ, j) ≠ (κ_ℓ, j_ℓ), Equation 35

[0069] then the product

B^(ℓ)(y) T_m(y; β_(κ,j)) Equation 36

[0070] is a polynomial in which the powers y^(ρ-1), y^ρ, ..., y^(m-1) have zero coefficients.

[0071] At this point in the process, every row in the (m + ρ - 1) x n array

Z^(ℓ)(y, x) = B^(ℓ)(y) Z(y, x) Equation 37

[0072] is a codeword of C_GRS, where Z(y, x) is the bivariate polynomial in x and y with the coefficient of y^h x^j being the entry of Z that is indexed by (h, j). Therefore, by applying a decoder for C_GRS to row ρ - 1 of Z^(ℓ) with p + 1 erasures indexed by K ∪ { j_ℓ }, the vector may be decoded.

[0073] In this example, it follows from the definition of B^(ℓ) that for every ℓ:

Equation 38

[0074] In particular,

Equation 39

[0075] Because ℓ ∈ ⟨ρ⟩ was arbitrary, the error values in ε at the positions in R may be recovered. I.e.,

Equation 40

[0076] As such, in this example, symbol erasures may be eliminated from E.

[0077] In an example, Table 2 summarizes the process described above for a decoding process for (T1 ), (T2), and (T4) (i.e., block errors 230, block erasures 250, and symbol erasures 240).

Table 2: Decoding (T1), (T2), and (T4) Errors and Erasures but not (T3)

Input:

* Array Y of size m x n over F.
* Set K of indexes of column erasures.
* Set R = { (κ_ℓ, j_ℓ) : ℓ ∈ ⟨ρ⟩ } of positions of symbol erasures.

Steps:

1) Compute the m x (d - 1) syndrome array S = ( H_0^(-1) Y_0 | H_1^(-1) Y_1 | ... | H_(n-1)^(-1) Y_(n-1) ) H_GRS^T.

2) Compute the modified syndrome array, the unique m x (d - 1) matrix σ that satisfies the congruence σ(y, x) ≡ S(y, x) ∏_(j ∈ K) ( 1 - α_j x ) (mod x^(d-1)).

3) For every ℓ ∈ ⟨ρ⟩ do:

a) Compute row ρ - 1 in the unique ρ x (d - 1) matrix σ^(ℓ) that satisfies the congruence σ^(ℓ)(y, x) ≡ B^(ℓ)(y) σ(y, x) ( 1 - α_(j_ℓ) x ) (mod y^ρ, x^(d-1)), where B^(ℓ) is as in Equation 30.

b) Decode ε_(κ_ℓ, j_ℓ) (the entry of ε indexed by (κ_ℓ, j_ℓ)) by applying a decoder for C_GRS with row ρ - 1 of σ^(ℓ) as syndrome and assuming that the columns indexed by K ∪ { j_ℓ } are erased.

c) Update the received array Y and the syndrome array S accordingly.

4) For every h ∈ ⟨m⟩, apply a decoder for C_GRS with row h of S as syndrome and assuming that the columns indexed by K are erased. Let E be the m x n matrix whose rows are the decoded error vectors for all h ∈ ⟨m⟩.

5) Compute the error array ε = ( H_0 E_0 | H_1 E_1 | ... | H_(n-1) E_(n-1) ).

Output:

* Decoded array Γ = Y - ε of size m x n.

C. Decoding (T1), (T2), and (T4) Errors and Erasures and with Restrictions on Errors of Type (T3) (e.g., block errors 230, block erasures 250, and symbol erasures 240 and with restrictions on symbol errors 220)

[0078] In one example, a code C is selected (e.g., the code C is guaranteed to work) for the case where there are some restrictions on the symbol error 220 positions ((T3) errors), wherein, in an example, each column, except for possibly one, contains at most one symbol error. The positions of these errors are determined, thereby reducing the decoding to the case described in Section B, above. These restrictions always hold when |L| ≤ 3 and d is sufficiently large.

[0079] In an example, the same notation is used except that: (1) the set L is not necessarily empty; and (2) R is empty. As above, the number of block errors 230 (τ) and the number of block erasures 250 (p) satisfy

2τ + p < d - 2. Equation 41

[0080] In an example, when

ϑ = |L| > 0, Equation 42

[0081] and

L = { (κ_ℓ, j_ℓ) : ℓ ∈ ⟨ϑ⟩ }. Equation 43

[0082] In an example, there exists a w ∈ ⟨ϑ⟩ such that the values j_0, j_1, ..., j_w are all distinct, while j_w = j_(w+1) = ... = j_(ϑ-1). Equation 44

[0083] In this example, ϑ and w satisfy the inequalities

ϑ ≤ (m - 2) / 2 Equation 45

[0084] and

w + τ + p ≤ d - 2. Equation 46

[0085] In other words, the number of erroneous columns does not exceed d - 1.

[0086] In an example, ε_(κ_ℓ, j_ℓ) ≠ 0 for every ℓ ∈ ⟨ϑ⟩. The set { j_ℓ }_(ℓ ∈ ⟨w + 1⟩) will be denoted herein as L'. When ϑ = 0, w is defined to be 0 and L' to be the empty set.

[0087] In an example, the modified syndrome σ is the m x (d - 1) matrix that satisfies

σ(y, x) ≡ S(y, x) ∏_(j ∈ K) ( 1 - α_j x ) (mod x^(d-1)), Equation 47

[0088] and Ŝ is the m x (d - 1 - p) matrix formed by the columns of σ that are indexed by ⟨p, d - 1⟩. Note that μ = rank(Ŝ) = rank( (E)_(J ∪ L') ).

[0089] If μ ≥ 2w + 2, then the columns that are indexed by L' are full block errors 230 (i.e., errors of type (T1)), and

2(τ + w + 1) + p ≤ d + μ - 2. Equation 48

[0090] In an example,

μ ≤ 2w + 1. Equation 49

[0091] For every j ∈ J ∪ L', column E_j, namely the column of E that is indexed by j, belongs to colspan(Ŝ), where colspan(X) is the vector space spanned by the columns of the array X. This holds for j ∈ L' \ { j_w }, in which case E_j (in polynomial notation) takes the form

E_j(y) = ε_(κ,j) · T_m(y; β_(κ,j)). Equation 50

[0092] In an example, the row vectors a_0, a_1, ..., a_(m-μ-1) form a basis of the dual space of colspan(Ŝ), and for every i ∈ ⟨m - μ⟩, a_i(y) denotes herein the polynomial of degree less than m with coefficient vector a_i. Note that

a(y) = gcd( a_0(y), a_1(y), ..., a_(m-μ-1)(y) ); Equation 51

[0093] since the a_i's are linearly independent,

deg a(y) ≤ μ. Equation 52

[0094] In other words, a(y) has at most μ (≤ 2w + 1) distinct roots in F. For every ζ ∈ F, the column vector ( ζ^h )_(h ∈ ⟨m⟩) (also represented as T_m(y; ζ)) belongs to colspan(Ŝ) (and, hence, to colspan( (E)_(J ∪ L') )) if and only if ζ^(-1) is a root of a(y). In particular, β_(κ_ℓ, j_ℓ)^(-1) is a root of a(y) for every ℓ ∈ ⟨w⟩. The root subset

R = { (κ, j) : a( β_(κ,j)^(-1) ) = 0 } Equation 53

[0095] is denoted herein by R, and the polynomial A(y) is defined by

A(y) = Σ_i A_i y^i = ∏_((κ,j) ∈ R) ( 1 - β_(κ,j) y ), Equation 54

[0096] where η = |R|.

[0097] In an example, the (m - η) x n matrix Ê = ( Ê_(h,j) )_(h ∈ ⟨m - η⟩, j ∈ ⟨n⟩) is formed by the rows of A(y) E(y, x) that are indexed by ⟨m - η⟩. Namely,

Ê_(h,j) = Σ_i A_i E_(h-i, j), h ∈ ⟨m - η⟩, j ∈ ⟨n⟩. Equation 55

[0098] S is the (m - η)χ (d - 1 - p ) matrix formed by the rows of A{y)S(y r x) that are indexed by { , ¾), Therefore, (') ------- 0 for i e {¾ > and

E , ( r ) - Equation 58

[0099] The number of summands on the right side of Equation 58 is .9 - w, and that number is bounded from above by m - 2 w - 1 < m ~ μ ≤ m - η . This means that E jw (y) = 0 if and only if Α(β . ) = 0for all £ e ( , ι . Moreover,

rank(S̄) = rank((Ē_j)_{j ∈ L}) = μ - η. Equation 57

[00100] Next, the following three cases are distinguished.

1. Case 1: η = μ

[00101] According to Equation 57, Ē_{j_w}(y) = 0, which is equivalent to having A(β_{κ_ℓ, j_w}^{-1}) = 0 for all ℓ ∈ (w, θ). Thus, L ⊆ R, and the decoding is then reduced to the case described in Section B, above.

2. Case 2: η = μ - 1

[00102] If Ē_{j_w}(y) = 0 then L ⊆ R. Otherwise (according to Equation 57), each column in S̄ must be a scalar multiple of Ē_{j_w}. Note that herein Ē and E (as used in some equations) are used interchangeably. The entries of Ē_{j_w}, in turn, form a sequence that satisfies the (shortest) linear recurrence

B(y) = Σ_{i ∈ (|R'|+1)} B_i y^i = ∏_{(κ,j) ∈ R'} (1 - β_{κ,j} y), Equation 58

[00103] where

R' = {(κ_ℓ, j_w) : ℓ ∈ (w, θ) and A(β_{κ_ℓ, j_w}^{-1}) ≠ 0}. Equation 59

[00104] This recurrence is uniquely determined, since the number of entries in Ē_{j_w}, which is m - η = m - μ + 1 > m - 2w, is at least twice the degree |R'| (≤ θ - w) of B(y). The recurrence can be computed from any nonzero column of S̄.
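The patent does not prescribe a particular algorithm for finding the recurrence; the Berlekamp-Massey algorithm is the standard way to compute the shortest linear recurrence of a sequence over a field. A minimal sketch over a prime field GF(p) (the function name and field choice are illustrative assumptions):

```python
def berlekamp_massey(s, p):
    """Return (C, L): the shortest connection polynomial C (with C[0] = 1)
    and its length L, such that sum(C[i] * s[n - i] for i in range(L + 1))
    is 0 mod p for every n >= L."""
    C, B = [1], [1]      # current and previous connection polynomials
    L, m, b = 0, 1, 1    # L = current length, b = last nonzero discrepancy
    for n in range(len(s)):
        d = s[n]                         # discrepancy at position n
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p  # d / b in GF(p)
        T = C[:]
        if len(B) + m > len(C):
            C = C + [0] * (len(B) + m - len(C))
        for i in range(len(B)):
            C[i + m] = (C[i + m] - coef * B[i]) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C, L

# the sequence 2^n over GF(7) satisfies s_n = 2 * s_{n-1},
# i.e. connection polynomial 1 - 2y = 1 + 5y (mod 7)
assert berlekamp_massey([1, 2, 4, 1, 2, 4], 7) == ([1, 5], 1)
```

The returned polynomial plays the role of B(y) above; its degree L is the recurrence length.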

[0100] From there, L ⊆ R ∪ R' is derived, where

|R ∪ R'| = |R| + |R'|

≤ η + θ - w

≤ 2w + θ - w

= w + θ

≤ m. Equation 60

[0101] Once again, decoding can be reduced to the case in Section B, above.

3. Case 3: η ≤ μ - 2

[0102] If Ē_{j_w}(y) = 0 then (again) L ⊆ R. Hence, Ē can be decoded. As shown in Equation 56, for j = j_w, the vector Ē_j(y) can be referred to as a syndrome of the column vector

Ĕ_j(y) = Σ_{κ ∈ (m) : A(β_{κ,j}^{-1}) ≠ 0} E_{κ,j} y^κ Equation 61

[0103] with respect to the following parity-check matrix of a GRS code:

H_GRS = (v_κ · β_{κ,j}^h)_{h ∈ (m-η), κ ∈ (m)} Equation 62

[0104] where

v_κ = β_{κ,j} · A(β_{κ,j}^{-1}) if A(β_{κ,j}^{-1}) ≠ 0,

v_κ = 1 otherwise.

Equation 63

[0105] Since the Hamming weight of Ĕ_j is at most θ - w < (m - η)/2,

Ĕ_j can be decoded uniquely from Ē_j. Thus, for every κ such that A(β_{κ,j}^{-1}) ≠ 0, an error value E_{κ,j} is derived and subtracted from the respective entry of Y, thereby making R a superset of the remaining symbol errors 220. In an example, the above process is applied to every nonzero column in Ē with index j ∉ K. A decoding failure means that j is not j_w, and a decoding success for j ≠ j_w will just cause a coding scheme 100 to incorrectly change already corrupted columns in Y, without introducing new erroneous columns. Again, decoding of Y may proceed as in Section B, above.

[0106] Table 3 presents the implied decoding system for a combination of errors of the types (T1), (T2), and (T3) (block errors 230, block erasures 250, and symbol errors 220), provided that the type (T3) errors (symbol errors 220) satisfy requirements (a) and (b) above, including Equation 7. As discussed above, these equations hold when m ≤ d - ρ and the number of type (T3) errors (symbol errors 220) is at most 3.

Table 3: Decoding of (T1), (T2), and (T3) Errors and Erasures, with restrictions on the number of (T3) errors. For the sake of simplicity, there are no (T4) errors.

Input:

* Array Y of size m × n over F.
* Set K of indexes of column erasures.

Steps:

1) Compute the m × (d - 1) syndrome array

S = (H_0Y_0 | H_1Y_1 | ... | H_{n-1}Y_{n-1}) H_GRS^T.

2) Compute the m × (d - 1 - ρ) matrix S̃ formed by the columns of S(y, x) · ∏_{j ∈ K} (1 - α_j x) that are indexed by (ρ, d - 1). Let μ = rank(S̃).

3) Apply Steps 3-4 in Table 4 (with the set K) to the modified syndrome array σ to produce an error array E. If decoding is successful, go to Step 8.

4) a) Compute the greatest common divisor a(y) of a basis of the left kernel of S̃.
   b) Compute the set R and the polynomial A(·) as in Equations 53-54. Let η = |R|.
   c) Compute the (m - η) × (d - 1 - ρ) matrix S̄ formed by the rows of A(y)S(y, x) that are indexed by (m - η).

5) If η = μ - 1 then do:
   a) Compute the shortest linear recurrence B(y) of any nonzero column in S̄.
   b) Compute the set R' = {(κ, j) : A(β_{κ,j}^{-1}) ≠ 0 and B(β_{κ,j}^{-1}) = 0}.
      If |R'| = deg B(y) and |R'| ≤ m - η, then update R ← R ∪ R'.

6) Else if η ≤ μ - 2 then do:
   a) Apply Steps 2-4 in Table 4 (with K = ∅) to the syndrome array S̄ to produce an error array Ē.
   b) For every nonzero column Ē_j of Ē:
      i) apply a decoder for the GRS code with the parity-check matrix H_GRS defined above, using Ē_j as syndrome, to produce an error vector Ĕ_j;
      ii) if decoding is successful, update Y_j ← Y_j - Ĕ_j and update S accordingly.

7) Apply Steps 2-4 in Table 4 to S to produce an error array E.

8) Compute the error array

ε = (H_0^{-1}E_0 | H_1^{-1}E_1 | ... | H_{n-1}^{-1}E_{n-1}).

Output:

* Decoded array Γ = Y - ε of size m × n.

Table 4: Decoding of Interleaved GRS Codes

Input:

* Array Y of size m × n over F.
* Set K of size r of indexes of column erasures.

Steps:

1) Compute the m × (d - 1) syndrome array

S = (H_0Y_0 | H_1Y_1 | ... | H_{n-1}Y_{n-1}) H_GRS^T.

2) Compute the modified syndrome array to be the unique m × (d - 1) matrix σ that satisfies the congruence

σ(y, x) ≡ S(y, x)M(x) (mod x^{d-1}), where M(x) = ∏_{j ∈ K} (1 - α_j x).

Let μ be the rank of the m × (d - 1 - r) matrix S̃ formed by the columns of σ that are indexed by (r, d - 1).

3) Using the Feng-Tzeng algorithm, compute a polynomial λ(x) of (smallest) degree Δ ≤ (d + μ - r)/2 such that the following congruence is satisfied for some polynomial ω(y, x) with deg_x ω(y, x) < r + Δ:

σ(y, x)λ(x) ≡ ω(y, x) (mod x^{d-1}).

If no such λ(x) exists or the computed λ(x) does not divide ∏_{j ∉ K} (1 - α_j x), then declare decoding failure and stop.

4) Compute the m × n error array E as in Equation 79, where (·)' denotes formal differentiation.

Output:

* Decoded array Y - E of size m × n.

Example Methods of Use

[0107] The following discussion sets forth in detail the operation of some example methods of operation of embodiments. Figures 3, 4, 5A, and 5B illustrate example procedures used by various embodiments. Flow diagrams 300, 400, and 500 include some procedures that, in various embodiments, are carried out by some of the electronic devices illustrated in Figure 6, or by a processor under the control of computer-readable and computer-executable instructions. In this fashion, procedures described herein and in conjunction with flow diagrams 300, 400, and 500 are or may be implemented using a computer, in various embodiments. The computer-readable and computer-executable instructions can reside in any tangible computer-readable storage media, such as, for example, in data storage features such as RAM 608, ROM 610, and/or storage device 612 (all of Figure 6). The computer-readable and computer-executable instructions, which reside on tangible computer-readable storage media, are used to control or operate in conjunction with, for example, one or some combination of processor 606A, or other similar processor(s) 606B and 606C. Although specific procedures are disclosed in flow diagrams 300, 400, and 500, such procedures are examples. That is, embodiments are well suited to performing various other procedures or variations of the procedures recited in flow diagrams 300, 400, and 500. Likewise, in some embodiments, the procedures in flow diagrams 300, 400, and 500 may be performed in an order different than presented, not all of the procedures described in these flow diagrams may be performed, and/or additional operations may be added. It is further appreciated that procedures described in flow diagrams 300, 400, and 500 may be implemented in hardware, or in a combination of hardware with either or both of firmware and software (where the firmware and software are in the form of computer-readable instructions).

[0108] Figure 3 is a flow diagram 300 of an example method of encoding information using a coding scheme.

[0109] In operation 310, in one example, a horizontal code 120 (C) is selected, and in operation 320, a matrix 130 (H_in) is selected. [0110] In an example, a vertical code over F is defined as C(C, H_in), which consists of all m × n matrices

Γ = (Γ_0 | Γ_1 | ... | Γ_{n-1}) Equation 64

[0111] over F (where Γ_j stands for column j of Γ, and Γ is a transmitted array 206) such that each row in

Z = (H_0Γ_0 | H_1Γ_1 | ... | H_{n-1}Γ_{n-1}) Equation 65

[0112] is a codeword in a horizontal code 120 (C).

[0113] In an example, the code Ĉ is an m-level interleaving of C, such that an m × n matrix

Z = (Z_0 | Z_1 | ... | Z_{n-1}) Equation 66

[0114] over F is a codeword of Ĉ if each row in Z belongs to C. Each column in Z then undergoes encoding by an inner encoder of rate one, wherein the encoder of column j is given by the bijective mapping Z_j → H_j^{-1} Z_j.

[0115] In operation 310, in one example, a horizontal code 120 (C) is selected as a linear [n, k, d] code over F.

[0116] In operation 320, in one example, a matrix 130 is selected from a plurality of matrices 130. As discussed above, a matrix 130 (H_in) is an m × (mn) matrix over F that satisfies the following two properties for a positive integer δ: (a) every subset of δ - 1 columns in H_in is linearly independent (i.e., H_in is a parity-check matrix of a linear code over F of length m × n and minimum distance of at least δ); and

(b)

H_in = (H_0 | H_1 | ... | H_{n-1}) Equation 67

[0117] with H_0, H_1, ..., H_{n-1} being m × m sub-matrices of H_in, wherein each H_j is invertible over F.

[0118] In operation 330, in one example, information symbols 210 are encoded based at least upon the code C. In operation 340, each column in Z undergoes encoding by an inner encoder of rate one, wherein the encoder of column j is given by the bijective mapping Z_j → H_j^{-1} Z_j.
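As a toy illustration of operations 330-340, the sketch below uses m = 2, n = 3 over GF(2), a [3, 1, 3] repetition code as the horizontal code C, and three arbitrarily chosen invertible 2 × 2 matrices H_j; these parameter choices are illustrative assumptions, not the patent's:

```python
# hypothetical invertible 2x2 matrices H_0, H_1, H_2 over GF(2)
H = [
    [[1, 0], [0, 1]],
    [[1, 1], [0, 1]],
    [[0, 1], [1, 0]],
]

def gf2_matvec(M, v):
    return [sum(M[r][c] * v[c] for c in range(len(v))) % 2
            for r in range(len(M))]

def gf2_inv2(M):
    # inverse of an invertible 2x2 matrix over GF(2) via the adjugate
    # (the determinant of an invertible GF(2) matrix is 1)
    return [[M[1][1], M[0][1]], [M[1][0], M[0][0]]]

info = [1, 0]                                  # one information bit per row
Z_rows = [[b, b, b] for b in info]             # each row is a codeword of C
Z_cols = [[Z_rows[r][j] for r in range(2)] for j in range(3)]

# inner rate-one encoding: column j is mapped to H_j^{-1} Z_j
Gamma = [gf2_matvec(gf2_inv2(H[j]), Z_cols[j]) for j in range(3)]

# the mapping is bijective: multiplying by H_j recovers Z_j
assert [gf2_matvec(H[j], Gamma[j]) for j in range(3)] == Z_cols
```

Because each H_j is invertible, the inner encoding has rate one: it permutes column values without adding redundancy.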

[0119] Figure 4 is a flow diagram 400 of an example method of communicating information reliably.

[0120] In operation 410, in various examples, an array of encoded symbols 211 is transmitted. In an example, an array 206 is altered such that encoded symbols 211 in an array 206 become a corrupted array 200 (Y).

[0121] In operation 420, in various examples, an array 200 (Y) of possibly corrupted encoded symbols 211 is received. The array 200 may be received by a device comprising a decoder.

[0122] In an example, an m × n array 200 (Y) is received,

Y = (Y_0 | Y_1 | ... | Y_{n-1}) Equation 68

[0123] where received array 200 contains τ + θ < (d + δ - 3)/2 erroneous columns.

[0124] In operation 430, in various examples, a received array 200 of encoded symbols 211 is decoded. Using one of the examples described herein for decoding, received array 200 (Y) is decoded back into transmitted array 208 (Γ).

[0125] Figure 5 is a flow diagram 500 of encoding and decoding information symbols 210. Table 2 shows an example of operations 510-560, and Tables 3 and 4 show examples of operations 570-599.

[0126] In operation 300, in various examples, information symbols 210 are encoded using a coding scheme 100.

[0127] In operation 400, in various examples, encoded symbols 211 are transmitted, received, and decoded.

[0128] In operation 510, when included, a syndrome array (S) is computed. For example, the syndrome array may be of size m × (d - 1) and given by

S = (H_0Y_0 | H_1Y_1 | ... | H_{n-1}Y_{n-1}) H_GRS^T Equation 69
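The syndrome computation above is a matrix product of the (inner-decoded) array with the transpose of the GRS parity-check matrix. A toy numeric sketch over GF(7), with the inner matrices H_j taken to be identity matrices for simplicity; the field, code locators, and helper names are illustrative assumptions:

```python
def syndrome(rows, H_grs, p):
    # S[h][t] = (row h of Y) * (row t of H_grs)^T over GF(p)
    return [[sum(r[j] * H_grs[t][j] for j in range(len(r))) % p
             for t in range(len(H_grs))]
            for r in rows]

p = 7
# Vandermonde-style parity-check rows (alpha_j^t) at alpha = (1, 3, 2), d - 1 = 2
H_grs = [[1, 1, 1], [1, 3, 2]]

Y = [
    [1, 1, 5],   # a codeword row: both parity checks vanish
    [1, 3, 5],   # the same row plus the error (0, 2, 0)
]
S = syndrome(Y, H_grs, p)
assert S == [[0, 0], [2, 6]]   # zero syndrome only for the clean row
```

A nonzero syndrome row thus flags which interleaved rows were corrupted, which is what the later decoding steps exploit.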

[0129] In operation 520, when included, in various examples, a modified syndrome array is computed. For example, a modified syndrome array is computed to be the unique m × (d - 1) matrix σ that satisfies the congruence

σ(y, x) ≡ S(y, x) · ∏_{j ∈ K} (1 - α_j x) (mod x^{d-1}) Equation 70

[0130] Note that in various embodiments, the matrix σ is the same as the one used above.

[0131] In operation 530, when included, in various examples, if there are additional symbol erasures 240 in the received array 200, operations 531, 532, and 533 are repeated. For example, for every t ∈ (g), operations 531, 532, and 533 are performed.

[0132] In operation 531, when included, in various examples, a row in a unique row matrix is computed:

σ^{(t)}(y, x) ≡ B^{(t)}(y) · σ(y, x) · (1 - α_{j_t} x) (mod {x^{d-1}, y^e}), Equation 71

[0133] where

B^{(t)}(y) = … . Equation 72

[0134] In operation 532, when included, in various examples, a decoder is applied for the horizontal code 120 based at least on the syndrome array and a row in the matrix. For example, e^{(t)}_{j_t} (i.e., entry j_t in e^{(t)}) is computed by applying a decoder for C_GRS (horizontal code 120 utilizing a GRS code) using row e - 1 in σ^{(t)} as syndrome and assuming that the columns indexed by K ∪ {j_t} are erased. Then the error value e_{κ_t, j_t} is obtained. Equation 73

[0135] In operation 533, when included, in various examples, the received array and the syndrome array are updated. For example, the received array (Y) 200 and the syndrome array (S) are updated as in Equations 74 and 75:

Y(y, x) ← Y(y, x) - e_{κ_t, j_t} · x^{j_t} y^{κ_t} Equation 74

S(y, x) ← S(y, x) - e_{κ_t, j_t} · T_{d-1}(x; α_{j_t}) · T_m(y; β_{κ_t, j_t}). Equation 75

[0136] In operation 540, when included, in various examples, a decoder is applied for an inner array based at least on the syndrome array and a row in the matrix. For example, for every h ∈ (m), a decoder is applied for the linear code 120 (C_GRS) using row h of S as syndrome and assuming that the columns 260 indexed by K are erased. E is an m × n matrix, where the rows of E are the decoded error vectors for all h ∈ (m).

[0137] In operation 550, when included, in various examples, a first error array is computed. For example,

ε = (H_0^{-1}E_0 | H_1^{-1}E_1 | ... | H_{n-1}^{-1}E_{n-1}). Equation 76

[0138] In operation 560, when included, in various examples, a received array of information symbols 210 is decoded by applying the error array to the received array 200 of encoded symbols 211. For example, transmitted array Γ 208 may be computed as Y - ε, where Y - ε is an array of size m × n.

[0139] In operation 570, when included, in various examples, a syndrome array is computed. For example, the syndrome array (S) may be of size m × (d - 1) and given by

S = (H_0Y_0 | H_1Y_1 | ... | H_{n-1}Y_{n-1}) H_GRS^T Equation 77

[0140] In operation 571, when included, in various examples, a modified syndrome array is computed. For example, the matrix S̃ is formed by the columns of S(y, x) · ∏_{j ∈ K} (1 - α_j x) that are indexed by (ρ, d - 1). In an example, μ = rank(S̃).

[0141] In operation 572, when included, in various examples, a polynomial is computed using a Feng-Tzeng operation. In various examples, using a Feng-Tzeng process, a polynomial λ(x) is computed of degree Δ ≤ (d + μ - r)/2 such that the following congruence is satisfied for some polynomial ω(y, x) with deg_x ω(y, x) < r + Δ:

σ(y, x)λ(x) ≡ ω(y, x) (mod x^{d-1}). Equation 78

[0142] If no such λ(x) exists or the computed λ(x) does not divide ∏_{j ∉ K} (1 - α_j x), then the decoding has failed and stops. In one example, if the decoding fails, flowchart 500 proceeds to step 580. In another example, if the decoding did not fail, flowchart 500 proceeds to step 573.

[0143] In operation 573, when included, in various examples, an error array (E) is computed. In an example, an m × n error array (E) is computed by Equation 79:

E_j(y) = - α_j · ω(y, α_j^{-1}) / (λ'(α_j^{-1}) · M(α_j^{-1})) if λ(α_j^{-1}) = 0,

E_j(y) = - α_j · ω(y, α_j^{-1}) / (λ(α_j^{-1}) · M'(α_j^{-1})) if j ∈ K,

E_j(y) = 0 otherwise.

Equation 79

[0144] where (·)' denotes formal differentiation.
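Formal differentiation over a finite field is purely symbolic: the derivative of Σ c_i x^i is Σ i·c_i x^{i-1}, with the integer factor i reduced modulo the field characteristic. A minimal sketch (the helper name is illustrative):

```python
def formal_derivative(poly, p):
    # poly[i] is the coefficient of x^i; returns the coefficients of poly'
    return [(i * poly[i]) % p for i in range(1, len(poly))] or [0]

# (1 + 3x + 2x^2 + x^3)' = 3 + 4x + 3x^2 over GF(5)
assert formal_derivative([1, 3, 2, 1], 5) == [3, 4, 3]
# note that (x^5)' vanishes identically in characteristic 5
assert formal_derivative([0, 0, 0, 0, 0, 1], 5) == [0, 0, 0, 0, 0]
```

The second assertion illustrates why "formal" matters: in characteristic p the derivative of x^p is zero even though the polynomial is not constant.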

[0145] In operation 574, when included, in various examples, the received array 200 of encoded symbols 211 is decoded by applying the error array to the received array 200. In an example, an error array is computed with Equation 80: ε = (H_0^{-1}E_0 | H_1^{-1}E_1 | ... | H_{n-1}^{-1}E_{n-1}). Equation 80

[0146] In an example, a transmitted array 208 is computed by applying the error array to the received array 200:

Γ = Y - ε. Equation 81

[0147] In operation 580, when included, in various examples, the greatest common divisor is computed based on a left kernel of a second matrix. For example, as shown in step 4 of Table 3, a greatest common divisor a(y) is computed based at least on the left kernel of S̃.

[0148] In operation 581, when included, in various examples, a root sub-set and a polynomial are computed. For example, the set R and the polynomial A(y) are computed as in Equations 53 and 54. In an example, η = |R|.

[0149] In operation 582, when included, in various examples, a second matrix is computed. For example, an (m - η) × (d - 1 - ρ) second matrix (S̄) is formed based at least on the rows of A(y)S(y, x) that are indexed by (m - η).

[0150] In an example, if η = μ - 1, then operations 591, 592, and 593 are performed. In another example, if η < μ - 1, operations 595, 596, 597, 598, and 599 are performed. One example of these operations can be seen in Table 3 at steps 5 and 6.

[0151] In operation 591, when included, in various examples, the shortest linear recurrence of any nonzero column in the second matrix is computed. For example, the shortest linear recurrence B(y) is computed for any nonzero column in S̄.

[0152] In operation 592, when included, in various examples, the root sub-set is computed. For example, the set

R' = {(κ, j) : A(β_{κ,j}^{-1}) ≠ 0 and B(β_{κ,j}^{-1}) = 0} Equation 82

[0153] is computed.

[0154] In operation 593, when included, in various examples, the root sub-set is updated. In various examples, the root sub-set is not updated. For example, if |R'| = deg B(y) and |R'| ≤ m - η, then update R ← R ∪ R'.

[0155] As discussed above, in an example, if η < μ - 1, operations 595, 596, 597, 598, and 599 are performed. An example of these operations can be seen in Table 3 at step 6.

[0156] In operation 595, when included, in various examples, a modified syndrome array is computed. For example, a modified syndrome array is computed to be the unique m × (d - 1) matrix σ that satisfies the congruence:

σ(y, x) ≡ S(y, x)M(x) (mod x^{d-1}), Equation 83

[0157] where

M(x) = ∏_{j ∈ K} (1 - α_j x). Equation 84
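The erasure-locator polynomial M(x) can be built by repeated polynomial multiplication over F. A minimal sketch over a prime field with illustrative parameter choices:

```python
def poly_mul(a, b, p):
    # coefficient lists in ascending order; product over GF(p)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for k, bk in enumerate(b):
            out[i + k] = (out[i + k] + ai * bk) % p
    return out

def erasure_locator(alphas, K, p):
    # M(x) = product over j in K of (1 - alpha_j x)
    M = [1]
    for j in K:
        M = poly_mul(M, [1, (-alphas[j]) % p], p)
    return M

# over GF(7) with code locators alpha = (1, 3, 2) and erasures K = {0, 1}:
# M(x) = (1 - x)(1 - 3x) = 1 - 4x + 3x^2 = 1 + 3x + 3x^2 (mod 7)
assert erasure_locator([1, 3, 2], [0, 1], 7) == [1, 3, 3]
```

By construction, M(α_j^{-1}) = 0 exactly at the erased positions, which is the property the case split in Equation 79 relies on.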

[0158] In an example, μ is the rank of the m × (d - 1 - r) matrix S̃ formed by the columns of matrix σ that are indexed by (r, d - 1).

[0159] In operation 596, when included, in various examples, a polynomial is computed using a Feng-Tzeng operation. In various examples, using a Feng-Tzeng process, a polynomial λ(x) is computed of degree Δ ≤ (d + μ - r)/2 such that the following congruence is satisfied for some polynomial ω(y, x) with deg_x ω(y, x) < r + Δ:

σ(y, x)λ(x) ≡ ω(y, x) (mod x^{d-1}). Equation 85

[0160] If no such λ(x) exists or the computed λ(x) does not divide ∏_{j ∉ K} (1 - α_j x), then the decoding has failed and stops. In one example, if the decoding did not fail, flowchart 500 proceeds to step 597.

[0161] In operation 597, when included, in various examples, an error array is computed provided the Feng-Tzeng operation is successful. In an example, an m × n error array (Ē) is computed as in Equation 79 (applied to the σ, λ, and ω of Equation 85), with

Ē_j(y) = 0 otherwise. Equation 86

[0162] where (·)' denotes formal differentiation.

[0163] In an example, steps 598 and 599 are performed for every nonzero column of the error array (Ē). This is shown in step 6(b) of Table 3 (where operation 598 correlates with step 6(b)(i) and operation 599 correlates with step 6(b)(ii)).

[0164] In operation 598, when included, in various examples, a decoder for an inner code is applied. A decoder for a GRS code is applied with the parity-check matrix H_GRS as defined above, i.e.,

H_GRS = (v_κ · β_{κ,j}^h)_{h ∈ (m-η), κ ∈ (m)} Equation 87

[0165] where

v_κ = β_{κ,j} · A(β_{κ,j}^{-1}) if A(β_{κ,j}^{-1}) ≠ 0,

v_κ = 1 otherwise.

Equation 88

[0166] where Ē_j is a syndrome array, to produce an error vector Ĕ_j.

[0167] In operation 599, when included, in various examples, the corrupted array is updated provided applying the decoder to the inner codeword 210 is successful.

For example, provided that operation 598 is successful, an error vector Ĕ_j is produced and a received array is updated. For example, Y_j ← Y_j - Ĕ_j and

S(y, x) ← S(y, x) - Ĕ_j(y) · T_{d-1}(x; α_j).

Example Computer System

[0168] With reference now to Figure 6, all or portions of some embodiments described herein are composed of computer-readable and computer-executable instructions that reside, for example, in computer-usable/computer-readable storage media of a computer system. That is, Figure 6 illustrates one example of a type of computer (computer system 600) that can be used in accordance with or to implement various embodiments which are discussed herein. It is appreciated that computer system 600 of Figure 6 is an example and that embodiments as described herein can operate on or within a number of different computer systems including, but not limited to, general purpose networked computer systems, embedded computer systems, routers, switches, server devices, client devices, various intermediate devices/nodes, stand-alone computer systems, media centers, handheld computer systems, multi-media devices, and the like. In one embodiment, computer system 600 may be a single server. Computer system 600 of Figure 6 is well adapted to having peripheral tangible computer-readable storage media 602 such as, for example, a floppy disk, a compact disc, a digital versatile disc, other disc-based storage, a universal serial bus "thumb" drive, a removable memory card, and the like coupled thereto. The tangible computer-readable storage media is non-transitory in nature.

[0169] System 600 of Figure 6 includes an address/data bus 604 for communicating information, and a processor 606A coupled with bus 604 for processing information and instructions. As depicted in Figure 6, system 600 is also well suited to a multi-processor environment in which a plurality of processors 606A, 606B, and 606C are present. Conversely, system 600 is also well suited to having a single processor such as, for example, processor 606A. Processors 606A, 606B, and 606C may be any of various types of microprocessors. System 600 also includes data storage features such as a computer-usable volatile memory 608, e.g., random access memory (RAM), coupled with bus 604 for storing information and instructions for processors 606A, 606B, and 606C. System 600 also includes computer-usable non-volatile memory 610, e.g., read only memory (ROM), coupled with bus 604 for storing static information and instructions for processors 606A, 606B, and 606C. Also present in system 600 is a data storage unit 612 (e.g., a magnetic or optical disk and disk drive) coupled with bus 604 for storing information and instructions. System 600 may also include an alphanumeric input device 614 including alphanumeric and function keys coupled with bus 604 for communicating information and command selections to processor 606A or processors 606A, 606B, and 606C. System 600 may also include a cursor control device 616 coupled with bus 604 for communicating user input information and command selections to processor 606A or processors 606A, 606B, and 606C. In one embodiment, system 600 may also include a display device 618 coupled with bus 604 for displaying information.

[0170] Referring still to Figure 6, display device 618 of Figure 6, when included, may be a liquid crystal device, cathode ray tube, plasma display device, or other display device suitable for creating graphic images and alphanumeric characters recognizable to a user.
Cursor control device 616, when included, allows the computer user to dynamically signal the movement of a visible symbol (cursor) on a display screen of display device 618 and indicate user selections of selectable items displayed on display device 618. Many implementations of cursor control device 616 are known in the art, including a trackball, mouse, touch pad, joystick, or special keys on alphanumeric input device 614 capable of signaling movement of a given direction or manner of displacement. Alternatively, it will be appreciated that a cursor can be directed and/or activated via input from alphanumeric input device 614 using special keys and key sequence commands. System 600 is also well suited to having a cursor directed by other means such as, for example, voice commands. System 600 also includes an I/O device 620 for coupling system 600 with external entities. For example, in one embodiment, I/O device 620 is a modem for enabling wired or wireless communications between system 600 and an external network such as, but not limited to, the Internet.

[0171] Referring still to Figure 6, various other components are depicted for system 600. Specifically, when present, an operating system 622, applications 624, modules 626, and data 628 are shown as typically residing in one or some combination of computer-usable volatile memory 608 (e.g., RAM), computer-usable non-volatile memory 610 (e.g., ROM), and data storage unit 612. In some embodiments, all or portions of various embodiments described herein are stored, for example, as an application 624 and/or module 626 in memory locations within RAM 608, computer-readable storage media within data storage unit 612, peripheral computer-readable storage media 602, and/or other tangible computer-readable storage media.

[0172] Embodiments of the present technology are thus described. While the present technology has been described in particular examples, it should be appreciated that the present technology should not be construed as limited by such examples, but rather construed according to the following claims.