Title:
SYSTEM AND METHOD FOR DECODING REED-MULLER CODES
Document Type and Number:
WIPO Patent Application WO/2020/150600
Kind Code:
A1
Abstract:
Various embodiments are directed to Reed-Muller decoding systems and methods based on recursive projections and aggregations of cosets decoding, exploiting the self-similarity of RM codes, and extended with list-decoding procedures and with outer-code concatenations.

Inventors:
YE MIN (US)
ABBE EMMANUEL (CH)
Application Number:
PCT/US2020/014079
Publication Date:
July 23, 2020
Filing Date:
January 17, 2020
Assignee:
UNIV PRINCETON (US)
ECOLE POLYTECHNIQUE FED LAUSANNE EPFL (CH)
International Classes:
H04L29/06
Foreign References:
US20120185755A12012-07-19
US20160352463A12016-12-01
US8386879B22013-02-26
US20140153625A12014-06-05
US20040064779A12004-04-01
US20080209304A12008-08-28
Other References:
SALEEMI: "Coding Theory via Groebner Bases", 14 February 2012 (2012-02-14), pages 1 - 96, XP055725706, Retrieved from the Internet [retrieved on 20200501]
BURNASHEV M V ET AL.: "IEEE TRANSACTIONS ON INFORMATION THEORY", 2009, IEEE, article "Error Exponents for Two Soft-Decision Decoding Algorithms of Reed-Muller Codes"
CHEN JIN ET AL.: "Research and implementation of an improved reed decoding algorithm", PROC. 6TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING, 26 August 2002 (2002-08-26)
Attorney, Agent or Firm:
WALL, Eamon, J. (US)
Claims:
What is claimed is:

1. A method for decoding Reed-Muller (RM) encoded data, the method being implemented via code stored on a non-transient medium in a receiver and comprising:

for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words;

recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and

aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data.

2. The method of claim 1, wherein said projecting is performed in accordance with subspaces of dimension 1.

3. The method of claim 1, wherein said projecting is performed in accordance with subspaces greater than dimension 1 and fixed over a number of recursive decoding iterations.

4. The method of claim 1, wherein said projecting is performed in accordance with subspaces greater than dimension 1 and variable over a number of recursive decoding iterations.

5. The method of claim 1, wherein said aggregation is performed in accordance with a majority voting method.

6. The method of claim 1, wherein said aggregation is performed in accordance with one of a majority voting method, a multi-step power iteration method, a spectral method, and a semi-definite programming method.

7. The method of claim 1, wherein said aggregation selects only a subset of projected words according to a rule of selection based on the decoding of the projected words.

8. The method of claim 1, further comprising: decomposing RM codes in a manner that reduces at least one of the r and m parameters of the code individually using a Plotkin transformation; and

composing the decomposed RM codes with decoded projected word components.

9. The method of claim 1, wherein for each of the respective plurality of projected words, said steps of decoding and aggregating are performed in parallel.

10. An apparatus for decoding Reed-Muller (RM) encoded data, the apparatus comprising a processor configured to:

for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words;

recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and

aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data.

11. A tangible and non-transient computer readable storage medium storing instructions which, when executed by a computer, adapt the operation of the computer to provide a method of decoding Reed-Muller (RM) encoded data, the method comprising:

for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words;

recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and

aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data.

12. A computer program product wherein computer instructions, when executed by a processor in a computing device, adapt the operation of the computing device to provide a method of decoding Reed-Muller (RM) encoded data, the method comprising:

for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words; recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and

aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data.

13. A computer program product wherein computer instructions, when executed by a processor in a computing device, adapt the operation of the computing device to provide a method of list decoding encoded data by any error-correcting codes, comprising:

identifying a plurality (t) of most noisy bits in the received word for a choice of t; identifying each of a plurality of possible cases of the t most noisy bits;

for each identified case, obtaining a decoding result from a unique decoding algorithm to provide thereby a list of 2^t codewords; and

performing a maximum likelihood decoding among each of the list of 2^t codewords to provide thereby the final decoding result.

14. The method of claim 13, wherein an outer code is used for the information bits, and said list decoding utilizes only those information bits forming a codeword of the outer code.

15. The method of claim 13, wherein said list decoding uses a twin code selected from rows in a squared RM matrix having largest conditional mutual information, and further comprises successive cancellation decoding of the twin code.

16. The method of claim 15, further comprising list decoding a plurality of outer codes.

17. A method for decoding a code, comprising:

for each received word of encoded data, mapping the received word onto a plurality of projected words via a code-specific projection technique;

recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words;

aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of the original code.

18. The method of claim 17, further comprising utilizing list-decoding and code concatenation.

Description:
SYSTEM AND METHOD FOR DECODING REED-MULLER CODES

CROSS-REFERENCE TO RELATED APPLICATION

[0001] This application claims the benefit of provisional patent application Serial No. 62/793,769 filed on January 17, 2019, entitled SYSTEMS AND METHODS OF DECODING REED-MULLER CODES (Attorney Docket No. Princeton-6620 IP), which provisional patent application is incorporated herein by reference in its entirety.

FIELD OF THE DISCLOSURE

[0002] The present disclosure relates generally to systems and methods for information encoding and decoding and, more particularly, to methods for decoding Reed-Muller (RM) codes and variants thereof.

BACKGROUND

[0003] This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present invention that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present invention. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.

[0004] Reed-Muller (RM) codes are among the oldest families of error-correcting codes. As compared to polar codes, RM codes have in particular the advantage of a simple and universal code construction, though RM codes do not yet possess the generic analytical framework of polar codes (i.e., polarization theory). It has been shown that RM codes achieve capacity on the Binary Erasure Channel (BEC) at constant rate, as well as for extremal rates for the BEC and Binary Symmetric Channels (BSC), but obtaining such results for a broader class of communication channels and rates remains open.

[0005] Unfortunately, an important missing component for RM codes is that of a guaranteed efficient decoder that is competitive in the low rate/block-length regime.

SUMMARY OF THE INVENTION

[0006] Various deficiencies in the prior art are addressed below by the disclosed systems, methods and apparatus configured for decoding Reed-Muller codes (and variants thereof) over any binary input memoryless channels. Various embodiments include decoders based on recursive projections and aggregations of cosets decoding, exploiting the self-similarity of RM codes, and extended with list-decoding procedures and with outer-code concatenations. Various embodiments include RM decoders of particular utility within the context of specific regimes of interest, such as the short code length (e.g., < 1024 bits) and low code rate (e.g., < 0.5) regimes contemplated for use within the emerging 5G communications and Internet of Things (IoT) applications.

[0007] A method for decoding Reed-Muller (RM) encoded data according to one embodiment comprises: for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words; recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data.

[0008] Additional objects, advantages, and novel features of the invention will be set forth in part in the description which follows, and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.

BRIEF DESCRIPTION OF THE DRAWINGS

[0009] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the present invention and, together with a general description of the invention given above, and the detailed description of the embodiments given below, serve to explain the principles of the present invention.

[0010] FIG. 1 is a functional block diagram of a block coding system benefiting from the various embodiments.

[0011] FIG. 2 graphically depicts a Recursive Projection-Aggregation decoding algorithm for third order RM codes according to an embodiment;

[0012] FIGS. 3-4 are flow diagrams of decoding methods according to various embodiments; and

[0013] FIG. 5 depicts a high-level block diagram of a computing device suitable for use within the context of the various embodiments.

[0014] It should be understood that the appended drawings are not necessarily to scale, presenting a somewhat simplified representation of various features illustrative of the basic principles of the invention. The specific design features of the sequence of operations as disclosed herein, including, for example, specific dimensions, orientations, locations, and shapes of various illustrated components, will be determined in part by the particular intended application and use environment. Certain features of the illustrated embodiments have been enlarged or distorted relative to others to facilitate visualization and clear understanding. In particular, thin features may be thickened, for example, for clarity or illustration.

DETAILED DESCRIPTION OF THE INVENTION

[0015] The following description and drawings merely illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be only for illustrative purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions. Additionally, the term "or," as used herein, refers to a non-exclusive or, unless otherwise indicated (e.g., "or else" or "or in the alternative"). Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.

[0016] The numerous innovative teachings of the present application will be described with particular reference to the presently preferred exemplary embodiments. However, it should be understood that this class of embodiments provides only a few examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed inventions. Moreover, some statements may apply to some inventive features but not to others. Those skilled in the art and informed by the teachings herein will realize that the invention is also applicable to various other technical areas or embodiments, such as seismology and data fusion.

[0017] Various deficiencies in the prior art are addressed below by the disclosed systems, methods and apparatus configured for decoding Reed-Muller codes over any binary input memoryless channels. Various embodiments include decoders based on recursive projections and aggregations of cosets decoding, exploiting the self-similarity of RM codes, and extended with list-decoding procedures and with outer-code concatenations. Various embodiments include RM decoders of particular utility within the context of specific regimes of interest, such as the short code length (e.g., < 1024 bits) and low code rate (e.g., < 0.5) regimes contemplated for use within the emerging 5G communications and Internet of Things (IoT) applications.

[0018] FIG. 1 depicts a high level block diagram of a block coding/decoding system benefiting from the various embodiments. Specifically, FIG. 1 depicts a block diagram of a block coding/decoding system 100 including a transmit side 102 and a receive side 104.

[0019] On the transmit side 102, the system 100 includes an (n,k;d) linear block channel encoder 106 wherein a block of "k" information bits received from an information source encoder 108 is encoded to output a codeword of "n" bits in length (wherein n>k). The channel encoder 106 preferably implements an error control code. An example of the information source encoder 108 is a vocoder or data compressor. The code words output from the channel encoder 106 are then optionally rearranged by an interleaver 110. A modulator 112 then maps the rearranged code words into waveforms suited for transmission over a communications channel 114. Modulator 112 may comprise, illustratively, a known modulator having an M-ary signal constellation (e.g., quadrature amplitude modulation (QAM), phase shift keying (PSK) and the like). The communications channel 114 may comprise a wired or wireless medium which may suffer from error- and/or distortion-introducing impairments such as fading, interference, noise and the like.

[0020] On the receive side 104, the system 100 includes an appropriate demodulator 116 that demodulates the communication transmitted over the communications channel 114 and outputs estimates of the rearranged code words. The estimated code words are then reordered (i.e., de-rearranged) by a de-interleaver 118 if necessary. An (n,k;d) linear block channel decoder 120 then processes the reordered estimated code words to generate estimates of the information bits for output to an information source decoder 122. The channel decoder 120 preferably comprises a maximum likelihood decoder for the selected error control code which utilizes soft decision decoding.

[0021] The block coding/decoding system 100 of FIG. 1 benefits from the use of RM channel encoding/decoding functions such as discussed herein, and especially the recursive projection-aggregation decoding of RM codes as discussed herein. The remaining discussion will assume that RM encoded data generated by, for example, the channel encoder 106 is subsequently decoded by the channel decoder 120. As such, the functions of the channel decoder 120 and similar structures will be the focus of the following discussion.

[0022] The system 100 of FIG. 1 is illustrative of only one example of a use for the various embodiments described herein. In particular, it is noted that while FIG. 1 depicts a system wherein various embodiments of decoders and/or decoding methods are used within the context of a data transmitting/receiving system, the various embodiments also find utility within the context of data storage systems.

[0023] Generally speaking, the various embodiments find utility within the context of any system, method or component thereof wherein RM or related encoding/decoding is used. Further, the various embodiments may be used in conjunction or concatenation with other codes, such as in the form of outer-codes, inner-codes, or any other components of various coding schemes.

Recursive Projection-Aggregation (RPA) Decoding Embodiments

[0024] Various embodiments are based on the observations of the inventors that drawbacks in polar codes and, in particular, CRC-aided polar codes at short to medium block lengths arise from inherent weakness of the polar code itself. The inventors note that advantages of Reed-Muller (RM) codes over polar codes include: (1) better performance at short to medium block length in agreement with a better scaling law, and (2) a simple and universal code construction that is independent of the channel. As such, the inventors have disclosed herein various encoding and decoding methods, apparatus and computer implementations thereof that provide greatly improved error-correcting performance of RM codes over both previous decoding methods for RM codes and polar codes with the same parameters. The disclosed methods, apparatus and computer implementations thereof also allow for natural parallel implementations, in contrast to the Successive Cancellation List (SCL) decoder of polar codes.

[0025] RM codes are a family of error correcting codes that allow for data compression and transmission, such as to facilitate the transfer of information from a transmitter to a receiver over a noisy medium (e.g., as happens in cell phones, computer hard disks, deep-space communications, etc.). The various embodiments provide new decoding methods for RM codes as well as, in some embodiments, modifications of the RM codes themselves. The disclosed methods, apparatus and computer implementations thereof provide excellent performance in low code rate (< 1/2) and short code length (< 1024) regimes.

[0026] As described in detail herein, various embodiments comprise systems, methods, apparatus, mechanisms, algorithms and the like for efficiently decoding RM codes over binary input (typically) memoryless channels. Various embodiments are based on projecting the code and reducing its parameters, recursively decoding the projected codes, and aggregating the reconstructions. These exploit in particular the self-similarity structure of RM codes ensuring that quotient space codes for RM codes are again RM codes. Also provided are embodiments further providing list-decoding and code concatenation extensions of the various embodiments.

[0027] It is noted that the RPA algorithms/decoders described herein, and variations thereof, with list decoding are able to achieve the optimal performance of maximum likelihood decoding in some of the regimes at low block-length and rate. Further, the RPA algorithms/decoders and variants thereof without list decoding provide improved performance when compared to polar code algorithms/decoders plus CRC, and without requiring the addition of a list decoding procedure. In this manner, the various embodiments provide near optimal performance for practical regimes of parameters, with reduced computation and power consumption due, for example, to the avoidance of list decoding procedures.

[0028] The discussion of the various embodiments will be provided in accordance with the following notation and background on RM codes. The symbol ⊕ is used herein to denote sums over F_2. Consider the polynomial ring F_2[Z_1, Z_2, ..., Z_m] of m variables. Since Z^2 = Z in F_2, the following set of 2^m monomials forms a basis of F_2[Z_1, Z_2, ..., Z_m]:

{ ∏_{i ∈ A} Z_i : A ⊆ [m] },

with the convention that the empty product (A = ∅) is the constant monomial 1.

[0029] The next step is to associate every subset A ⊆ [m] with a row vector v_m(A) of length 2^m, whose components are indexed by a binary vector z = (z_1, z_2, ..., z_m) ∈ F_2^m. The vector v_m(A) is defined as follows:

v_m(A, z) := ∏_{i ∈ A} z_i,

where v_m(A, z) is the component of v_m(A) indexed by z. That is, v_m(A, z) is the evaluation of the monomial ∏_{i ∈ A} Z_i at the point z. For 0 ≤ r ≤ m, the set of vectors

{ v_m(A) : A ⊆ [m], |A| ≤ r }

forms a basis of the r-th order Reed-Muller code RM(m, r) of length n := 2^m and dimension ∑_{i=0}^{r} C(m, i).

[0030] Definition 1. The r-th order Reed-Muller code RM(m, r) is defined as the set:

RM(m, r) := { ∑_{A ⊆ [m], |A| ≤ r} u(A) v_m(A) : u(A) ∈ F_2 }.   (eq. 1)

[0031] In other words, each vector v_m(A) consists of all the evaluations of the monomial ∏_{i ∈ A} Z_i at all the points in the vector space E := F_2^m, and each codeword c ∈ RM(m, r) corresponds to an m-variate polynomial with degree at most r. The coordinates of the codeword c are also indexed by the binary vectors z ∈ E, such that c = (c(z), z ∈ E). Let B be an s-dimensional subspace of E, where 1 ≤ s ≤ m. The quotient space E/B consists of all the cosets of B in E, where every coset T has the form T = z + B for some z ∈ E. For a binary vector y = (y(z), z ∈ E), we define its projection onto the cosets of B as:

y_{/B} := ( y_{/B}(T), T ∈ E/B ) with y_{/B}(T) := ⊕_{z ∈ T} y(z),   (eq. 2)

which is the binary vector obtained by summing up all the coordinates of y in each coset T ∈ E/B. Here the sum is over F_2 and the dimension of y_{/B} is n/|B|.

[0032] By way of example, if c is a codeword of RM(m, r), then c_{/B} is a codeword of RM(m−s, r−s), where s is the dimension of B. Various embodiments address the case s = 1; namely, one-dimensional subspaces. More precisely, let y = (y(z), z ∈ E) be the output vector of transmitting a codeword of RM(m, r) over some BSC channel.
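As a concrete illustration of this projection (not part of the patent text), identifying each z ∈ E with the integer whose binary digits are its coordinates, the projection onto the cosets of a one-dimensional subspace B = {0, b} can be sketched in Python; labeling each coset by its smaller representative is one convenient choice, assumed here for illustration:

def project(y, b):
    # y_{/B}: for each coset {z, z xor b}, output y(z) xor y(z xor b);
    # cosets are labeled by their smaller representative z < z xor b.
    return [y[z] ^ y[z ^ b] for z in range(len(y)) if z < z ^ b]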

[0033] FIG. 2 graphically depicts a Recursive Projection-Aggregation decoding algorithm for third order RM codes according to an embodiment. Specifically, FIG. 2 and Algorithm 1 (below) together depict an exemplary decoding algorithm.

[0034] The depicted decoding algorithm is defined in a recursive manner: for every one-dimensional subspace B, the projection y_{/B} is obtained. Then the decoding algorithm for RM(m−1, r−1) is used to decode y_{/B}, where the decoding result is denoted as ŷ_{/B}. Since every one-dimensional subspace of E consists of 0 and a non-zero element, there are n−1 such subspaces in total. After the projection and recursive decoding steps, n−1 decoding results are obtained as ŷ_{/B_1}, ..., ŷ_{/B_{n−1}}. A majority voting scheme is then used to aggregate these decoding results together with y to obtain a new estimate ŷ of the original codeword. Then y is updated as y ← ŷ, and the entire procedure is performed again for up to N_max rounds. It is noted that if ŷ = y (see line 6), then y is a fixed (stable) point of the algorithm and will remain unchanged in subsequent iterations. In this case, the iteration is exited early (see lines 6-8). In various embodiments, a maximal number of iterations is set as N_max = ⌈m/2⌉ to prevent the program from running into an infinite loop, and typically ⌈m/2⌉ iterations are enough for the algorithm to converge to a stable y.

[0035] This high-level description is summarized in FIG. 2 and Algorithm 1 (below). While this description focuses on the decoding algorithm over the BSC, other embodiments discussed below extend the algorithm based on log-likelihood ratios (LLRs) that allow the decoding of RM codes over any binary-input memoryless channel, including the AWGN channel.

Algorithm 1 Pseudo-Code: The RPA RM Decoding Function For BSC
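The patent's own pseudo-code for Algorithm 1 is not reproduced here; for concreteness, the following is a minimal Python sketch of the RPA loop described in paragraphs [0033]-[0034]. The helper names (fht, decode_rm1, rpa_bsc), the NumPy dependency, and the integer labeling of E = F_2^m are illustrative assumptions, not the patent's implementation.

import numpy as np

def fht(a):
    # In-place Fast Hadamard (Walsh) Transform, O(n log n) butterflies.
    h, n = 1, len(a)
    while h < n:
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y
        h *= 2
    return a

def decode_rm1(y, m):
    # ML decoding of the first-order code RM(m, 1) via the FHT: the
    # transform of the +-1 word peaks at the best degree-1 polynomial.
    t = fht([1 - 2 * int(b) for b in y])
    beta = int(np.argmax(np.abs(t)))
    b0 = 0 if t[beta] > 0 else 1   # sign of the peak gives the constant term
    n = 1 << m
    return np.array([(b0 + bin(beta & z).count("1")) % 2 for z in range(n)],
                    dtype=np.uint8)

def rpa_bsc(y, m, r):
    # Recursive Projection-Aggregation decoding of RM(m, r) over a BSC.
    n = 1 << m
    if r == 1:
        return decode_rm1(y, m)
    y = np.array(y, dtype=np.uint8)
    for _ in range((m + 1) // 2):                 # N_max = ceil(m/2)
        change_vote = np.zeros(n, dtype=int)
        for b in range(1, n):                     # each 1-D subspace B = {0, b}
            reps = [z for z in range(n) if z < z ^ b]   # coset representatives
            proj = np.array([y[z] ^ y[z ^ b] for z in reps], dtype=np.uint8)
            dec = rpa_bsc(proj, m - 1, r - 1)     # decode RM(m-1, r-1)
            for k, z in enumerate(reps):
                if dec[k] != y[z] ^ y[z ^ b]:     # coset estimate disagrees:
                    change_vote[z] += 1           # vote to flip both members
                    change_vote[z ^ b] += 1
        y_new = np.where(change_vote > (n - 1) / 2, 1 - y, y).astype(np.uint8)
        if np.array_equal(y_new, y):              # fixed (stable) point reached
            return y_new
        y = y_new
    return y

For example, rpa_bsc(received_bits, m=5, r=2) decodes a length-32 second-order word; the recursion bottoms out in the FHT decoder exactly as the text describes.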

List Decoding Procedure

[0036] Various embodiments utilize a list decoding procedure to further decrease the decoding error probability. For example, assume a unique decoding algorithm "decodeC" for a codeword of a code C received via a binary-input memoryless channel W. Without loss of generality, assume that the algorithm "decodeC" is based on the LLR vector of the channel output, where the LLR of an output symbol x ∈ W is defined as:

LLR(x) := ln( W(x | 0) / W(x | 1) ).   (eq. 3)

Clearly, if |LLR(x)| is small, then x is a noisy symbol, and if |LLR(x)| is large, then x is relatively noiseless.

[0037] The list decoding procedure works as follows. Suppose that y = (y_1, y_2, ..., y_n) is the output vector when a codeword of C is sent over the channel W. A first step is to sort the components |LLR(y_i)| from small to large. Without loss of generality, assume that |LLR(y_1)|, |LLR(y_2)| and |LLR(y_3)| are the three smallest components in the LLR vector, meaning that y_1, y_2 and y_3 are the three most noisy symbols in the channel outputs (taking t = 3 arbitrarily). Next, enumerate all the possible cases of the first three bits of the codeword c = (c_1, c_2, ..., c_n): the first three bits (c_1, c_2, c_3) can be any vector in F_2^3, so there are 8 cases in total, and for each case the values of LLR(y_1), LLR(y_2), LLR(y_3) are changed according to the values of c_1, c_2, c_3. More precisely, set LLR(y_i) = (−1)^{c_i} L_max for i = 1, 2, 3, where L_max is some large real number (in practice, various embodiments may simply choose a sufficiently large value of L_max). For each of these 8 cases, "decodeC" is used to obtain a decoded codeword; the decoded codewords are denoted as ĉ^(1), ..., ĉ^(8). Finally, various embodiments calculate the posterior probability of each candidate and choose the largest one as the final decoding result; namely, various embodiments perform a maximum likelihood decoding among the 8 candidates in the list.
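For concreteness, a hedged Python sketch of this list decoding wrapper follows; decode_c stands in for the unique decoder "decodeC" (the name and the choice of L_max are illustrative, and t = 3 reproduces the 8-case example above).

import itertools
import numpy as np

def list_decode(llr, decode_c, t=3, l_max=1e4):
    # Enumerate the 2^t patterns of the t noisiest bits, pin them in the LLR
    # vector, uniquely decode each case, and keep the ML candidate.
    llr = np.asarray(llr, dtype=float)
    noisy = np.argsort(np.abs(llr))[:t]              # t most noisy positions
    best, best_score = None, -np.inf
    for bits in itertools.product((0, 1), repeat=t):
        trial = llr.copy()
        trial[noisy] = [(-1) ** b * l_max for b in bits]   # pin guessed bits
        cand = np.asarray(decode_c(trial))           # unique decoding of case
        score = float(np.sum((1 - 2 * cand) * llr))  # sum_z (-1)^c(z) L(z)
        if score > best_score:
            best, best_score = cand, score
    return best

The score sum_z (−1)^{c(z)} L(z) is, up to an additive constant, the log-likelihood of the candidate, so the arg-max implements the maximum likelihood selection among the 2^t candidates.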

Binary Symmetric Channels (BSC) Decoding Procedure

[0038] This section begins with the definition of the quotient code, and then illustrates how the quotient code of an RM code is also an RM code.

[0039] Definition 2. Let 0 ≤ s ≤ r ≤ m be integers, and let B be an s-dimensional subspace of E. The quotient code is defined as:

Q(m, r, B) := { c_{/B} : c ∈ RM(m, r) }.

[0040] Lemma 1. Let 0 ≤ s ≤ r ≤ m be integers, and let B be an s-dimensional subspace of E. The code Q(m, r, B) is the Reed-Muller code RM(m−s, r−s).

[0041] It is noted that the various embodiments make use of the case s = 1 in Lemma 1, in addition to using all subspaces and adding an iterative process. Since the RPA_RM decoding function is presented above, the following discussion will be directed to the Aggregation function only, as depicted below with respect to Algorithm 2. It is noted that both ŷ_{/B_i} and y_{/B_i} are indexed by the cosets T ∈ E/B_i, and that [z + B_i] is used to denote the coset containing z (see line 3 of Algorithm 2).

Algorithm 2 Pseudo-Code: The Aggregation Function For BSC

[0042] From line 3, it may be seen that the maximal possible value of changevote(z) for each z ∈ E is n−1. Therefore the condition on line 4, that changevote(z) exceed (n−1)/2, can be viewed as a majority vote. As discussed below, this algorithm may be viewed as one step of a power iteration method to find the eigenvector of a matrix built from the quotient code decoding.

[0043] It is noted that Algorithms 1 and 2 are depicted as pseudo codes in a mathematical fashion for the ease of understanding. These pseudo-codes may be implemented as hardware or a combination of hardware and software using almost any programming language as known by those skilled in the art.

[0044] Proposition 1. The complexity of Algorithm 1 is O(n^r log n) in sequential implementation and O(n^2) in parallel implementation with O(n^r) processors.

[0045] Proposition 2. Whether Algorithm 1 outputs the correct codeword or not is independent of the transmitted codeword and only depends on the error pattern imposed by the BSC channel. Specifically, let c ∈ RM(m, r) be a codeword of the RM code, and let e = (e(z), z ∈ E) be the error vector imposed on c by the BSC channel. The output vector of the BSC channel is y = c + e. Denote the decoding result as ĉ = RPA_RM(y, m, r, N_max). Then the indicator function of decoding error 1[ĉ ≠ c] is independent of the choice of c and only depends on the error vector e. It is noted that this proposition is useful for simulations in that a transmission of an all-zero codeword over the BSC channel may be used to measure the decoding error probability.
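A small Monte-Carlo sketch exploiting Proposition 2 follows (it reuses the illustrative rpa_bsc function sketched after Algorithm 1): since the failure event depends only on the error pattern, transmitting the all-zero codeword suffices to estimate the block error rate.

import numpy as np

def estimate_bler(m, r, p, trials=200, seed=0):
    # Block-error-rate estimate for RM(m, r) over BSC(p), using the
    # all-zero codeword per Proposition 2: received word y = 0 + e.
    rng = np.random.default_rng(seed)
    n = 1 << m
    errors = 0
    for _ in range(trials):
        e = (rng.random(n) < p).astype(np.uint8)   # BSC error pattern
        errors += int(np.any(rpa_bsc(e, m, r) != 0))
    return errors / trials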

[0046] Algorithm 2 may be viewed as a one-step power iteration of a spectral algorithm. More precisely, it is observed that the recursive decoding results ŷ_{/B_1}, ..., ŷ_{/B_{n−1}} contain the estimates of the sums c(z) ⊕ c(z′) for all pairs of distinct z, z′ ∈ E, where c is the transmitted (true) codeword. The estimate of c(z) ⊕ c(z′) is denoted as ĉ(z, z′).

[0047] As an example, to find a vector u = (u(z), z ∈ E) that agrees with as many estimates of these sums as possible is to find a vector u maximizing the following:

∑_{z ≠ z′} 1[ u(z) ⊕ u(z′) = ĉ(z, z′) ].

[0049] Therefore:

1[ u(z) ⊕ u(z′) = ĉ(z, z′) ] = ( 1 + (−1)^{ĉ(z, z′)} (−1)^{u(z)} (−1)^{u(z′)} ) / 2.

[0050] Thus the task is to find:

u* = argmax_{u ∈ {0,1}^n} ∑_{z ≠ z′} (−1)^{ĉ(z, z′)} (−1)^{u(z)} (−1)^{u(z′)}.   (eq. 5)

[0051] Given a vector u ∈ {0,1}^n, we define another vector ũ ∈ {±1}^n by setting ũ(z) := (−1)^{u(z)} for all z ∈ E. In order to find the maximizing vector in (eq. 5), it suffices to find:

ũ* = argmax_{ũ ∈ {±1}^n} ∑_{z ≠ z′} (−1)^{ĉ(z, z′)} ũ(z) ũ(z′).   (eq. 6)

[0052] Now an n × n matrix A is built from the estimates ĉ(z, z′) as follows: the rows and columns of A are indexed by z ∈ E, and the following entry is set:

A(z, z′) := (−1)^{ĉ(z, z′)} for z ≠ z′.

[0053] That is, for ĉ(z, z′) = 0 there is set A(z, z′) = 1, for ĉ(z, z′) = 1 there is set A(z, z′) = −1, and A(z, z) := 0 for all z ∈ E. Under this definition, the optimization problem (eq. 6) becomes:

ũ* = argmax_{ũ ∈ {±1}^n} ũᵀ A ũ.

[0054] It is well known that this combinatorial optimization problem is NP-hard. In practice, a reasonable approach is to use the following spectral relaxation to obtain an approximate solution:

v* = argmax_{v ∈ R^n, ||v||² = n} vᵀ A v.

[0055] A solution to this relaxed optimization problem is the eigenvector corresponding to the largest eigenvalue of A. One way to find this eigenvector is to use the power iteration method: that is, pick some vector v (e.g., at random); then Aᵗv converges to this eigenvector when t is large enough. After rescaling Aᵗv to make ||Aᵗv||² = n, the maximizing vector v* in the relaxed optimization problem is obtained. In order to obtain the solution to the original optimization problem in (eq. 5), the embodiments only need to look at the sign of each coordinate of Aᵗv: if (Aᵗv)(z) > 0, then set u(z) = 0, and if (Aᵗv)(z) < 0, then set u(z) = 1. In this manner, the vector u that serves as an approximate solution to (eq. 5) is obtained. To summarize, an approximate solution to (eq. 5) is obtained by sign-rounding Aᵗv, where v is some random vector and t is some large enough integer.

[0056] Denote the output vector of Algorithm 2 as ŷ and define another vector ỹ as ỹ(z) := (−1)^{ŷ(z)} for all z ∈ E. For the original received vector y, also defined is a vector u as u(z) := (−1)^{y(z)} for all z ∈ E. An important observation in this section is that:

ỹ(z) = sign( (A u)(z) ) for all z ∈ E.   (eq. 7)

[0057] That is, the output of Algorithm 2 is in fact the same as a one-step power iteration of the spectral algorithm, with the (±1 version of the) original received vector u playing the role of the vector v above. It is also easy to see why (eq. 7) holds: according to (eq. 7), ỹ(z) = 1 if (Au)(z) > 0 and ỹ(z) = −1 otherwise. This is equivalent to saying that ŷ(z) = 0 if (Au)(z) > 0 and ŷ(z) = 1 otherwise, and the vector ŷ given by this rule is exactly the same as the output vector of Algorithm 2 (the majority vote).
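A numerical sketch of this observation follows (illustrative names; estimates is an n×n table of the pairwise estimates ĉ(z, z′) taken from the recursive decodings): one aggregation step is exactly sign-rounding of a single power-iteration step A u.

import numpy as np

def one_step_power_aggregation(y, estimates):
    # y: received word in {0,1}^n; estimates[z][z2] in {0,1} is the
    # estimate of c(z) xor c(z2) recovered from the coset decodings.
    n = len(y)
    A = np.zeros((n, n))
    for z in range(n):
        for z2 in range(n):
            if z != z2:
                A[z, z2] = (-1.0) ** estimates[z][z2]
    u = (-1.0) ** np.asarray(y)     # +-1 version of the received word
    v = A @ u                        # one power-iteration step
    return np.where(v > 0, 0, 1)     # sign rounding back to a binary word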

General Binary-Input Memoryless Channel Decoding Procedure

[0058] The decoding algorithm discussed above is directed to the BSC, whereas this section presents an extension of Algorithm 1 that is suitable for use in decoding any binary-input memoryless channel and is based on LLRs (see (eq. 3) above). Similarly to Algorithm 1, the general algorithm is also defined recursively, in that it first assumes knowledge of how to decode the (r−1)-th order Reed-Muller code, after which that knowledge may be used to decode the r-th order Reed-Muller code. It is noted that a soft-decision FHT decoder may be used to allow for the efficient decoding of the first order RM code for general binary-input channels. The soft-decision FHT decoder may be based on LLRs, and the complexity is also O(n log n), the same as for the hard-decision FHT decoder.

[0059] An FHT decoder for first order RM codes may be used. Specifically, various embodiments use c = (c(z), z ∈ E) to denote the transmitted (true) codeword and y = (y(z), z ∈ E) to denote the corresponding channel output. Given the output vector y, the ML decoder for first order RM codes aims to find c ∈ RM(m, 1) to maximize

W(y | c) = ∏_{z ∈ E} W( y(z) | c(z) ).

This is equivalent to maximizing the following quantity:

∑_{z ∈ E} ln W( y(z) | c(z) ),

which is further equivalent to maximizing:

∑_{z ∈ E} ( ln W( y(z) | c(z) ) − (1/2)( ln W( y(z) | 0 ) + ln W( y(z) | 1 ) ) ).   (eq. 8)

It is noted that the codeword c is a binary vector. Therefore:

ln W( y(z) | c(z) ) − (1/2)( ln W( y(z) | 0 ) + ln W( y(z) | 1 ) ) = (1/2) (−1)^{c(z)} L(z),

where the shorthand notation L(z) := LLR(y(z)) is used, and the formula in (eq. 8) may be written as:

(1/2) ∑_{z ∈ E} (−1)^{c(z)} L(z),   (eq. 9)

such that the goal is to find c ∈ RM(m, 1) to maximize this quantity.

[0060] Every c ∈ RM(m, 1) corresponds to a polynomial in F_2[Z_1, ..., Z_m] of degree at most one, so every codeword c may be expressed as a polynomial β_0 + β_1 Z_1 + ... + β_m Z_m. In this manner, c(z) = β_0 ⊕ β_1 z_1 ⊕ ... ⊕ β_m z_m, where z_1, z_2, ..., z_m are the coordinates of the vector z. The task then becomes finding (β_0, β_1, ..., β_m) ∈ F_2^{m+1} to maximize the following:

∑_{z ∈ E} (−1)^{β_0 ⊕ β_1 z_1 ⊕ ... ⊕ β_m z_m} L(z).   (eq. 10)

[0061] For a binary vector β = (β_1, ..., β_m) ∈ F_2^m, the following is defined:

L̂(β) := ∑_{z ∈ E} (−1)^{β_1 z_1 ⊕ ... ⊕ β_m z_m} L(z).

[0062] To find the maximizer of (eq. 10), a calculation may be made of L̂(β) for all β ∈ F_2^m. Since the vector (L̂(β), β ∈ F_2^m) is exactly the Hadamard Transform of the vector (L(z), z ∈ E), this calculation may be made using the Fast Hadamard Transform with complexity O(n log n). Once the values of L̂(β) for all β ∈ F_2^m are known, a value β* that maximizes |L̂(β)| may be found. If L̂(β*) > 0, then the decoder outputs the codeword c corresponding to (β_0 = 0, β*). Otherwise, the decoder outputs the codeword c corresponding to (β_0 = 1, β*). Thus, various embodiments decode first order RM

codes for general channels in this manner. The next problem is how to extend (eq. 2) to the general setting. The purpose of (eq. 2) is mapping the output symbols (y(z), z ∈ T) whose indices are in the same coset T ∈ E/B to one symbol. This reduces the r-th order RM code to an (r−1)-th order RM code. For the BSC, this mapping is simply the addition in F_2. The sum y_{/B}(T) may be interpreted as an estimate of c_{/B}(T), where c is the transmitted (true) codeword. In other words, y_{/B}(T) is an estimate of ⊕_{z ∈ T} C(z) given (Y(z), z ∈ T), where Y is the channel output random vector.

[0063] For general channels, a desired estimate of c_{/B}(T) is based on the LLRs (L(z), z ∈ T). More precisely, given (y(z), z ∈ T), or equivalently given (L(z), z ∈ T), it is desired to calculate the following LLR:

L_{/B}(T) := ln [ P( Y(z) = y(z), z ∈ T | ⊕_{z ∈ T} C(z) = 0 ) / P( Y(z) = y(z), z ∈ T | ⊕_{z ∈ T} C(z) = 1 ) ],

where Y is the channel output random vector.

[0064] Lemma 2. Suppose that r ≥ 1. Let C be a random codeword chosen uniformly from RM(m, r), and let z and z′ be two distinct vectors in E. Then the two coordinates (C(z), C(z′)) of the random codeword C have i.i.d. Bernoulli-1/2 distribution.

[0065] By way of proof of Lemma 2, first define the following four sets:

A(a, b) := { c ∈ RM(m, r) : c(z) = a, c(z′) = b } for (a, b) ∈ F_2^2.

[0066] To prove this lemma, it is only necessary to show that |A(0,0)| = |A(0,1)| = |A(1,0)| = |A(1,1)|. Since the RM code is linear and the all-one vector is a codeword of RM codes, the marginal distribution of the coordinate C(z) is Bernoulli-1/2 for every z ∈ E. Thus the following is provided:

|A(0,0)| + |A(0,1)| = |A(1,0)| + |A(1,1)| and |A(0,0)| + |A(1,0)| = |A(0,1)| + |A(1,1)|.   (eq. 11)

[0067] Taking z, z′ ∈ E such that z ≠ z′, there exists i ∈ [m] such that z_i ≠ z′_i. Since it is assumed that r ≥ 1, RM(m, r) contains the evaluation vector of the degree-1 monomial Z_i; this evaluation vector is denoted as v, and it is known that v(z) ≠ v(z′). Without loss of generality, it is assumed that v(z) = 0 and v(z′) = 1. Then (given that for a set A and a vector v there is defined a set A + v := { a + v : a ∈ A }), it may be stated that A(0,0) + v ⊆ A(0,1), such that |A(0,0)| ≤ |A(0,1)|. Conversely, it may also be stated that A(0,1) + v ⊆ A(0,0), such that |A(0,1)| ≤ |A(0,0)|. Therefore, |A(0,0)| = |A(0,1)|.

Similarly, it can be shown that |A(1,1)| = |A(1,0)|. Taking these into (eq. 11), the following is obtained: |A(0,0)| = |A(0,1)| = |A(1,0)| = |A(1,1)|, which completes the proof of Lemma 2.

[0068] L_{/B}(T) may now be calculated using the following model: assume that S_1 and S_2 are i.i.d. Bernoulli-1/2 random variables transmitted over two independent copies of the channel W. The corresponding channel output random variables are denoted as X_1 and X_2, respectively. Then for x_1, x_2 ∈ W:

ln [ P(X_1 = x_1, X_2 = x_2 | S_1 ⊕ S_2 = 0) / P(X_1 = x_1, X_2 = x_2 | S_1 ⊕ S_2 = 1) ] = ln [ (1 + exp(LLR(x_1) + LLR(x_2))) / (exp(LLR(x_1)) + exp(LLR(x_2))) ].

[0069] Lemma 2 above allows the replacement of x_1, x_2 with (y(z), z ∈ T), such that the following is obtained for a coset T = {z, z′} of a one-dimensional subspace:

L_{/B}(T) = ln [ (1 + exp(L(z) + L(z′))) / (exp(L(z)) + exp(L(z′))) ].   (eq. 12)
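A two-line sketch of (eq. 12) and its numerically friendlier "tanh rule" form follows; the equivalence of the two expressions is a standard identity, not a statement from the patent:

import numpy as np

def llr_xor(l1, l2):
    # (eq. 12): LLR of y(z) xor y(z') for a coset {z, z'} of a 1-D subspace.
    return float(np.log((1.0 + np.exp(l1 + l2)) / (np.exp(l1) + np.exp(l2))))

def llr_xor_tanh(l1, l2):
    # Algebraically equivalent form that avoids exp() overflow for large LLRs.
    return float(2.0 * np.arctanh(np.tanh(l1 / 2.0) * np.tanh(l2 / 2.0)))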

[0070] The following is a decoding algorithm for general binary-input channels according to an embodiment. Specifically, in Algorithms 3 and 4 (below), the decoding result of the (r−1)-th order RM code is denoted as ŷ_{/B} (e.g., see line 7 of Algorithm 3), where both ŷ_{/B} and L_{/B} are indexed by the cosets T ∈ E/B, and [z + B] is used to denote the coset containing z (e.g., see line 3 of Algorithm 4).

[0071] Algorithm 3 is similar to Algorithm 1: from line 8 to line 10 there is a comparison of cumuLLR(z) with the original L(z). If the relative difference between these two is below the threshold θ for every z ∈ E, then the values of L(z), z ∈ E change very little in the current iteration, and the algorithm has reached a "stable" state, such that it exits the for-loop on line 2. Various embodiments use θ = 0.05 and a maximal number of iterations N_max = ⌈m/2⌉, which is the same as in Algorithm 1. It is noted that greater or lesser values may be selected for use in different embodiments. The inventors note that the decoding error probability is non-increasing when decreasing the value of θ, while the running time of the algorithm increases when decreasing θ. Simulations show that the decoding error probability remains the same when θ is decreased beyond 0.05; therefore θ = 0.05 is a reasonable choice in many embodiments, since a smaller θ will only increase the running time while not appreciably decreasing the decoding error. On line 13, the algorithm simply produces the decoding result according to the LLR at each coordinate.

[0072] With respect to Algorithm 4, at line 3 the algorithm sets cumuLLR(z) = ∑_{z′ ≠ z} α(z, z′) L(z′), where the coefficients α(z, z′) can only be 1 or −1. More precisely, α(z, z′) is 1 if the decoding result of the corresponding (r−1)-th order RM code at the coset {z, z′} is 0, and α(z, z′) is −1 if the decoding result at the coset {z, z′} is 1. The decoding result at the coset {z, z′} is an estimate of c(z) ⊕ c(z′). If c(z) ⊕ c(z′) is more likely to be 0, then the signs of L(z) and L(z′) should be the same; here cumuLLR(z) serves as an estimate of L(z) based on all the other L(z′), z′ ≠ z, so the coefficient α(z, z′) is assigned to be 1. Otherwise, if c(z) ⊕ c(z′) is more likely to be 1, then the signs of L(z) and L(z′) should be different, so the coefficient α(z, z′) is assigned to be −1.
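The signed accumulation of line 3 can be sketched as follows; decisions is an illustrative n×n table of the recursive coset decisions, and the normalization by n−1 (keeping cumuLLR on the scale of a single LLR, as the relative-difference test in [0071] suggests) is an assumption of this sketch rather than a detail confirmed by the text.

import numpy as np

def aggregate_llr(llr, decisions):
    # cumuLLR(z) = sum over z2 != z of alpha(z, z2) * L(z2), with
    # alpha = +1 if the decoded coset bit c(z) xor c(z2) is 0, else -1.
    n = len(llr)
    cumu = np.zeros(n)
    for z in range(n):
        for z2 in range(n):
            if z2 != z:
                alpha = 1.0 if decisions[z][z2] == 0 else -1.0
                cumu[z] += alpha * llr[z2]
    return cumu / (n - 1)   # normalization is this sketch's assumption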

[0073] It is noted that Algorithms 3 and 4 are depicted as pseudo codes in a mathematical fashion for the ease of understanding. These pseudo-codes may be implemented as hardware or a combination of hardware and software using almost any programming language as known by those skilled in the art.

[0074] Proposition 3. The complexity of Algorithm 3 is O(n^r log n) in sequential implementation and O(n^2) in parallel implementation with O(n^r) processors. It is also noted that the decoding error probability of Algorithm 3 is independent of the transmitted codeword for binary-input memoryless symmetric (BMS) channels. In various embodiments, the complexity is reduced by one or more of: jumping layers, taking larger subspaces, taking only a subset of projections, or combining the projections with other decompositions such as a Plotkin decomposition.

Algorithm 3 Pseudo-Code: The RPA RM Decoding Function For General Binary-Input Memoryless Channels

Algorithm 4 Pseudo-Code: The Aggregation Function For General Binary-Input Memoryless Channels

[0075] Proposition 4. The decoding error probability of Algorithm 3 is independent of the transmitted codeword for binary-input memoryless symmetric (BMS) channels.

[0076] Definition 3. A memoryless channel W is a BMS channel if there is a permutation π of the output alphabet W such that π^{-1} = π and W(x | 1) = W(π(x) | 0) for all x ∈ W.

[0077] Proposition 4. Let W be a BMS channel. Let c_1 and c_2 be two codewords of RM(m, r). Let Y_1 and Y_2 be the (random) channel outputs of transmitting c_1 and c_2 over n = 2^m independent copies of W, respectively. Let L^(1) and L^(2) be the LLR vectors corresponding to Y_1 and Y_2, respectively (it is noted that Y_1 and Y_2 are random vectors, that the randomness comes from the channel noise and, as a result, L^(1) and L^(2) are also random vectors). Then for any c_1, c_2 ∈ RM(m, r), it can be stated that:

P( RPA_RM(L^(1)) ≠ c_1 ) = P( RPA_RM(L^(2)) ≠ c_2 ).

[0078] It is noted that this proposition is useful for simulations in that a transmission of an all-zero codeword over the BMS channel W may be used to measure the decoding error probability.

[0079] The above description presents a list decoding version of the RPA_RM function. Since the main concept has been described above with respect to the other propositions, what is provided herein below is directed toward the pseudo-code of the list decoding version. It is noted that the purpose of line 8 of Algorithm 5 is to make sure that the output is a codeword of the RM code, which is not always true for the decoding result of the RPA_RM function.

[0080] Finally, the following proposition on the memory requirement for sequential implementation of the RPA decoder is presented. A remarkable aspect here is that the memory requirement for the list decoding version of the RPA algorithm is 5n, which is independent of the list size, in contrast to, for example, an SCL decoder of polar codes.

[0081] Proposition 5. The memory needed for sequential implementation of the RPA decoder without list decoding is no more than 4n, and the memory needed for sequential implementation of the RPA decoder with list decoding is no more than 5n, where n is the code length. Note that the memory requirement for the list decoding version does not depend on the list size.

[0082] As noted above, Algorithm 3 is written in compact fashion for the ease of understanding, but it is not space-efficient in practical implementation. A more space-efficient algorithm is provided below as Algorithm 9, upon which the analysis of space complexity provided herein is generally based.

[0083] The most important difference between Algorithm 3 and Algorithm 9 is that Algorithm 3 contemplates finishing all the recursive decoding first and then performing the aggregation step. By contrast, in Algorithm 9 the recursive decoding step and the aggregation step are interleaved, such that a significant amount of memory is conserved as compared to Algorithm 3.

[0084] A proof may start with the RPA decoder without list decoding, proceeding by induction on r, the order of the RM code. For the base case of r = 1, the claim clearly holds. Now assume that the claim holds for all RM codes with order < r, and prove it for order r. In Algorithm 9, n floating-point positions are needed to store the LLR vector and another n floating-point positions to store the cumuLLR vector. Then the codewords are projected onto the cosets of each one-dimensional subspace sequentially. For each projected codeword, there is a need to decode an RM code with length n/2 and order r−1. By the induction hypothesis, this takes 4·(n/2) = 2n floating-point positions. Therefore, in total, n + n + 2n = 4n floating-point positions are needed. This establishes the inductive step and completes the proof for the non-list-decoding version.

[0085] The memory requirement for the list decoding version follows directly from that of the non-list-decoding version described above: since list decoding is performed sequentially (i.e., only one list entry is decoded at a time), the only extra memory needed in the list decoding version is the n floating-point positions used to store the currently best known decoding result. Therefore, the space complexity of the list decoding version is 5n.

[0086] Simplified RPA Algorithm For High Rate RM Codes

[0087] This section provides simplified versions of the RPA decoder, which significantly accelerate the decoding process while maintaining the same (nearly optimal) decoding error probability for certain RM codes with rate > 0.5.

[0088] As previously discussed, the decoding algorithm may be accelerated by using fewer subspaces in the projection step. Moreover, instead of using one-dimensional subspaces, various embodiments use a selected subset of two-dimensional subspaces in the projection step. In particular, various embodiments only project onto the two-dimensional subspaces spanned by two standard basis vectors of E. The standard basis vectors of E are e^(1), ..., e^(m), where e^(i) is defined as the vector with 1 in the i-th position and 0 everywhere else. Then the two-dimensional subspaces may be written as B_{i,j} for 1 ≤ i < j ≤ m, where:

B_{i,j} := span( e^(i), e^(j) ) = { 0, e^(i), e^(j), e^(i) + e^(j) }.

Algorithm 5 Pseudo-Code: The RPA LIST Decoding Function For General Binary-Input Memoryless Channels

[0089] It is noted that projection onto cosets of two-dimensional subspaces differs from projection onto cosets of one-dimensional subspaces: in the one-dimensional case, each coset only contains two coordinates, and the embodiment only needs to combine the LLRs of two coordinates to obtain the LLR of the coset, as per (eq. 12). In the two-dimensional case, each coset contains four coordinates, and the embodiment needs to combine the LLRs of four coordinates to obtain the LLR of the coset. Fortunately, the embodiment can use exactly the same idea as in the proof of Lemma 2 (above) to show that any four coordinates in a coset of a two-dimensional subspace are also independent. Therefore, the following counterpart of (eq. 12) may be obtained for a coset T = {z^(1), z^(2), z^(3), z^(4)} of a two-dimensional subspace (reconstructed here as the standard combination rule for the LLR of a sum of independent bits, which agrees with (eq. 12) when applied to two coordinates):

L_{/B}(T) = 2 tanh^{−1}( ∏_{k=1}^{4} tanh( L(z^(k)) / 2 ) ).

[0090] After projecting RM(m, r) onto the cosets of these two-dimensional subspaces, there are obtained RM codes with parameters m−2 and r−2, as proved in Lemma 1. After decoding these projected codes RM(m−2, r−2), there are obtained the decoding results ŷ_{/B_{i,j}} for all 1 ≤ i < j ≤ m. The procedure then moves to the aggregation step, using both the recursive decoding results and the original LLR vector L. In particular, when decoding c(z), the relevant coordinate is ŷ_{/B_{i,j}}([z + B_{i,j}]), where [z + B_{i,j}] is the coset of B_{i,j} that contains z. Now suppose that the other three vectors in [z + B_{i,j}] apart from z itself are z^(1), z^(2), z^(3). Then from ŷ_{/B_{i,j}}([z + B_{i,j}]) and L(z^(1)), L(z^(2)), L(z^(3)), the following estimate of the LLR of c(z) is obtained:

est_{i,j}(z) := (−1)^{ŷ_{/B_{i,j}}([z + B_{i,j}])} · 2 tanh^{−1}( ∏_{k=1}^{3} tanh( L(z^(k)) / 2 ) ).

Algorithm 6 Pseudo-Code: A Simplified RPA Decoding Function

Algorithm 7 Pseudo-Code: A Simplified Aggregation Function In The Simplified RPA Algorithm

[0091] The embodiments may calculate such an estimate for all pairs (i, j) such that 1 ≤ i < j ≤ m. Then finally the LLR of c(z) is updated as the average of these estimates, as follows:

L̂(z) = ( 1 / (m(m−1)/2) ) ∑_{1 ≤ i < j ≤ m} est_{i,j}(z).
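Putting [0089]-[0091] together, a hedged sketch of the per-coordinate LLR update of the Simplified RPA aggregation follows; decided_xor(i, j, z) is a hypothetical callback returning the decoded xor bit of the coset of B_{i,j} containing z, and coordinates are labeled by integers as in the earlier sketches.

import numpy as np

def llr_xor_list(llrs):
    # Tanh-rule LLR of the xor of bits observed with the given LLRs.
    prod = float(np.prod([np.tanh(l / 2.0) for l in llrs]))
    return 2.0 * float(np.arctanh(prod))

def simplified_update(z, llr, decided_xor, m):
    # Average est_{i,j}(z) over all pairs 1 <= i < j <= m, per [0091].
    ests = []
    for i in range(m):
        for j in range(i + 1, m):
            others = [z ^ (1 << i), z ^ (1 << j), z ^ (1 << i) ^ (1 << j)]
            sign = (-1.0) ** decided_xor(i, j, z)     # decoded coset xor bit
            ests.append(sign * llr_xor_list([llr[w] for w in others]))
    return float(np.mean(ests))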

[0092] Finally, as in all the previous sections, the embodiments may iterate this decoding procedure a few times for the LLR vector to converge to a stable value. Various embodiments may utilize quantization techniques to approximate these values and/or the LLRs to further improve decoder efficiency.

[0093] The decoding algorithm proposed in this section is denoted as the Simplified RPA algorithm, in contrast to the normal RPA algorithm proposed in the previous sections. It is noted that in the recursive decoding procedure (i.e., when decoding RM(m−2, r−2)) the embodiments may still use this simplified version of the RPA algorithm instead of performing a full projection step. Since each time r is reduced by 2, if the original r is even then the procedure will not reach the first-order RM codes. In this case, the procedure uses the normal RPA decoder when it reaches the second-order RM codes.

[0094] In Algorithms 6 and 7 there are provided pseudo-codes for the Simplified RPA algorithm. It is noted that in lines 7-8 of Algorithm 6, a distinction is made between the cases of r being even and r being odd: for even r, the procedure will eventually need to decode a second-order RM code using the normal RPA decoder, while for odd r, the procedure only needs to decode a first-order RM code in the final recursive step.

[0095] As is shown herein, by applying the list decoding version of the Simplified RPA algorithm, the various embodiments may decode RM(7,4) and RM(8,5) with list size no larger than 8 such that the decoding error probability is the same as that of the ML decoder. Moreover, this version runs even faster than decoding lower rate codes such as RM(8,3).

Algorithm 8 Pseudo-Code: The RPA RM Decoding Function For BSC

Algorithm 9 Pseudo-Code: The RPA RM Decoding Function For General Binary-Input Memoryless Channels

[0096] Parallelization and acceleration. Advantageously, the various embodiments contemplate a decoding algorithm for RM codes that naturally allows parallel implementation, whereas the SCL decoder for polar codes is not parallelizable. An important step in various embodiments for decoding a codeword of RM(m, r) is to decode the quotient space codes, which are RM(m−1, r−1) codes and which can be decoded in parallel. Such a parallel structure enables the achievement of high throughput with low latency.

[0097] Another way to accelerate the algorithm is to use only certain "voting sets"; that is, in the projection step, a subset of one-dimensional subspaces is selected instead of all the one-dimensional subspaces. Recursive decoding is still used, followed by the aggregation step. In this manner, the various embodiments may decode fewer RM(m−1, r−1) codes while, if the voting sets are chosen properly, obtaining a similar decoding error probability with shorter running time. An example of a concrete choice of voting sets is discussed above with respect to Algorithm 6, which indeed accelerates the decoding of high-rate RM codes with nearly-ML decoding error probability.

[0098] Various embodiments are especially well suited to RM(8,2) decoding, since this code is nearly optimal in terms of code length in the sense that the lower bound on code length is 251, which differs from the actual code length of the RM code by only 5. RM(9,2) is also close to optimal, where the lower bound on code length is 500. However, for RM codes with larger order (dimension) and larger code length, the lower bound differs from the actual code length by at least 50; for example, for RM(9,3) the lower bound becomes 464.

[0099] Various embodiments make use of the one-dimensional subspace reduction as discussed above. In further embodiments, the subspaces B_1, ..., B_{n−1} in the RPA decoding algorithms may be changed to any of the s-dimensional subspaces, with different combinations possible. In the above sections directed to such embodiments the usual choice was s = 2, though s = 3, s = 4 and the like may also be used.

[0100] Various embodiments of the RPA decoding algorithms may also be used to decode other codes that are supported on a vector space, or any code that has a well-defined notion of "code projection" that can be iteratively applied to eventually produce a "trivial" code (i.e., one that can be decoded efficiently). In the case of RM codes, the quotient space projection has the specificity of producing again RM codes, and the trivial code is the Hadamard code that can be decoded using the FHT.

[0101] Various embodiments contemplate spectral decompositions and/or other relaxations in the aggregation step instead of majority voting. Depending on the regime used, one may take multiple iterations of the power-iteration method.

[0102] As noted herein, "Algorithm 1" and related text provides a preferred description for an exemplary BSC, while "Algorithm 3" and related text provides a preferred description for general channels. It is noted that an RM-RPA decoder is discussed at "Algorithm 3" and related text, while an RM-RPA list decoder is discussed at "Algorithm 5" and related text. Further, an RM-RPA list decoder with 1 parity is discussed at "Algorithm 8" and related text, including where the number of parities in the outer code is 1.

[0103] FIG. 3 is a flow diagram of a decoding method according to an embodiment. Specifically, as described in detail above and further illustrated in FIG. 3, one embodiment is a method of decoding data encoded with a Reed-Muller (RM) code in which a received word of RM encoded data is decoded in a recursive manner (step 310): the received word is projected onto the cosets of different subspaces to form the projected words (step 320); each projected word is decoded recursively; and the decodings of all projected words are aggregated to obtain a decoding of the original received word (step 330).

[0104] For the projection phase of step 320, the number and the choice of the subspaces may be a tuning parameter, and a preferred embodiment may be to use subspaces of dimension 1. Dimensions of 2, 3 and so on may also be used (box 325).

[0105] For the aggregation phase of step 330, the aggregation function may be a tuning parameter, and a preferred embodiment may be to use majority voting. Multi-step power iteration methods, spectral methods, semi-definite programming methods and the like may also be used.

[0106] Generally speaking, the method 300 of FIG. 3 and the above-described algorithms contemplate that for every one-dimensional subspace, the method first obtains the corresponding projection of the original received word onto the cosets of this subspace. Then the decoding algorithm of a lower-order RM code is used to decode the projected vector for each subspace. Finally, a majority voting scheme (or other scheme) is used to aggregate the original received word as well as the decoded words from all the one-dimensional subspaces. This procedure is iterated several times until it converges to a stable point. Then this stable point is taken as the output; that is, the decoded form of the RM encoded word.

[0107] Thus, one embodiment comprises a method for decoding Reed-Muller (RM) encoded data, comprising: for each received word of RM encoded data, projecting the received word onto each of a plurality of cosets of different subspaces to form thereby a respective plurality of projected words; for each received word of RM encoded data, recursively decoding each of the respective plurality of projected words to form a respective plurality of decoded projected words; and for each received word of RM encoded data, aggregating each of the respective decoded projected words to obtain thereby a decoding of the corresponding received word of RM encoded data. In some embodiments, the projecting may be performed in accordance with subspaces of dimension 1. In some embodiments, the projecting may be performed in accordance with subspaces of dimension 2 or 3. In some embodiments, the aggregation is performed in accordance with majority voting. In some embodiments, the aggregation is performed in accordance with one of a majority voting method, a multi-step power iteration method, a spectral method, and a semi-definite programming method.

[0108] FIG. 4 is a flow diagram of a list decoding method according to an embodiment. Specifically, as described in detail above and further illustrated in FIG. 4, one embodiment is a method of decoding data encoded with a Reed-Muller (RM) code or polar code in which received words of encoded data are processed by identifying a plurality (t) of most noisy bits in the received word for a choice of t and identifying each of a plurality of possible cases of the t most noisy bits (step 420); for each identified case, obtaining a decoding result from a unique decoding algorithm to provide thereby a list of 2^t codewords (step 430); and performing a maximum likelihood decoding among each of the list of 2^t codewords to provide thereby the final decoding result (step 440). In various embodiments, an outer code is used for the information bits, and said list decoding utilizes only those information bits forming a codeword of the outer code.

[0109] The recursive projection-aggregation (RPA) methodology discussed above may be further improved by using list-decoding procedures and/or code-concatenation procedures to provide combined encoder methods; namely, RPA with list-decoding, RPA with code-concatenation, and RPA with list-decoding and code-concatenation. It is noted that the combined methods provide a performance level that improves upon that of optimal polar code decoders and approaches the optimal decoding performance for RM codes. The code-concatenation method modifies RM codes themselves.

[0110] Advantageously, the RPA decoding method can be applied to a broader class of error-correcting codes supported on vector spaces or any code that supports the type of operations used in the RPA algorithm, such as BCH, Reed-Solomon or expander codes.

[0111] The list-decoding procedure and the code-concatenation method can be composed with any decoding algorithm for any error correcting codes to reduce the decoding error probability.

Parallel Processing

[0112] In various embodiments, parallel processing implementations are provided wherein multiple processors or processing threads are used to process respective RM encoded words, or respective dimensions of an RM encoded word or perform other parallel processing operations configured to speed up the decoding process.

[0113] Specifically, an advantage of the disclosed RPA decoding methodology for RM codes over the SCL decoder for polar codes is that the disclosed RPA decoding methodology naturally allows parallel implementation while the SCL decoder is simply not parallelizable. An important key step in the disclosed RPA decoding methodology for decoding a codeword of RM(m, r) is to decode the quotient space codes, which are RM(m−1, r−1) codes, and each of these quotient space codes can be decoded independently and in parallel. Such a parallel structure is crucial to achieving high throughput and low latency.

Extensions

[0114] Various embodiments contemplated by the inventors herein provide universal decoder functionality suitable for use in a wide variety of channel decoding and other applications.

[0115] It is noted that the methods, algorithms, techniques and the like for encoding, decoding and otherwise processing Reed-Muller codes, Polar codes and variations thereof discussed in the first appended document, second appended document, or discussed herein with respect to the various figures may be operably combined in part or in whole to provide various other and further embodiments and that such embodiments are contemplated by the inventors.

[0116] Various embodiments comprise systems and methods of encoding, decoding and otherwise processing Reed-Muller codes, Polar codes and variations thereof discussed in the first appended document, second appended document, or discussed herein with respect to the various figures that operate by combining in part or in whole the different components and code reductions to provide various other and further embodiments.

[0117] Various embodiments comprise systems and methods of applying recursive projection-aggregation algorithms to any code that supports the algorithm's operations (e.g., BCH, Reed-Solomon or expander codes). In particular, such embodiments contemplate taking any code on a finite field, summing pairs of components based on a matching of the components, iterating this projection procedure a number of times until the obtained word is decoded by a specific algorithm, and reverting the projection parts with aggregation functions.

[0118] FIG. 5 depicts a high-level block diagram of a computing device, such as a channel decoder or other computing device, suitable for use in performing functions described herein such as those associated with the various elements described herein with respect to the figures.

[0119] As depicted in FIG. 5, computing device 500 includes a processor element 503 (e.g., a central processing unit (CPU) and/or other suitable processor(s)), a memory 504 (e.g., random access memory (RAM), read only memory (ROM), and the like), a cooperating module/process 505, and various input/output devices 506 (e.g., a user input device (such as a keyboard, a keypad, a mouse, and the like), a user output device (such as a display, a speaker, and the like), an input port, an output port, a receiver, a transmitter, and storage devices (e.g., a persistent solid state drive, a hard disk drive, a compact disk drive, and the like)).

[0120] It will be appreciated that the functions depicted and described herein may be implemented in hardware and/or in a combination of software and hardware, e.g., using a general purpose computer, one or more application specific integrated circuits (ASIC), and/or any other hardware equivalents. In one embodiment, the cooperating process 505 can be loaded into memory 504 and executed by processor 503 to implement the functions as discussed herein. Thus, cooperating process 505 (including associated data structures) can be stored on a computer readable storage medium, e.g., RAM memory, magnetic or optical drive or diskette, and the like.

[0121] It will be appreciated that computing device 500 depicted in FIG. 5 provides a general architecture and functionality suitable for implementing functional elements described herein or portions of the functional elements described herein.

[0122] It is contemplated that some of the steps discussed herein may be implemented within hardware, for example, as circuitry that cooperates with the processor to perform various method steps. Portions of the functions/elements described herein may be implemented as a computer program product wherein computer instructions, when processed by a computing device, adapt the operation of the computing device such that the methods and/or techniques described herein are invoked or otherwise provided. Instructions for invoking the inventive methods may be stored in tangible and non-transitory computer readable medium such as fixed or removable media or memory, and/or stored within a memory within a computing device operating according to the instructions.

[0123] Thus, various embodiments for decoding Reed-Muller (RM) encoded data may be implemented via code stored on a non-transient medium in or suitable for use with a receiver (e.g., a special purpose receiver or decoding portion therein, computing device implementing a receiver function or decoding function, and so on), by a receiver or decoding portion thereof configured to perform the method such as by executing such code, by a special purpose device configured for performing the method and so on.

[0124] Various modifications may be made to the systems, methods, apparatus, mechanisms, techniques and portions thereof described herein with respect to the various figures, such modifications being contemplated as being within the scope of the invention. For example, while a specific order of steps or arrangement of functional elements is presented in the various embodiments described herein, various other orders/arrangements of steps or functional elements may be utilized within the context of the various embodiments. Further, while modifications to embodiments may be discussed individually, various embodiments may use multiple modifications contemporaneously or in sequence, compound modifications and the like.

[0125] Although various embodiments which incorporate the teachings of the present invention have been shown and described in detail herein, those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. Thus, while the foregoing is directed to various embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. As such, the appropriate scope of the invention is to be determined according to the claims.