Title:
SYSTEM AND TECHNIQUE FOR SLIDING WINDOW NETWORK CODING-BASED PACKET GENERATION
Document Type and Number:
WIPO Patent Application WO/2018/183694
Kind Code:
A1
Abstract:
A method and apparatus decode packetized data in the presence of packet erasures using a finite sliding window technique. A decoder receives packets containing uncoded and coded symbols. When a packet with a coded symbol is received, the decoder determines whether a packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than a decoder sequence number, where the number w is fixed prior to encoding. When this is the case, the decoder decodes the coded symbol into one or more of the w input symbols using the coefficient vector. Decoding may use a forward error correcting (FEC) window within the finite sliding window. Decoding also may use a technique of Gaussian elimination to produce a "shifted" row echelon coefficient matrix.

Inventors:
MEDARD MURIEL (US)
WUNDERLICH SIMON (DE)
PANDI SREEKRISHNA (DE)
GABRIEL FRANK (DE)
FOULI KERIM (US)
Application Number:
PCT/US2018/025168
Publication Date:
October 04, 2018
Filing Date:
March 29, 2018
Assignee:
MASSACHUSETTS INST TECHNOLOGY (US)
CODE ON NETWORK CODING LLC (US)
UNIV DRESDEN TECH (DE)
International Classes:
H04L1/00; H04L1/18
Domestic Patent References:
WO2013116456A1 (2013-08-08)
Other References:
MOHAMMAD KARZAND ET AL: "Low delay random linear coding over a stream", 2014 52ND ANNUAL ALLERTON CONFERENCE ON COMMUNICATION, CONTROL, AND COMPUTING (ALLERTON), 1 September 2015 (2015-09-01), XP055293611, ISBN: 978-1-4799-8009-3, Retrieved from the Internet [retrieved on 20160804], DOI: 10.1109/ALLERTON.2014.7028499
BILBAO JOSU ET AL: "Network Coding in the Link Layer for Reliable Narrowband Powerline Communications", IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, IEEE SERVICE CENTER, PISCATAWAY, US, vol. 34, no. 7, 1 July 2016 (2016-07-01), pages 1965 - 1977, XP011613907, ISSN: 0733-8716, [retrieved on 20160615], DOI: 10.1109/JSAC.2016.2566058
M. KARZAND; D. J. LEITH; J. CLOUD; M. MEDARD: "Low Delay Random Linear Coding Over A Stream", ARXIV PREPRINT ARXIV:1509.00167, 2015
J. CLOUD; M. MEDARD: "International Conference on Wireless and Satellite Systems", 2015, SPRINGER, article "Network coding over satcom: Lessons learned", pages: 272 - 285
J. CLOUD; M. MEDARD: "Multi-path low delay network codes", ARXIV PREPRINT ARXIV:1609.00424, 2016
V. ROCA; B. TEIBI; C. BURDINAT; T. TRAN; C. THIENOT, BLOCK OR CONVOLUTIONAL AL-FEC CODES? A PERFORMANCE COMPARISON FOR ROBUST LOW-LATENCY COMMUNICATIONS, 2016
J. K. SUNDARARAJAN; D. SHAH; M. MEDARD; M. MITZENMACHER; J. BARROS: "INFOCOM 2009, IEEE", 2009, IEEE, article "Network coding meets TCP", pages: 280 - 288
P. KARAFILLIS; K. FOULI; A. PARANDEHGHEIBI; M. MEDARD: "Information Sciences and Systems (CISS), 2013 47th Annual Conference on", 2013, IEEE, article "An algorithm for improving sliding window network coding in TCP", pages: 1 - 5
M. KIM; J. CLOUD; A. PARANDEHGHEIBI; L. URBINA; K. FOULI; D. LEITH; M. MEDARD: "Network Coded TCP (CTCP)", ARXIV PREPRINT ARXIV:1212.2291, 2012
Attorney, Agent or Firm:
BLAU, David, E. et al. (US)
Claims:
CLAIMS

What is claimed is:

1. A method of decoding packetized data in the presence of packet erasures, the method comprising, by a decoder, repeatedly:

receiving a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and a coded symbol encoded as a linear combination of w input symbols using the coefficient vector;

determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than a decoder sequence number; and

when the packet sequence number is within the sliding window, decoding the coded symbol into one or more of the w input symbols using the coefficient vector.

2. A method according to claim 1, wherein the fixed size of the sliding window was predetermined according to a round-trip time for data traveling between the decoder and an encoder via a data channel.

3. A method according to claim 1, wherein receiving comprises receiving a coded packet having a packet sequence number that is out of order.

4. A method according to claim 1, wherein receiving comprises receiving a plurality of packets including the coded packet and decoding comprises correcting an error in one of the plurality of packets according to a forward error correcting code.

5. A method according to claim 1, wherein decoding comprises decoding according to a systematic code or a linear network code.

6. A method according to claim 1, wherein decoding comprises setting the decoder sequence number equal to the packet sequence number of the received packet when the packet sequence number of the received packet is greater than the decoder sequence number.

7. A method according to claim 1, wherein decoding using the coefficient vector comprises generating a packet erasure when the coefficient vector has a non-zero entry associated with an input symbol whose sequence number is outside the sliding window.

8. A method according to claim 1, wherein decoding using the coefficient vector comprises performing Gaussian elimination on a matrix, one row of which includes the coefficient vector.

9. A method according to claim 8, wherein decoding using the coefficient vector comprises, prior to performing the Gaussian elimination, deleting each row of the matrix that has a non-zero coefficient entry associated with an input symbol whose sequence number is outside the sliding window.

10. A method according to claim 8, wherein performing the Gaussian elimination comprises pivoting on the column of the matrix whose index equals the decoder sequence number modulo the size of the sliding window.

11. A method according to claim 1, further comprising the decoder providing feedback to an encoder via a data channel, thereby enabling the encoder to transmit, via the data channel, one or more packets reencoding any data that the decoder did not decode due to a packet erasure.

12. A tangible, computer-readable storage medium, in which is non-transitorily stored computer program code that, when executed by a computer processor, performs a method of decoding packetized data in the presence of packet erasures, the method comprising, by a decoder, repeatedly:

receiving a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and a coded symbol encoded as a linear combination of w input symbols using the coefficient vector;

determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than a decoder sequence number; and when the packet sequence number is within the sliding window, decoding the coded symbol into one or more of the w input symbols using the coefficient vector.

13. A storage medium according to claim 12, wherein the fixed size of the sliding window was predetermined according to a round-trip time for data traveling between the decoder and an encoder via a data channel.

14. A storage medium according to claim 12, wherein receiving comprises receiving a coded packet having a packet sequence number that is out of order.

15. A storage medium according to claim 12, wherein receiving comprises receiving a plurality of packets including the coded packet and decoding comprises correcting an error in one of the plurality of packets according to a forward error correcting code.

16. A storage medium according to claim 12, wherein decoding comprises decoding according to a systematic code or a linear network code.

17. A storage medium according to claim 12, wherein decoding comprises setting the decoder sequence number equal to the packet sequence number of the received packet when the packet sequence number of the received packet is greater than the decoder sequence number.

18. A storage medium according to claim 12, wherein decoding using the coefficient vector comprises generating a packet erasure when the coefficient vector has a non-zero entry associated with an input symbol whose sequence number is outside the sliding window.

19. A storage medium according to claim 12, wherein decoding using the coefficient vector comprises performing Gaussian elimination on a matrix, one row of which includes the coefficient vector.

20. A storage medium according to claim 19, wherein decoding using the coefficient vector comprises, prior to performing the Gaussian elimination, deleting each row of the matrix that has a non-zero coefficient entry associated with an input symbol whose sequence number is outside the sliding window.

21. A storage medium according to claim 19, wherein performing the Gaussian elimination comprises pivoting on the column of the matrix whose index equals the decoder sequence number modulo the size of the sliding window.

22. A storage medium according to claim 12, the method further comprising the decoder providing feedback to an encoder via a data channel, thereby enabling the encoder to transmit, via the data channel, one or more packets reencoding any data that the decoder did not decode due to a packet erasure.

23. A decoder for decoding packetized data in the presence of packet erasures, the decoder comprising:

a buffer for receiving a coded symbol from a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and the coded symbol encoded as a linear combination of w input symbols using the coefficient vector;

a register for storing a decoder sequence number;

a determining unit for determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than the decoder sequence number stored in the register; and

a decoding module for decoding, when the determining unit determines that the packet sequence number is within the sliding window, the received coded symbol into one or more of the w input symbols using the coefficient vector.

24. A decoder according to claim 23, wherein the buffer can receive a plurality of symbols including the coded symbol, from a corresponding plurality of packets including the coded packet, and the decoding module can correct an error in one of the plurality of packets according to a forward error correcting code.

25. A decoder according to claim 23, wherein the decoding module can decode according to a systematic code or a linear network code.

26. A decoder according to claim 23, wherein the decoding module can write the packet sequence number of the received packet into the register when the packet sequence number of the received packet is greater than the decoder sequence number stored in the register.

27. A decoder according to claim 23, wherein the decoding module can generate a packet erasure when the coefficient vector has a non-zero entry associated with an input symbol whose sequence number is outside the sliding window.

28. A decoder according to claim 23, further comprising a memory for storing a matrix, wherein the decoding module can store the coefficient vector in one row of the matrix and perform Gaussian elimination on the matrix.

29. A decoder according to claim 28, wherein the decoding module can delete from the memory, prior to performing the Gaussian elimination, each row of the matrix that has a non-zero coefficient entry associated with an input symbol whose sequence number is outside the sliding window.

30. A decoder according to claim 28, wherein the decoding module can perform the Gaussian elimination by pivoting on the column of the matrix whose index equals the decoder sequence number modulo the size of the sliding window.

31. A decoder according to claim 23, further comprising a feedback module capable of providing feedback to an encoder via a data channel, thereby enabling the encoder to transmit, via the data channel, one or more packets reencoding any data that the decoder did not decode due to a packet erasure.

32. A decoder according to claim 23, wherein the decoding module comprises an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).

33. A decoder for decoding packetized data in the presence of packet erasures, the decoder comprising:

receiving means for receiving a coded symbol from a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and the coded symbol encoded as a linear combination of w input symbols using the coefficient vector;

storing means for storing a decoder sequence number;

determining means for determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than the decoder sequence number stored by the storing means; and

decoding means for decoding, when the determining means determines that the packet sequence number is within the sliding window, the received coded symbol into one or more of the w input symbols using the coefficient vector.

Description:
SYSTEM AND TECHNIQUE FOR SLIDING WINDOW NETWORK CODING-BASED PACKET GENERATION

BACKGROUND

[0001] Sliding window network coding encoders are known in the art. Sliding window refers to a technique in which linear coding is applied to a window of packets to generate a coded packet. In general, the window size utilized in such an encoder dynamically varies between 1 and a maximum window size N, where N can, theoretically, be infinitely large. Those of ordinary skill in the art will appreciate how to select the value of N in practical systems.

[0002] In general, a sliding window encoder may operate in the following manner: first, the sliding window encoder is fed with new data. In response to the data provided thereto, the encoder increases the window size of the sliding window encoder. A sender sends a packet to a receiver and upon reception of the packet the receiver sends an ACK packet back to the sender. The sliding window encoder receives the ACK packet and in response thereto may reduce the size of the sliding window (hence the reason the technique is referred to as a sliding window technique - i.e. new packets are added to the encoder and old ones are removed). A sliding window may be applied over a generation (i.e. over a group of packets) or can be generation-less.
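The grow-on-new-data, shrink-on-ACK bookkeeping described above can be sketched as follows. This is a minimal illustration only; the class and method names are hypothetical and not taken from the application:

```python
from collections import deque

class SlidingWindowEncoder:
    """Illustrative sketch of sliding window encoder bookkeeping."""

    def __init__(self, max_window=32):
        self.window = deque()         # (sequence number, symbol) pairs eligible for coding
        self.max_window = max_window  # practical cap on the theoretically unbounded N

    def push(self, seq, symbol):
        # New data fed to the encoder enlarges the window...
        self.window.append((seq, symbol))
        # ...up to a practical maximum, beyond which the oldest symbol is dropped.
        if len(self.window) > self.max_window:
            self.window.popleft()

    def on_ack(self, acked_seq):
        # An ACK from the receiver lets the encoder shrink the window:
        # every symbol with sequence number <= acked_seq is removed.
        while self.window and self.window[0][0] <= acked_seq:
            self.window.popleft()
```

New packets enter at the right of the deque and acknowledged (or aged-out) packets leave at the left, which is the "sliding" behavior the paragraph describes.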

[0003] As is also known, some sliding window coders apply random linear network coding (RLNC) as a convolutional code. A general study of such a sliding window code and its properties was provided by M. Karzand, D. J. Leith, J. Cloud, and M. Medard, "Low Delay Random Linear Coding Over A Stream," arXiv preprint arXiv:1509.00167, 2015. One possible implementation for SATCOM was studied in J. Cloud and M. Medard, "Network coding over satcom: Lessons learned," in International Conference on Wireless and Satellite Systems. Springer, 2015, pp. 272-285, and for multiple combined paths in J. Cloud and M. Medard, "Multi-path low delay network codes," arXiv preprint arXiv:1609.00424, 2016. These sliding window approaches show superior delay properties for streaming applications as compared to generation-based codes. However, such prior art approaches generally assume an infinite sliding window, or a sliding window of dynamic size, which is closed only by feedback.

[0004] Some prior art techniques (e.g. see V. Roca, B. Teibi, C. Burdinat, T. Tran, and C. Thienot, "Block or Convolutional AL-FEC codes? A performance comparison for robust low-latency communications," 2016) suggest the use of convolutional codes over block codes for delay-sensitive applications, based upon a comparison of Reed-Solomon and a finite sliding window random linear code (RLC). However, such techniques do not describe how such a coding process can be implemented efficiently, and do not consider feedback.

[0005] TCP/NC is described in J. K. Sundararajan, D. Shah, M. Medard, M. Mitzenmacher, and J. Barros, "Network coding meets TCP," in INFOCOM 2009, IEEE. IEEE, 2009, pp. 280-288. This technique adds an intermediate layer of network coding between TCP and IP, including feedback in the process, and uses a general form of sliding window in which packets are combined in an unrestricted manner and coding is not limited by structures such as blocks and generations. In such forms of sliding window, the coding window (the packets currently being buffered for potential coding at the encoder) is closed only by acknowledgement signals (ACKs), and can possibly be very large - at least in the range of the bandwidth-delay product, as coding is always performed over the whole congestion window (i.e., the packets currently being buffered for potential retransmission at the sender).

[0006] Another technique, described in Karafillis et al., improves on the above TCP/NC technique by sending additional redundant packets - see P. Karafillis, K. Fouli, A. ParandehGheibi, and M. Medard, "An algorithm for improving sliding window network coding in TCP," in Information Sciences and Systems (CISS), 2013 47th Annual Conference on. IEEE, 2013, pp. 1-5. With the feedback, the receiver signals the number of lost degrees of freedom. The sender uses this information to send additional packets at specific intervals, e.g. every 13 packets.

[0007] Kim et al. describe Coded TCP ("CTCP") - see M. Kim, J. Cloud, A. ParandehGheibi, L. Urbina, K. Fouli, D. Leith, and M. Medard, "Network Coded TCP (CTCP)," arXiv preprint arXiv:1212.2291, 2012. The CTCP technique is an integration of network coding inside of TCP. This approach utilizes sequence numbers for packets and a systematic code. Feedback is sent only at the end of each block. The block size may be set at the bandwidth-delay product, which can result in very large generations.

[0008] Block codes have low complexity but introduce delays, as the decoder needs to wait for a sufficient number of packets to arrive to decode each block. Conventional sliding window techniques, on the other hand, offer low latency since they can receive packets and decode simultaneously. This comes at the price of potentially high complexity from overhead, feedback, and decoding operations that involve potentially many packets. This invention combines the merits of both block methods (low complexity) and sliding window methods (low latency). This is done by strictly limiting the number of packets in the sliding window, by using systematic coding, and by adapting the sliding window coding architecture (e.g., packet format, coefficient matrix) to reduce the number of linear operations required in coding operations (i.e., lower complexity). The result is a sliding window code (low latency compared to block codes), but with a static window size which allows implementation using existing block encoder/decoder tools. This leads to similar packet loss properties and very high performance of the encoders and decoders, as existing block encoders/decoders have already been optimized for performance on various CPU architectures. Another advantage of the limited sliding window is to reduce buffering needs at all coding nodes.

SUMMARY

[0009] In contrast to prior art techniques, described herein is a finite sliding window approach of fixed size that can be efficiently implemented. The systems and techniques described herein combine the low latency of sliding window codes with the efficiency and reliability characteristics of generation-based codes. Sliding windows reduce packet latency compared to block codes but are potentially more complex (see comment above). The latency gains come from the fact that sliding window decoders can decode while receiving coded data, while block decoders cannot perform those operations simultaneously. This technique allows the use of a sliding window while controlling complexity. This technique also keeps the tunability and flexibility advantages of conventional linear coding-based sliding window methods, whereby parameters such as the encoding/decoding window sizes or the code rate can be modified dynamically. This allows multiple trade-offs (e.g., delay vs. reliability / throughput / complexity).

[0010] Also, in contrast to prior art techniques, the systems and techniques described herein utilize a systematic code within the finite window design. Systematic codes send data uncoded first, reducing decoding and recoding complexity compared with non-systematic coding systems.

[0011] Also, in contrast to prior art techniques, the systems and techniques described herein may utilize a forward erasure code window (i.e. a "FEC window"). The FEC window further restricts the number of symbols that can be coded and recoded. This reduces the number of linear operations required to recode or decode symbols, thus further reducing the computational complexity of the operations and allowing for easier recoding. The use of a FEC window that is smaller than the coding window allows for the use of feedback schemes to re-include lost packets from the coding window into the output coded packets.
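As an illustration of the FEC-window restriction, the sketch below builds a coefficient vector for a coding window of size w in which only the f most recent positions (the FEC window, f <= w) may receive non-zero coefficients. The function name, the GF(2) field choice, and the coefficient-drawing policy are assumptions made for illustration, not details from the application:

```python
import random

def fec_window_coefficients(w, f, rng=random.Random(0)):
    """Return a length-w coefficient vector (over GF(2) for simplicity)
    whose non-zero entries are confined to the f newest positions.
    Confining coding to the FEC window bounds the number of linear
    operations needed to recode or decode a symbol."""
    assert 1 <= f <= w
    coeffs = [0] * w
    for i in range(w - f, w):          # only the f most recent positions
        coeffs[i] = rng.randint(0, 1)  # random coefficient in GF(2)
    coeffs[w - 1] = 1                  # always include the newest symbol
    return coeffs
```

Because positions outside the FEC window are guaranteed to be zero, a recoder never has to touch symbols older than the FEC window, which is the complexity reduction the paragraph describes.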

[0012] Performance characteristics of the described systems and techniques may be improved relative to prior art techniques. For example, the described systems and techniques may be improved relative to prior art techniques in terms of delay, since in accordance with the concepts described herein, redundant packets may be sent often rather than only at the end of (possibly large) generations as is done in prior art techniques.

[0013] Further, the described systems and techniques could replace the block coded approach in coded TCP ("CTCP") and thus be incorporated into CTCP. While providing absolute delivery reliability may be costly, the disclosed concepts, unlike traditional sliding window coding approaches (e.g., in CTCP), may be used to provide low latency and low complexity.

[0014] It should also be appreciated that the approach described herein could also be implemented on a Link Layer, assuming another protocol like TCP provides reliability on top. In such a case, the technique described herein would help in reducing the losses witnessed by TCP, and improve delay compared to other automatic repeat request (ARQ) schemes.

[0015] With this arrangement, a generation-less sliding window code is provided. Various embodiments provide, among other advantages: a modular design allowing easy integration with commercial software libraries and applications while keeping the high performance of commercial software library block codes; coping with various network requirements; simplifying or eliminating the handling of multiple packet generations; providing low delay and high reliability for upper protocol layers; and providing tradeoffs between delay, efficiency, and reliability.

[0016] Therefore, a first embodiment of the concepts, systems, and techniques described herein provides a method of decoding packetized data in the presence of packet erasures comprising a decoder repeatedly performing several processes. A first process includes receiving a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and a coded symbol encoded as a linear combination of w input symbols using the coefficient vector. A second process includes determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than a decoder sequence number. And a third process includes, when the packet sequence number is within the sliding window, decoding the coded symbol into one or more of the w input symbols using the coefficient vector.
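The three processes of this embodiment reduce to a simple receive loop. The sketch below shows the window-membership test and the dispatch to decoding; the helper names are illustrative assumptions, not identifiers from the application:

```python
def in_sliding_window(pkt_seq, dec_seq, w):
    """The sliding window is the w consecutive sequence numbers that are
    no greater than the decoder sequence number: (dec_seq - w, dec_seq]."""
    return dec_seq - w < pkt_seq <= dec_seq

def handle_packet(pkt_seq, coeffs, coded_symbol, dec_seq, w, decode):
    # Advance the decoder sequence number if the packet is ahead of it.
    dec_seq = max(dec_seq, pkt_seq)
    # Decode only when the packet's sequence number falls inside the window.
    if in_sliding_window(pkt_seq, dec_seq, w):
        decode(coeffs, coded_symbol)
    return dec_seq
```

A packet whose sequence number has already slid out of the window (pkt_seq <= dec_seq - w) is simply not decoded, which is how the fixed window bounds decoder state.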

[0017] The method may include the following variations in any combination. The fixed size of the sliding window may be predetermined according to a round-trip time for data traveling between the decoder and an encoder via a data channel. The coded packet may have a packet sequence number that is out of order. Receiving may include receiving a plurality of packets including the coded packet, and decoding may include correcting an error in one of the plurality of packets according to a forward error correcting code. Moreover, decoding may include using a systematic code or a linear network code. Decoding may include setting the decoder sequence number equal to the packet sequence number of the received packet when the packet sequence number of the received packet is greater than the decoder sequence number. Decoding using the coefficient vector may include generating a packet erasure when the coefficient vector has a non-zero entry associated with an input symbol whose sequence number is outside the sliding window. Decoding also may include performing Gaussian elimination on a matrix, one row of which includes the coefficient vector. In this embodiment, prior to performing the Gaussian elimination, decoding may include deleting each row of the matrix that has a non-zero coefficient entry associated with an input symbol whose sequence number is outside the sliding window. Alternately or also in this embodiment, performing the Gaussian elimination may comprise pivoting on the column of the matrix whose index equals the decoder sequence number modulo the size of the sliding window.
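The two window-specific Gaussian-elimination steps of these variations can be sketched as follows. GF(2) coefficients and the (base_seq, coeffs) row layout are assumptions made for illustration:

```python
def pivot_column(dec_seq, w):
    # The pivot column index wraps around modulo the window size,
    # which yields a "shifted" row echelon coefficient matrix.
    return dec_seq % w

def prune_stale_rows(rows, dec_seq, w):
    """rows: list of (base_seq, coeffs), where coeffs[i] multiplies the
    input symbol with sequence number base_seq + i.  Before elimination,
    delete every row with a non-zero coefficient on a symbol that has
    slid out of the window (sequence number <= dec_seq - w)."""
    oldest_valid = dec_seq - w + 1
    return [
        (base, coeffs) for base, coeffs in rows
        if all(c == 0 for i, c in enumerate(coeffs) if base + i < oldest_valid)
    ]
```

Pruning first guarantees that elimination only ever combines rows whose non-zero entries lie inside the current window, so the matrix stays bounded at w columns of live state.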

[0018] The decoder may further provide feedback to an encoder via a data channel, thereby enabling the encoder to transmit, via the data channel, one or more packets reencoding any data that the decoder did not decode due to a packet erasure.

[0019] Another embodiment of the concepts, systems, and techniques described herein is a tangible, computer-readable storage medium, in which is non-transitorily stored computer program code that, when executed by a computer processor, performs the above-described method or any of its variants or embellishments.

[0020] Yet another embodiment of the concepts, systems, and techniques described herein is a decoder for decoding packetized data in the presence of packet erasures. The decoder includes a buffer for receiving a coded symbol from a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and the coded symbol encoded as a linear combination of w input symbols using the coefficient vector. The decoder also includes a register for storing a decoder sequence number. The decoder further includes a determining unit for determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than the decoder sequence number stored in the register. The decoder also includes a decoding module for decoding, when the determining unit determines that the packet sequence number is within the sliding window, the received coded symbol into one or more of the w input symbols using the coefficient vector.

[0021] The decoder embodiment may be varied in like manner to the method described above. Thus, the buffer may receive a plurality of symbols including the coded symbol, from a corresponding plurality of packets including the coded packet, and the decoding module may correct an error in one of the plurality of packets according to a forward error correcting code. The decoding module may decode according to a systematic code or a linear network code. The decoding module may write the packet sequence number of the received packet into the register when the packet sequence number of the received packet is greater than the decoder sequence number stored in the register. The decoding module may generate a packet erasure when the coefficient vector has a non-zero entry associated with an input symbol whose sequence number is outside the sliding window.

[0022] The decoder may further comprise a memory for storing a matrix, wherein the decoding module may store the coefficient vector in one row of the matrix and perform Gaussian elimination on the matrix. In this embodiment, the decoding module may delete from the memory, prior to performing the Gaussian elimination, each row of the matrix that has a non-zero coefficient entry associated with an input symbol whose sequence number is outside the sliding window. Alternately or in addition, the decoding module may perform the Gaussian elimination by pivoting on the column of the matrix whose index equals the decoder sequence number modulo the size of the sliding window.

[0023] The decoder may further comprise a feedback module capable of providing feedback to an encoder via a data channel, thereby enabling the encoder to transmit, via the data channel, one or more packets reencoding any data that the decoder did not decode due to a packet erasure. The decoding module may include an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).

[0024] Still another embodiment of the concepts, systems, and techniques described herein is a different decoder for decoding packetized data in the presence of packet erasures. This decoder comprises receiving means for receiving a coded symbol from a coded packet comprising a packet sequence number, a coefficient vector having a fixed length w, and the coded symbol encoded as a linear combination of w input symbols using the coefficient vector. This decoder also includes storing means for storing a decoder sequence number. The decoder further includes determining means for determining whether the packet sequence number is within a sliding window of w consecutive sequence numbers that are no greater than the decoder sequence number stored by the storing means. Also, the decoder includes decoding means for decoding, when the determining means determines that the packet sequence number is within the sliding window, the received coded symbol into one or more of the w input symbols using the coefficient vector.

[0025] Persons having ordinary skill in the art may recognize other ways to embody the concepts, systems, and techniques described herein.

BRIEF DESCRIPTION OF THE DRAWINGS

[0026] The manner and process of making and using the disclosed embodiments may be appreciated by reference to the drawings, in which:

[0027] Fig. 1 is a diagram illustrating an exemplary system in which the principles herein may be embodied;

[0028] Fig. 2 is a diagram illustrating coded packets as they might be transmitted by an encoder;

[0029] Fig. 3 is a flow diagram for the operation of an encoder in accordance with an embodiment;

[0030] Fig. 4 is a diagram illustrating functional components in an encoder according to an embodiment of the concepts, systems, and techniques described herein;

[0031] Figs. 5 and 5A are a flow diagram for the operation of a decoder in accordance with an embodiment;

[0032] Figs. 6, 6A, and 6B are a series of diagrams illustrating three different cases for inserting coefficients relating to a new coded symbol into a coefficient matrix;

[0033] Figs. 7 and 7A are diagrams illustrating different Gaussian elimination strategies;

[0034] Fig. 8 is a diagram illustrating functional components in a decoder according to an embodiment of the concepts, systems, and techniques described herein;

[0035] Fig. 9 is a flow diagram for the operation of a recoder in accordance with an embodiment;

[0036] Fig. 10 is a diagram illustrating functional components in a recoder according to an embodiment of the concepts, systems, and techniques described herein;

[0037] Figs. 11, 11A, and 11B are a series of diagrams illustrating examples for compared coding schemes having a code rate R = 2/3;

[0038] Fig. 12 is a diagram illustrating a generalized S-ARQ;

[0039] Fig. 13 is a diagram illustrating a sliding window S-ARQ proposal; and

[0040] Figs. 14, 14A, 14B, and 14C are a series of diagrams illustrating a coefficient matrix for different retransmission schemes.

DETAILED DESCRIPTION

[0041] The concepts, systems, and techniques described herein may be used to perform erasure coding between coders using a finite sliding window. Before describing these concepts, systems, and techniques in detail, introductory concepts and terminology as used herein are explained.

[0042] A "symbol" is a vector of elements in the finite field GF(q). In general q is a prime power; for binary data communication, q is a power of 2.

[0043] A "sequence number" is a number that uniquely identifies a source symbol in a sequence of source symbols.

[0044] A "sliding window" is a sequence of consecutive symbols having a fixed window size. An "encoder sliding window" has fixed window size denoted w_e, while a "decoder sliding window" has fixed window size denoted w_d. The sizes of the sliding windows in the encoder (w_e) and in the decoder (w_d) may be different.

[0045] A "coded symbol" is a vector of elements in GF(q) that is a linear combination of at most w_e input symbols, where each coefficient of the linear combination is an element in GF(q).

[0046] An "uncoded packet" is a unit of data to be exchanged between two coders that includes a sequence number and an uncoded symbol.

[0047] A "coded packet" is a unit of data to be exchanged between two coders that includes a sequence number, a coefficient vector of length w_e, and a coded symbol encoded as a linear combination using the coefficients included in the coefficient vector.
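The two packet formats defined above may be sketched as simple data structures. This is a hypothetical illustration only; the field names and the use of bytes for symbols are assumptions, not part of the specification:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class UncodedPacket:
    psn: int           # sequence number of the uncoded source symbol
    symbol: bytes      # the uncoded symbol itself

@dataclass
class CodedPacket:
    psn: int           # largest sequence number used in the linear combination
    coeffs: List[int]  # coefficient vector of fixed length w_e, elements of GF(q)
    symbol: bytes      # linear combination of at most w_e input symbols
```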

[0048] An "erasure" is a missing element of a sequence.

[0049] "Erasure coding" means coding one or more input symbols into a packet sequence that may include erasures.

[0050] With reference to Fig. 1, a system includes several coders 2a to 2N connected by a network 9. The coders may be implemented as computers, or using special purpose hardware, firmware, software, or a combination of these. Thus, for example, coder 2a may be implemented as an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA) or similar technology. Coders may be present at any node in the network and are not restricted to the edge of the network. Each coder 2a to 2N may be made from identical or different components, and any number of such coders may be present in a system in accordance with an embodiment of the concepts, systems, and techniques described herein.

[0051] The coders 2a to 2N communicate with each other using coded and uncoded packets. Manipulation of elements in GF(q) further to such communication may be performed using any hardware or software known in the art. For example, the Fifi library published by Steinwurf ApS of Denmark may be used for this purpose.

[0052] Each coder 2a to 2N may be used for encoding packets, decoding packets, recoding packets, or any combination of these. The most general case is that a coder performs all three of these functions. The operation of exemplary coder 2a is therefore described below in detail. However, various embodiments may include a coder (for example coder 2c) that only transmits packets, and thus requires only an encoder (e.g. encoder 115c, omitting decoder 1104c and recoder 1106c). Other embodiments may include a coder (for example coder 2c) that only receives packets, and thus requires only a decoder (e.g. decoder 1104c, omitting encoder 115c and recoder 1106c). Still other embodiments may include a coder (for example coder 2c) that only forwards packets, and thus requires only a recoder (e.g. recoder 1106c, omitting encoder 115c and decoder 1104c).

[0053] Coder 2a has an encoder 4a, which takes as input a sequence of source symbols and generates as output a corresponding sequence of coded or uncoded packets. Thus, coder 2a is suitable to transmit information as a channel source. Fig. 2 is a diagram illustrating coded packets as they might be transmitted by encoder 4a. The operation of encoder 4a is illustrated in detail in Fig. 3. An embodiment of encoder 4a is illustrated in detail in Fig. 4.

[0054] Coder 2a also has a decoder 6a, which takes as input a sequence of coded or uncoded packets and generates as output a corresponding sequence of source symbols, optionally in order. Thus, coder 2a is suitable to receive information as a channel sink. The operation of decoder 6a is illustrated in detail in Fig. 5 with references to Figs. 6 and 7. An embodiment of decoder 6a is illustrated in detail in Fig. 8.

[0055] Coder 2a also has a recoder 8a, which takes as input a sequence of coded or uncoded packets and generates as output a corresponding sequence of coded and uncoded packets, where the output coded packets may be recoded packets; that is, coded packets whose coding is different from input coded packets. Thus, coder 2a is suitable to forward information from a channel source to a channel sink using, for instance, a linear network code. The operation of recoder 8a is illustrated in detail in Fig. 9. An embodiment of recoder 8a is illustrated in detail in Fig. 10.

[0056] Referring now to Fig. 2, details of an embodiment are illustrated by reference to some coded packets 20a-20e as they could have been sent by an encoder (such as encoder 4a). Each coded packet includes a payload 12, such as a coded symbol, and a coding header 14, 16, 18. The coding header includes a packet sequence number ("PSN") 14. The PSN 14 represents the largest sequence number of an input symbol used in the encoding of the coded symbol as a linear combination of input symbols. Thus, according to one transmission sequence, the first PSN 14a may be 0, the second PSN 14b may be 1, and so on. However, in accordance with various embodiments, the decoder may receive the coded packets out of order, or the encoder may transmit coded packets that redundantly code input symbols using the same PSN (for example, over a data channel subject to many erasures).

[0057] In some embodiments, the coding header optionally includes a coefficient vector 16, 18. That is, in some embodiments, when an uncoded packet is transmitted the coefficient vector 16, 18 is omitted or replaced by other data to provide more efficient use of the data channel between the encoder and the decoder. Alternately, the coefficient vector 16, 18 may be transmitted but contains a single, non-zero entry equal to 1 in GF(q). Regardless, when the coefficient vector is transmitted it has a predetermined, fixed size equal to the window length w_e. The coefficient vector may be used to recover input symbols in the presence of packet erasures, as described below.

[0058] In accordance with embodiments, the indices of the coefficient vector are interpreted in a special way by the decoder. A decoder index is used to keep track of which sequence numbers are within the sliding window, and which are not. To assist in understanding this concept, Fig. 2 shows the decoder index and divides the coefficient vector into an older component 16 and a newer component 18. The decoder index moves from left to right through the coefficient vector, so the coefficient index associated with the oldest (i.e. least) sequence number in the sliding window is 16N, and the coefficient indices 16a to 16e refer to newer sequence numbers from left to right. The next newest sequence number is associated with coefficient index 18a, and these become newer still from 18b-18k, the latter being the decoder index.

[0059] In various embodiments, the decoder index tracks the source symbol with the largest sequence number received by the decoder in a data packet. In this case, the decoder index may be computed as this largest sequence number modulo the fixed window size w_e of the encoder. As noted above, the other indices represent older symbols. This means that an index is re-used every w_e source symbols, and special care must be taken to handle the re-usage correctly.
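The index computation described in this paragraph can be sketched directly; the function name is illustrative:

```python
def decoder_index(largest_psn: int, w_e: int) -> int:
    # The decoder index is the largest received sequence number
    # modulo the fixed encoder window size w_e.
    return largest_psn % w_e
```

Because the index is re-used every w_e source symbols (e.g. sequence numbers 132 and 140 map to the same index when w_e = 8), the decoder must disambiguate by sequence number, as the paragraph cautions.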

[0060] The latest coefficient indices present in newer component 18 of the coefficient vector may be further divided to recognize a forward error correction ("FEC") window 22 having a width "f". The FEC window 22 comprises the latest transmitted symbol and the f - 1 symbols before it. These symbols are used for active forward coding, e.g. opportunistic coding even without feedback. If the FEC window 22 comprises 4 symbols, then the symbols associated with coefficients 18g-18k may be used to perform FEC. The feedback recovery window 24, 26 contains coefficients associated with older symbols, which should be used only when required, e.g. after the decoder detects that one or more of these symbols is missing or cannot be decoded.

[0061] Dividing recovery into an FEC window 22 and a feedback recovery window 24, 26 is advantageous. Doing so keeps the FEC window small enough to handle typical expected bursts of transmitted packets (e.g. 16 packets), while keeping the feedback recovery window 24, 26 large enough to recover packets transmitted during at least one round-trip time ("RTT"), including all buffering (e.g. 128-256 packets). Thus, the encoder and decoder may negotiate the sliding window size w_e by exchanging data to measure the RTT, thereby predetermining a fixed size for w_e prior to the encoder transmitting the first coded packet. A decoding method using the FEC window 22 is described below and contrasted with the prior art in connection with Figs. 11-14C.

[0062] A particularly useful application of these concepts occurs when the network 9 comprises a mesh network, or other network for which multiple paths or multiple channels exist between the coders 2a to 2N. The general concept is to use a sliding window, with or without feedback, over multiple disjoint channels or paths, e.g. LTE + Wi-Fi. To apply multi-path operation, it is important to include provisions that consider different packet latencies, out-of-order delivery, sliding window size, and so on.

[0063] In multi-path operation, the encoder or any recoder can decide to send packets through more than one path to the destination decoder or to an intermediate node. The node initiating multipath operation, typically the source node containing the encoder, is henceforth termed the "multi-path node." The decision over which packets to transmit over which path can be based on information the multi-path node has received or inferred on the various paths. Some multi-path mechanisms include, but are not limited to, the following.

[0064] Multi-path nodes may allocate packets to paths based on path loss or path load determination. Such a determination may be based on state information communicated by another node (e.g., destination, downstream nodes, network management) or inferred by the multi-path node itself from source-destination traffic (e.g., monitoring feedback and deducing lost packets on each path) or some other available metric (e.g., CSI).

[0065] The multi-path node may send systematic (i.e., source, uncoded) packets through the faster or more reliable path to decrease decoding complexity and speed up the connection, as this ensures that more source packets reach the decoder in a shorter time, requiring fewer linear operations for decoding and enabling faster delivery of source packets.

[0066] The multi-path node may use inferred (e.g., CSI) or received information about path latency (e.g. feedback) to refrain from using paths when their associated latency is deemed to cause packets to be useless upon receipt by the destination (i.e., fall outside the decoding window upon receipt).

[0067] Intermediate nodes on different paths may be selective in their transmissions of received packets, in their retransmissions of received packets, or in their transmission of coded/recoded packets. A packet sampling process (i.e., send one packet for every M received packets) can be applied, implementing a target sampling rate (i.e., 1/M here).

[0068] This packet sampling rate can be communicated by another node (e.g., source, destination, network management) or inferred by the multi-path node itself from its position in the transfer (e.g., closer nodes to the destination send at a lower rate), from source-destination traffic, from feedback from neighboring nodes, or from some other available metric.

[0069] Figs. 3, 5, 5A, and 11 are flow diagrams illustrating processing that can be implemented within the system 2 (Fig. 1). Rectangular elements (typified by element 30 in Fig. 3), sometimes denoted herein as "processing blocks," represent computer software instructions or groups of instructions. Diamond-shaped elements (typified by element 40 in Fig. 3), sometimes denoted herein as "decision blocks," represent computer software instructions, or groups of instructions, which affect the execution of the computer software instructions represented by the processing blocks.

[0070] Alternatively, the processing and decision blocks may represent functions performed by functionally equivalent circuits such as a digital signal processor circuit or an application specific integrated circuit (ASIC). The flow diagrams do not depict the syntax of any particular programming language. Rather, the flow diagrams illustrate the functional information one of ordinary skill in the art requires to fabricate circuits or to generate computer software to perform the processing required of a particular apparatus or device. It should be noted that many routine program elements, such as initialization of loops and variables and the use of temporary variables are not shown. It will be appreciated by those of ordinary skill in the art that unless otherwise indicated, the particular sequence of blocks described in the flow diagrams is illustrative only and can be varied without departing from the spirit of the concepts, structures, and techniques described. Thus, unless otherwise stated the blocks described below are unordered meaning that, when possible, the functions represented by the blocks can be performed in any convenient or desirable order.

[0071] Turning now to Fig. 3, an encoding method in accordance with an encoder embodiment (for example, the encoder shown in Fig. 1) begins with an initialization process 30. Illustrative embodiments use this initialization process 30 to reset an encoder sequence number to zero, for example, or to exchange data with a corresponding decoder to determine a round-trip time ("RTT") and thereby predetermine a fixed window size w_e. A person having ordinary skill in the art may appreciate other initialization that may be performed, and conventional methods for doing so.

[0072] The encoding method continues with a receiving process 32 that begins the main encoding loop. In this receiving process 32 the encoder receives a new input or source symbol. For example, the source symbol may be received as data from a computer network, or from an application program using an application programming interface (API) or using other conventional means.

[0073] Then, incrementing process 34 increments the encoder sequence number, and the encoder associates the newly received source symbol with the new sequence number. The association may include storing the source symbol in an input buffer in a memory location according to the sequence number, as described below in more detail. The incrementing process 34 enables the encoder to generate coded symbols.

[0074] Next, a packet generating process 36 generates an uncoded packet by attaching a header containing the current encoder sequence number. In one embodiment, most packets transmitted by the encoder are uncoded; that is, they lack a coefficient vector because they include an uncoded symbol. This embodiment permits relatively high code rates to be achieved. However, in another embodiment even uncoded packets include a coding header having a coefficient vector. In this embodiment, the coefficient vector has a length equal to the finite sliding window size w_e. Because the symbol is uncoded, the coefficient vector contains exactly one non-zero element of the finite field GF(q), for example the value 1. Illustratively, the non-zero element is located at position (ESN mod w_e), where ESN is the value of the encoder sequence number. Conventional means including a buffer or other memory may be used to generate the uncoded packet.
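The coefficient vector of an uncoded packet in this embodiment may be sketched as follows, assuming GF(q) elements are represented as integers (the function name is an assumption):

```python
def uncoded_coefficient_vector(esn: int, w_e: int) -> list:
    # Exactly one non-zero element (the value 1) at position ESN mod w_e;
    # all other positions hold the zero element of GF(q).
    v = [0] * w_e
    v[esn % w_e] = 1
    return v
```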

[0075] The method continues with a queueing process 38 that places the new uncoded packet hence formed in an output buffer. In illustrative embodiments, the method of Fig. 3 is performed asynchronously; that is, new packets are generated as quickly as source symbols are received, rather than waiting for existing packets to be delivered to the data channel. Thus, the queueing process 38 transfers the new uncoded packet to a shared output buffer accessible to another process for eventual delivery to the network. Of course, an alternate embodiment may operate in a synchronous manner, waiting until each packet is confirmed to have been transmitted toward the decoder before proceeding. Such asynchronous transfers may be performed using conventional means.

[0076] In accordance with the embodiment illustrated in Fig. 3, the method continues with a code rate checking process 40 that checks whether the transmission of one or more coded packets is required or desirable. The code rate checking process 40 can be performed by tracking a ratio of uncoded to total packets generated (i.e., code rate) to ensure that a given code rate is achieved. In one embodiment, the code rate checking process 40 returns a YES result when the current encoder sequence number is a multiple of a number N of uncoded packets to be sent for each coded packet, and NO otherwise. In this case, the code rate is calculated to be N / (N+1). Of course, it should be clear how other code rates might be achieved using different tests.
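The test described for this embodiment may be sketched as follows (the function name is an assumption):

```python
def needs_coded_packet(esn: int, n: int) -> bool:
    # YES when the encoder sequence number is a multiple of N, the number
    # of uncoded packets sent for each coded packet. Over every N + 1
    # packets transmitted, N are uncoded, giving a code rate of N / (N + 1).
    return esn % n == 0
```

For example, with N = 2 a coded packet follows the symbols with sequence numbers 2, 4, 6, and so on, i.e. one coded packet for every two uncoded packets, for a code rate of 2/3.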

[0077] If the transmission of a coded packet is not required, the main loop returns to its starting state, waiting for another source symbol in receiving process 32. However, if the transmission of a coded packet is required, the method proceeds to a coded packet generating process 42 that generates a coded packet. In illustrative embodiments, the encoder uses either systematic coding (that is, it includes the latest source symbol in a generated coded symbol) or sends redundant packets from within the FEC window illustrated in Fig. 2. The coded packet generating process 42 therefore includes generating a coefficient vector having one coefficient in GF(q) per source symbol in the sliding window to be coded.

[0078] Since the encoder uses a finite window, the number of packets linearly combined is usually equal to w_e, except in the initial stages of the transmission when there are fewer source symbols in the finite window (that is, when the encoder sequence number is less than w_e). The coded packet generating process 42 then linearly combines the source symbols in the finite sliding window using the generated coefficients and forms the coded packet by adding a coding header to the coded symbol.

[0079] The coding header contains a fixed number w_e of coefficients and a packet sequence number (that may differ from the encoder sequence number). In one embodiment, the coefficient location is reused in a manner that makes the source symbol S associated with each coefficient implicit. This is done by placing the coefficient associated to source symbol S in position (ESN(S) mod w_e), where ESN(S) is the encoder sequence number associated with the source symbol S. Any empty coefficient slots get a zero coefficient. The coefficient vectors hence formed become the rows of the coefficient matrix (R) at the decoder, where the coefficient matrix (R) is the matrix used by the decoder to perform decoding, as detailed below.
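A minimal sketch of this implicit placement rule follows, assuming q = 2 so that coefficients are bits and the linear combination reduces to XOR (a simplification of the general GF(q) case; names are illustrative):

```python
def encode_window(window: dict, w_e: int, symbol_len: int):
    """window maps ESN(S) -> source symbol bytes for the symbols
    currently in the finite sliding window (at most w_e of them)."""
    coeffs = [0] * w_e
    combined = bytearray(symbol_len)
    for esn, sym in window.items():
        coeffs[esn % w_e] = 1            # implicit slot: ESN(S) mod w_e
        for i, b in enumerate(sym):
            combined[i] ^= b             # GF(2) linear combination
    psn = max(window)                    # largest sequence number used
    return psn, coeffs, bytes(combined)
```

Slots for symbols not present in the window are left as zero coefficients, matching the rule above.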

[0080] Once the coded packet has been generated, a queueing process 44 places it in the output buffer. This queueing process 44 is like the queueing process 38 and may be performed in like manner. The method continues by determining whether to transmit an additional coded packet in the code rate checking process 40. If no further coded packet is required, then the method concludes its loop by returning to receiving process 32, in which it awaits the next source symbol.

[0081] Referring now to Fig. 4, an embodiment of the concepts described herein is an encoder 50 having several functional components that include a finite window control module 52, a symbol buffer 54, an encoder sequence number register 56, a random linear network coding ("RLNC") encoder 58, a packet generator 60, and an output buffer 62. The operation of these components is now described.

[0082] The finite window control module 52 is the principal mechanism for controlling the encoder 50. The finite window control module 52 receives source symbols and stores them in a symbol buffer 54. The symbol buffer 54 contains w_e symbols, where w_e is the size of the finite encoder sliding window, and the finite window control module 52 evicts stale symbols as they pass outside the window. This is done to respect the coding window size and control decoding complexity. Feedback may be received from a decoder that was unable to decode a symbol despite using the techniques described herein. When feedback is received, it will usually refer to a symbol having a sequence number that is less than that kept by the encoder. In this case, feedback information for symbols outside of the encoder's window is discarded. The finite window control module 52 also stores and updates the encoder sequence number in an encoder sequence number register 56, as described in connection with Fig. 3.

[0083] The encoder 50 also includes an RLNC encoder 58 to compute the coded symbols found in a coded packet, for example as described above in process 42. For example, the Kodo library published by Steinwurf ApS of Denmark may be used for this purpose under the direction of the finite window control module 52.

[0084] A packet generator 60 creates uncoded and coded packets for transmission to a decoder or recoder. For uncoded packets, the packet generator 60 joins an uncoded symbol obtained from the finite window control module 52 to the associated sequence number stored in the encoder sequence number register 56. For coded packets, the packet generator 60 joins a coded symbol received from the RLNC encoder 58 to a coding header. The coding header includes the coefficient vector used to encode the coded symbol, obtained from the RLNC encoder 58. The coding header also includes a packet sequence number, i.e. the largest sequence number of any input symbol used in the encoding of the coded symbol as a linear combination of input symbols.

[0085] Finally, the encoder 50 includes an output buffer 62 that performs output functions (i.e., transmission, buffering, delivery to a lower protocol layer, and so on). The output buffer 62 receives generated packets from the packet generator 60. It should be appreciated that any or all the functional components in the encoder 50 may be implemented in hardware, firmware, software, or any combination of such technologies.

[0086] Referring now to Fig. 5, a decoding method in accordance with a decoder embodiment (for example, the decoder shown in Fig. 1) begins with an initialization process 91 that initializes a matrix used in decoding, as described in more detail below. As with the encoder initialization process 30, the decoder initialization process 91 also may initialize state variables, including setting a decoder sequence number to zero. Also, the decoder may exchange data with a corresponding encoder to determine a round-trip time ("RTT") and thereby determine the fixed sliding window size w_e prior to exchanging data using the finite sliding window techniques described herein. A person having ordinary skill in the art may appreciate other initialization that may be performed, and conventional methods for doing so.

[0087] The matrix used in decoding has w_d rows that each include a coefficient vector that indicates coefficients used to encode a different coded symbol, where w_d has a fixed value and w_d is greater than or equal to w_e. The first w_e columns of such a matrix form the coefficient matrix (R) and may be reduced to an identity matrix by performing row operations (e.g. multiplying all coefficients in a row by the same number and adding or subtracting rows). As is known in the art, if corresponding row operations are performed on the associated coded symbols, this process results in decoded source symbols.

[0088] Some embodiments carry out row operations using a w_d-by-1 symbol matrix and a separate w_d-by-w_e coefficient matrix (R) whose w_d rows are the coefficient vectors (of length w_e) of corresponding coded symbols. Other embodiments coordinate row operations using an "augmented" w_d-by-(w_e+1) matrix, in which the first w_e columns form the coefficient matrix (R) and comprise the w_e coefficients for each of the w_d symbols, whereas the last column (index w_e+1) comprises the associated symbols. Regardless of its form, this matrix is referred to hereinafter as the "coordinate matrix."

[0089] The decoding method of Fig. 5 continues with a receiving process 92 that begins the main decoding loop. In this receiving process 92 the decoder receives a new packet. For example, the packet may be received as data from a computer network, or from an application program using an application programming interface (API) or using other conventional means.

[0090] Next, the decoding method determines in decision processes 93, 95, and 96 a relationship between the received packet and the decoding state. The decoder must determine, among other things, whether a received packet refers to symbols having sequence numbers that are newer, older, or the same as recently identified symbols. Illustrative embodiments solve this problem by including a packet sequence number ("PSN") in the coding header of each coded packet and maintaining as part of the decoder state a decoder sequence number ("DSN"). In embodiments, the DSN may correspond to a largest PSN received by the decoder and may be compared against the most recently received PSN.

[0091] Thus, in newness decision process 93 the decoder determines whether the PSN of the received packet is larger than the DSN currently maintained by the decoder. This occurs when the received packet refers to a source symbol that was previously unknown to the decoder. If this is the case, then the method proceeds to an update process 94 in which the decoder sequence number is updated to reflect that the received packet had a higher sequence number.

[0092] The update process 94 performs two important tasks, which may be more fully appreciated with reference to Fig. 6A, which illustrates a received coefficient vector 13 having a coefficient index 104 and the coefficient matrix 103 with decoder index 106, where w_d = w_e = 8. The coefficient index 104 is computed as (PSN mod w_e) and the decoder index 106 is computed as (DSN mod w_e). Shaded or cross-hatched boxes in Figs. 6, 6A, and 6B represent non-zero coefficients, and blank or unhatched boxes in these Figures represent zero coefficients.

[0093] Firstly, the update process 94 updates the DSN to equal the received PSN. The decoder must take this step because the received packet refers to a source symbol (having the received PSN) that was not previously known to the decoder. In Fig. 6A this is shown in updated coefficient matrix 108, which has moved the decoder index 106 two columns to the right to match the packet index 104.

[0094] However, increasing the DSN "shifts" the finite sliding window, which covers a range of sequence numbers that depends on the DSN. Thus, the coefficient matrix may now include rows whose coefficients refer to source symbols whose sequence numbers now lie outside the finite sliding window, so the second important task of the update process 94 is to evict from the coefficient matrix any rows of coefficients that refer to obsolete symbols (i.e., rows with a non-zero coefficient in a column associated with an obsolete source symbol). Deleting these rows can reduce the probability of decoding the new symbol, but this effect can be limited by performing a Gaussian elimination at every opportunity, as described below: Gaussian elimination reduces the number of non-zero entries, and thereby the probability that a row must be evicted.

[0095] To illustrate eviction, with reference again to Fig. 6A, the decoder state prior to receiving the new coefficient vector 13 was a coefficient matrix with three rows of coefficients, and a decoder index equal to 2 (e.g. DSN = 138). The first row includes coefficients for the symbols having all eight prior sequence numbers (e.g. sequence numbers 131 to 138, inclusive), so its corresponding coded symbol is a linear combination of all the respective source symbols. However, the second and third rows include non-zero coefficients only for the prior six symbols (e.g. sequence numbers 133 to 138, inclusive). Thus, after the decoder received the new coefficient vector 13 (included in a packet having PSN = 140), the DSN increased by two (e.g. to 140, moving the sliding window to sequence numbers 133 to 140, inclusive). Since row 41a depends on symbols with sequence numbers 131 and 132, which are now outside the sliding window, row 41a is evicted from the coefficient matrix as indicated in the updated coefficient matrix 108. However, since rows 41b and 41c depend only on symbols with sequence numbers 133 to 138, they are not evicted. Once the decoder has updated its state as indicated, the method proceeds to Fig. 5A, which illustrates algebraic decoding.
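The eviction rule can be sketched as follows; each row is assumed to carry the set of sequence numbers its non-zero coefficients refer to (how that set is tracked is an implementation detail not fixed by this description):

```python
def evict_stale_rows(rows, dsn, w_e):
    # The sliding window covers sequence numbers (dsn - w_e, dsn];
    # any row depending on an older symbol is evicted.
    oldest = dsn - w_e + 1
    return [row for row in rows if min(row["esns"]) >= oldest]
```

With the example values above (DSN = 140, w_e = 8), a row depending on sequence numbers 131 to 138 is evicted, while rows depending only on 133 to 138 survive.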

[0096] Returning to newness decision process 93, if the PSN of the received packet is not larger than the DSN, then the method continues to sameness decision process 95, in which the decoder determines whether the PSN of the received coded packet is the same as the DSN. If so, then it is not necessary to update the DSN or the decoder index, and algebraic decoding may commence immediately, so the method proceeds to Fig. 5A as above. This situation is illustrated in Fig. 6.

[0097] If the PSN of the received packet is less than the DSN currently known to the decoder, then the packet may refer to obsolete source symbols, so the method proceeds to a staleness decision process 96, in which the decoder determines whether the received coefficient vector includes non-zero coefficients that refer to source symbols whose sequence numbers lie outside the current sliding window determined by the DSN. Thus, the staleness decision process 96 is like the eviction task of the update process 94, except that the received coefficient vector is not (yet) a row in the coefficient matrix.

[0098] If the received packet includes a coded symbol that is a linear combination of only symbols still in the sliding window, as indicated by the received coefficient vector, then the method may proceed to Fig. 5A as indicated. Otherwise, in discarding process 97 the decoder discards the received coded packet as if it were an erasure and returns to the receiving process 92 to await arrival of the next packet. This situation is illustrated in Fig. 6B. For completeness, note that while the newly received coefficient vector 110 has a PSN 47 that is less than the DSN 112 (as shown by the packet index 47 being to the left of the decoder index 112), nevertheless the coefficient vector 110 has non-zero coefficients only for symbols whose sequence numbers are at most six less than its PSN, and those six sequence numbers are within the eight sequence numbers, no greater than the DSN, that define the sliding window. Therefore, the coefficient vector 110 is not discarded.
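A sketch of this staleness test follows. It relies on the fact that the symbols combined in a packet have sequence numbers in (PSN - w_e, PSN], so the sequence number behind a coefficient column can be recovered from the packet's own PSN (the function name is an assumption):

```python
def is_stale(coeffs, psn, dsn, w_e):
    oldest = dsn - w_e + 1            # oldest sequence number still in window
    for col, value in enumerate(coeffs):
        if value != 0:
            # Recover the ESN behind this column: the unique value in
            # (psn - w_e, psn] congruent to col modulo w_e.
            esn = psn - ((psn % w_e - col) % w_e)
            if esn < oldest:
                return True           # refers to an evicted symbol: discard
    return False
```

This mirrors the Fig. 6B discussion: with DSN = 140 and w_e = 8, a packet with PSN = 138 whose non-zero coefficients refer only to sequence numbers 133 to 138 is kept, since all six lie within the window of sequence numbers 133 to 140.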

[0099] Referring now to Fig. 5A, an algebraic decoding process begins with an adding process 91A, in which the coefficient vector of a received packet is added as a row to the coefficient matrix R to form a temporary coefficient matrix R', as indicated in Figs. 6, 6A, and 6B. The coefficient matrix R' with the new row of coefficients may be stored in a temporary memory to permit linear algebra to be performed without losing the state R of the decoder.

[0100] If the received packet is an uncoded packet, it may lack a coefficient vector. In such a situation, the uncoded symbol contained therein may be represented as a linear combination of (zero times each of the other w_e - 1 symbols) plus (1 times the symbol at the uncoded packet index), with a corresponding coefficient vector. The adding process 91A adds this coefficient vector to the coefficient matrix R to form the temporary coefficient matrix R'. This adding process 91A is performed even for uncoded packets, because their symbols may be used to decode other coded symbols (for example, as described in connection with Figs. 11-14C). Moreover, as the received packet contained an uncoded symbol, the method may send a copy of the uncoded symbol immediately as indicated to delivery process 94A, discussed below.

[0101] The method continues with a Gaussian elimination (row reduction) process 92A, in which the temporary coefficient matrix R' undergoes row reduction with pivoting, as that technique is known in the art. However, unlike the prior art 120 which begins pivoting from the first column (as shown in Fig. 7), an illustrative embodiment 124 begins pivoting from the decoder index column 122 corresponding to the DSN modulo the window size we (as shown in Fig. 7A). Thus, the result of the row reduction process 92A is a coefficient matrix in "shifted" row echelon form.
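The "shifted" row reduction can be sketched as below. The sketch works over GF(2) for brevity (real embodiments operate over a general GF(q)), and its pivot search begins at the decoder index column and wraps around the window; all names are illustrative.

```python
def shifted_row_reduce(rows, decoder_col, w_e):
    """Gaussian elimination over GF(2) whose pivot search starts at
    `decoder_col` (the DSN modulo w_e) and wraps around the window,
    yielding 'shifted' row echelon form rather than ordinary echelon
    form that would begin at column 0."""
    order = [(decoder_col + k) % w_e for k in range(w_e)]
    rows = [row[:] for row in rows]
    pivot = 0
    for col in order:
        for r in range(pivot, len(rows)):
            if rows[r][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                for other in range(len(rows)):  # clear column `col` elsewhere
                    if other != pivot and rows[other][col]:
                        rows[other] = [a ^ b for a, b in zip(rows[other], rows[pivot])]
                pivot += 1
                break
    return rows

R = [[1, 0, 1, 1], [0, 1, 1, 0], [0, 0, 1, 1]]
print(shifted_row_reduce(R, decoder_col=2, w_e=4))
```

After reduction, each pivot column (visited in the wrapped order 2, 3, 0, 1) contains a single 1, which is the shifted analogue of the identity submatrix discussed in paragraph [0102].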

[0102] The method continues to a decoding determination process 93A, in which the decoder determines whether any new source symbols were decoded by the row reduction process 92A. The result is YES when, as shown in Fig. 7A, the row-reduced temporary coefficient matrix R' includes an m-by-m identity submatrix, shifted to the right of the decoder index column, that is larger than a corresponding shifted n-by-n identity submatrix in the original coefficient matrix R. In this case, the row reduction process 92A successfully decoded m-n source symbols in order of their sequence numbers, and the method continues to delivery process 94A.

Otherwise, the newly received coefficient vector did not decode additional source symbols in order, and the method continues to storing process 95A.

[0103] This algorithm provides a substantial efficiency gain over the prior art. The most likely scenario for an embodiment having a high code rate is that a newly received packet is uncoded and has a PSN that is exactly one larger than the current DSN. This entails that update process 94 will move the decoder index one column to the right and evict the first row of coefficients. As can be appreciated from Fig. 7A, performing these two actions preserves the "shifted" row echelon form of the coefficient matrix. Also, if the received packet is uncoded, then the uncoded symbol contained therein likely has the largest sequence number received by the decoder and may be delivered immediately. By contrast, performing Gaussian elimination in the traditional way would result in a coefficient matrix having an "unshifted" row echelon form, and such elimination would have to be performed anew after each newly received packet.

[0104] In the delivery process 94A the decoder delivers newly decoded source symbols to a symbol consumer. Such a consumer may be, for example, a software application, a symbol buffer, or some other device, system, or process. Finally, algebraic decoding concludes with a storing process 95A, in which the "shifted" row reduced coefficient matrix R' is stored as the coefficient matrix R.

[0105] Referring now to Fig. 8, an embodiment of these concepts is a decoder 160. The decoder 160 includes a finite window control module 162, a sequence number register 164, a coding buffer 166, a random linear network code ("RLNC") decoder 168, and a delivery module 170. The operation of these components is now described.

[0106] The finite window control module 162 is the principal mechanism for controlling the decoder 160. The finite window control module 162 receives uncoded and coded packets and maintains a decoder sequence number (DSN) in a decoder sequence number register 164. In various embodiments, it maintains the DSN according to the processes described above in connection with Fig. 5 and especially update process 94.

[0107] The finite window control module 162 stores received symbols, whether uncoded or coded, in a coding buffer 166. The coding buffer 166 contains wd symbols, where wd is the size of the decoder sliding window, and the finite window control module 162 evicts stale symbols as they pass outside the window. This may happen, for example, in accordance with the processes described above in connection with update process 94.
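A minimal sketch of such a bounded coding buffer with eviction of stale symbols; the class and method names are assumptions for the sketch.

```python
from collections import OrderedDict

class CodingBuffer:
    """Holds at most w_d symbols; when the window advances past capacity,
    the stalest symbol is evicted (cf. update process 94). A real decoder
    would also drop the matching rows of the coefficient matrix."""
    def __init__(self, w_d):
        self.w_d = w_d
        self.symbols = OrderedDict()          # sequence number -> symbol

    def store(self, sn, symbol):
        self.symbols[sn] = symbol
        while len(self.symbols) > self.w_d:
            self.symbols.popitem(last=False)  # evict oldest sequence number

buf = CodingBuffer(w_d=3)
for sn in range(5):
    buf.store(sn, b"symbol")
print(list(buf.symbols))   # [2, 3, 4]: symbols 0 and 1 were evicted
```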

[0108] In some embodiments, when the decoder 160 is unable to decode a symbol despite using the techniques described herein, it may provide appropriate feedback to a respective encoder. When feedback is sent, it will refer to an undecoded symbol by sequence number.

[0109] The decoder 160 also includes an RLNC decoder 168 to compute source symbols from the coded symbols found in a coded packet, for example as described above in connection with Fig. 5A. For example, the Kodo library published by Steinwurf ApS of Denmark may be used for this purpose under the direction of the finite window control module 162.

[0110] Finally, the decoder 160 includes a delivery module 170 that performs output functions (i.e., buffering, delivery to a higher protocol layer, and so on). The delivery module 170 receives uncoded source symbols from the coding buffer 166, and decoded source symbols from the RLNC decoder 168. It should be appreciated that any or all the functional components in the decoder 160 may be implemented in hardware, firmware, software, or any combination of such technologies.

[0111] Various embodiments of the invention also perform recoding. Recoding is the operation of creating new coded packets from previously coded and uncoded "source" packets without necessarily decoding the coded packets first. This typically entails (1) applying the linear combination to the payload, (2) computing the new coded coefficients from the coefficients of the source packets, and (3) adding the new coded coefficients to the newly recoded packet. Recoding usually occurs at intermediate nodes to improve connection performance through (1) increasing the diversity of coded packets (i.e., the number of source symbols that are part of the linear combination) or (2) injecting new coded packets to cover for losses in subsequent links. Recoding finds important applications in multipath communications and channel bonding (e.g., LTE + Wi-Fi), mesh networks, and opportunistic routing for serial links.

[0112] In general, a recoder should apply the same filter mechanisms for feedback and source packets. When combining multiple symbols with sparse coefficient vectors, the resulting coefficient vector contains more and more non-zero elements. If those non-zero elements exceed the window size, the packets become useless. To avoid that problem, recoders can resort to the following mechanisms:

[0113] Recoded packets should be sparse, e.g., non-zero coefficients should be in an FEC window. This can be enforced by applying a rule prohibiting recoding among packets with sequence numbers that are too far apart (i.e., the difference exceeds the window size). The window size can be different from the source/destination window size and can be communicated by another node (e.g., source, destination, network management) or inferred by the recoder itself from source-destination traffic or some other available metric.
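The sparsity rule above (prohibit recoding among packets whose sequence numbers differ by more than the window size) can be stated as a small predicate; the function name is an illustrative assumption.

```python
def may_recode(seq_numbers, window_size):
    """Sparsity rule sketched from paragraph [0113]: recoding is allowed
    only if the packets' sequence numbers are close enough that every
    non-zero coefficient of the result stays within one window."""
    return max(seq_numbers) - min(seq_numbers) <= window_size

print(may_recode([40, 42, 45], window_size=8))   # True: spread of 5 fits
print(may_recode([40, 49], window_size=8))       # False: spread of 9 is too far
```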

[0114] Recoders may adjust the rate of recoded packets injected into the stream (i.e., towards the destination) in response to the packet loss rate in the next hop or over the remainder of the network. The loss rate can be communicated by another node (e.g., source, destination, network management) or inferred by the recoder itself from source-destination traffic or some other available information such as Channel State Information (CSI).
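One illustrative injection policy (an assumption for this sketch, not a formula stated in the application) is to add just enough redundancy to compensate the expected erasures on the next hop.

```python
def recoded_packet_rate(loss_rate):
    """Assumed policy: extra recoded packets per source packet so that,
    in expectation, one packet per source packet survives a link with
    the given erasure probability."""
    return loss_rate / (1.0 - loss_rate)

print(round(recoded_packet_rate(0.2), 2))   # 0.25 extra packets per source packet
```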

[0115] In general, a recoder may base its decision to transmit additional packets, and the way it mixes packets, on received or inferred state information about the sliding window, the state of the transfer, as well as channel, link, or next-hop quality. A recoder may also consider decoding complexity when coding packets together, thus refraining from recoding or creating a packet that has reached a target number of native mixed packets, even if all symbols are still within the encoding sliding window. A recoder may inform its upstream neighbors of received/missing/seen symbols/degrees of freedom, thus informing them of its received symbols as well as link quality. Here, seen symbols are source symbols yet to be decoded that are part of a received linear combination, and missing degrees of freedom represent the number of independent coded symbols that are required for the decoding of a given source symbol or generation.

[0116] A recoder may intercept and terminate a request from downstream nodes (e.g. destination) for additional packets or degrees of freedom if it is able to satisfy that request through the transmission of a recoded, coded, or source symbol.

Recoders may decode opportunistically and on the fly to make their stored symbols sparser; this is only possible if recoders receive sufficient packets. Recoders may also allow 'easy' recoding without partial decoding by combining incoming subsequent packets while the FEC window remains constant or is marginally increased. Recoding may allow adapting to the next link's required coding rate, which may be different from previous link coding rates (both higher and lower).

[0117] Thus, referring now to Fig. 9, a recoding method in accordance with a recoding embodiment (such as the recoder shown in Fig. 1) includes decoding processes 130 to 142 illustrated in Fig. 5, decoding processes 144 to 152 illustrated in Fig. 5A, and encoding processes 154 to 158 illustrated in Fig. 3. In detail, the processes 130, 132, 134, 136, 138, 140, and 142 of Fig. 9 correspond respectively to the initialization process 91, receiving process 92, sameness decision process 95, newness decision process 93, update process 94, staleness decision process 96, and discarding process 97 of Fig. 5. These processes 130 to 142 therefore collectively perform the decoding processes of Fig. 5, in the same order, with the non-functional difference of making the sameness decision 134 before the newness decision 136 instead of after it. The only functional difference is that after the discarding process 142, the method of Fig. 9 does not return to process 132 in direct correspondence with the method of Figs. 5 and 5A, but instead proceeds to encoding processes 154 to 158.

[0118] Also, the processes 144, 146, 148, 150, and 152 of Fig. 9 correspond respectively to the adding process 91A, row reduction process 92A, decoding determination process 93A, delivery process 94A, and storing process 95A. These processes 144 to 152 therefore collectively perform the decoding processes of Fig. 5A, in the same order, with the only functional differences being that the determination process 148 and delivery process 150 are performed optionally in some decode-and-forward embodiments (as indicated by their appearances in dashed outline), and that the storing process 152 does not return to process 132 in direct correspondence with the method of Figs. 5 and 5A but instead proceeds to encoding processes 154 to 158.

[0119] The processes 154 to 158 of Fig. 9 correspond respectively to the code rate checking process 40, coded packet generating process 42, and queueing process 44 of Fig. 3. These processes 154 to 158 therefore collectively perform the coded packet generation processes of Fig. 3 (but not the uncoded packet generation processes), in the same order, with no functional differences.

[0120] Referring now to Fig. 10, an embodiment of the concepts described herein is a recoder 180 that includes a finite window control module 182, a sequence number register 184, a coding buffer 186, a random linear network code ("RLNC") recoder 188, and a delivery module 190. The operation of these components is now described.

[0121] The finite window control module 182 is the principal mechanism for controlling the recoder 180. The finite window control module 182 receives uncoded and coded packets and maintains a recoder sequence number (RSN) in a recoder sequence number register 184. In various embodiments, it maintains the RSN according to the processes described above in connection with Fig. 5 and especially update process 94 for updating a decoder sequence number (DSN).

[0122] The finite window control module 182 stores received symbols, whether uncoded or coded, in a coding buffer 186. The coding buffer 186 contains up to wd symbols, where wd is the size of the (partial) decoder sliding window, and the finite window control module 182 evicts stale symbols as they pass outside the window. This may happen, for example, in accordance with the processes described above in connection with update process 94.

[0123] The recoder 180 also includes an RLNC recoder 188 to row reduce the coded symbols found in respective coded packets (for example as described above in connection with Fig. 5A) and to generate and forward recoded packets as described above in connection with Fig. 3. For example, the Kodo library published by Steinwurf ApS of Denmark may be used for this purpose under the direction of the finite window control module 182.

[0124] In accordance with some embodiments, the RLNC recoder 188 does not perform partial decoding at all, but instead performs cumulative network encoding by transmitting coded packets whose successive coded symbols are linear combinations of increasingly many input symbols. Thus, the RLNC recoder 188 may generate a new coded symbol Sn+1 from a previous coded symbol Sn, a coefficient C, and a newly received input symbol S using the iterative formula Sn+1 = Sn + C · S, where the addition and multiplication are performed in the finite field GF(q). In this way, the recoder 180 may increase or decrease the number of input symbols used to form a coded symbol; that is, the finite sliding window size we. This change advantageously permits the recoder 180 to forward packets between data channels for which different channel characteristics require different code rates.
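The iterative formula above can be sketched byte-wise over GF(2^8). The application only specifies GF(q); the particular reduction polynomial below (the AES polynomial) is an assumption for the sketch, and addition in a binary extension field is XOR.

```python
def gf256_mul(a, b):
    """Multiply in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (an assumed,
    commonly used reduction polynomial)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B          # reduce modulo the field polynomial
        b >>= 1
    return p

def recode_step(s_prev, coeff, s_new):
    """One cumulative recoding step, Sn+1 = Sn + C * S, applied to each
    byte of the symbol payloads."""
    return bytes(p ^ gf256_mul(coeff, s) for p, s in zip(s_prev, s_new))

s = bytes(4)                                   # start from the zero symbol
for coeff, symbol in [(3, b"\x01\x02\x03\x04"), (7, b"\x10\x20\x30\x40")]:
    s = recode_step(s, coeff, symbol)          # fold in one more input symbol
```

Each call to `recode_step` widens the linear combination by one input symbol, which is how the recoder grows (or, by restarting, shrinks) the effective window size we.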

[0125] Finally, the recoder 180 includes a delivery module 190 that performs output functions (i.e., buffering, delivery to a higher protocol layer, and so on). The delivery module 190 receives recoded packets from the RLNC recoder 188 and forwards them to a downstream coder or node. It should be appreciated that any or all the functional components in the recoder 180 may be implemented in hardware, firmware, software, or any combination of such technologies.

[0126] Referring now to Figs. 11, 11A, and 11B, a comparison is drawn between packet contents in the prior art techniques of plain systematic block codes (Fig. 11) and infinite sliding window (Fig. 11A), and the finite sliding window technique described herein (Fig. 11B). Each such figure indicates transmitted (coded) packets in rows by order of transmission. Each row indicates by dots which source symbols (or source packets) are encoded by linear combination in the respective coded packet. Moreover, each figure provides for the transmission of two uncoded symbols for each (redundant) coded symbol, and therefore illustrates a code having a code rate of 2/3.

[0127] The systematic block code of Fig. 11 shows three generations 190a, 190b, 190c of symbols. The first four transmitted packets are uncoded, and include source symbols numbered 1, 2, 3, and 4 in order. The fifth and sixth packets are coded and contain linear combinations of these source symbols, which may be used to recover the source symbols in the presence of at most two erasures from among any of the six transmitted packets. Such recovery is discussed below in connection with Fig. 12. Having transmitted the first generation of symbols numbered 1 to 4 in the first six packets, the next six packets transmit a second generation of symbols numbered 5 to 8, the next six packets transmit a third generation of symbols numbered 9 to 12, and so on.

[0128] The infinite sliding window code 68 of Fig. 11A repeats a pattern of two uncoded packets followed by a packet containing a coded symbol that is a linear combination of all (or some dynamic subset of all) preceding source symbols. Thus, the first two transmitted packets are uncoded and include source symbols numbered 1 and 2. The third packet is coded and contains a linear combination of all preceding source symbols, numbered 1 to 2. The next two packets are uncoded and include source symbols numbered 3 and 4. The sixth packet is coded and contains a linear combination of all preceding source symbols, numbered 1 to 4, and so on.

[0129] Referring now to Fig. 11B, the basic concept of the sliding window approach is described. Feedback and multipath are not considered yet; only a single link is considered. The FEC window size is omitted, as FEC is more useful when packets are sent over different paths and reordering is expected, or when feedback is available. Thus, coding occurs over the whole generation.

[0130] In accordance with embodiments of the concepts and techniques described herein, the finite sliding window code of Fig. 11B repeats a pattern of two uncoded packets followed by a packet containing a coded symbol that is a linear combination of at most a fixed window number we of preceding source symbols. The finite sliding window code of Fig. 11B combines the simplicity and bounded storage of systematic block codes (which infinite sliding window codes lack) with the reduced decoding latency of infinite sliding window codes (relative to systematic block codes).
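The rate-2/3 pattern of Fig. 11B can be sketched as a schedule generator; the function name and return shape are assumptions for the sketch.

```python
def finite_window_schedule(num_source, w_e, uncoded_per_coded=2):
    """Repeat `uncoded_per_coded` uncoded packets followed by one coded
    packet combining at most the last w_e source symbols (code rate 2/3
    with the default setting, as in Fig. 11B)."""
    schedule = []
    for sym in range(1, num_source + 1):
        schedule.append(("uncoded", [sym]))
        if sym % uncoded_per_coded == 0:
            lo = max(1, sym - w_e + 1)               # window never exceeds w_e
            schedule.append(("coded", list(range(lo, sym + 1))))
    return schedule

for kind, symbols in finite_window_schedule(num_source=6, w_e=4):
    print(kind, symbols)   # every third packet codes over at most w_e symbols
```

Unlike the infinite sliding window of Fig. 11A, the coded packets here never combine more than w_e symbols, which is what bounds decoder state.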

[0131] Fig. 12 illustrates source symbol decoding using a prior art selective-automatic-repeat-request (S-ARQ) technique with a systematic block code having generations as in Fig. 11. In this exemplary figure, 24 packets were transmitted by an encoder, but only 15 packets were received by a decoder due to packet erasures, indicated by rows whose dots are struck through. Source symbols are provided in generations of three.

[0132] Fig. 12 may be understood as follows. Uncoded packet 0 containing symbol 1 was received by the decoder, permitting delivery in order of symbol 1 to the local coder for further processing (e.g., at a higher protocol layer). Uncoded packet 1 containing symbol 2 was received, permitting delivery in order of symbol 2. Uncoded packet 2 containing symbol 3 was erased, as was coded packet 3 containing a coded symbol. Thus, source symbol 3 could not be delivered in order.

[0133] In the second generation, uncoded packet 4 containing symbol 4 was received, but symbol 4 could not be delivered in order because symbol 3 was pending delivery. Likewise, uncoded packet 5 (symbol 5) was erased, uncoded packet 6 (symbol 6) was received, and coded packet 7 was received, permitting recovery of the erased symbol 5.

[0134] However, none of the decoded symbols 110 could be delivered in order because symbol 3 was still pending delivery. Therefore, the decoder requested at the first feedback juncture (indicated by the horizontal line after packet 7) that the encoder transmit a new, linearly-independent, coded symbol for the first generation.

[0135] For the third generation, the encoder transmitted this coded symbol in coded packet 8, but this packet was also erased. The encoder proceeded to transmit uncoded packets 9, 10, and 11 (source symbols 7, 8, and 9; all erased), and coded packet 12, which was received. As coded packet 8 was erased, at the second feedback juncture the decoder again requested transmission of a coded symbol for the first generation.

[0136] For the fourth generation, the decoder finally received its needed coded symbol in coded packet 13. Receipt of coded packet 13 permitted recovery (and delivery) of symbol 3. As source symbols 4, 5, and 6 were already decoded, they were also then delivered. The decoder then received uncoded packets 14, 15, and 16 respectively containing symbols 10, 11, and 12. However, these symbols could not be delivered, pending delivery of erased symbols 7, 8, and 9. Since the decoder obtained one coded symbol in packet 12 out of a generation of three, it requested 3-1=2 more linearly independent coded symbols for the third generation.

[0137] For the fifth generation, the decoder received its needed coded symbols for generation three in coded packets 18 and 19. Given the three linearly-independent coded symbols of packets 12, 18, and 19, Gaussian elimination (as discussed below) permitted the decoder to recover the three source symbols 7, 8, and 9, which were immediately delivered with previously obtained symbols 10, 11, and 12. Finally, the decoder was able to immediately deliver source symbols 13, 14, and 15 after receiving corresponding uncoded packets 20, 21, and 22. The erasure of coded packet 23 did not affect delivery of these symbols.

[0138] Fig. 13 illustrates source symbol decoding using a modified S-ARQ technique in accordance with an embodiment of the finite sliding window concepts described herein. The decoder processes the packet sequence using both the encoder's sliding window and a smaller sliding FEC window contained therein, as shown in Fig. 2. In the exemplary packet sequence of Fig. 13, an FEC window of six symbols (i.e., two generations) is used to provide forward error correction. The FEC window design shown in Fig. 1 allows for feedback to be received by the encoder while the sliding window still allows for retransmissions, because the requested packets are still in the encoder's sliding window.

[0139] A decoder processing the packet sequence of Fig. 13 receives uncoded packets 0, 1, 4, and 6, and coded packet 7 (containing a linear combination of symbols 1 to 6), while packets 2, 3, and 5 were erased. Thus, source symbols 1 and 2 were delivered, pending symbols 3 and 5 were unknown, and pending symbols 4 and 6 were known but undeliverable. At the first feedback juncture after the second generation, the decoder had five packets for the six symbols in the first two generations. It thus requested transmission of 6-5=1 additional packet containing a linear combination of the unknown symbols from the first generation (i.e., the source symbol 3).

[0140] For the third generation, uncoded packet 8 containing symbol 3 was erased, as were uncoded packets 9, 10, and 11, while packet 12 containing a linear combination of symbols 4 to 9 was received. Thus, pending symbols 3, 5, 7, 8, and 9 were unknown and pending symbols 4 and 6 remained known but undeliverable. As the decoder still had only five packets for the six symbols in the first two generations, it requested transmission of 6-5=1 additional packet containing a linear combination of the unknown symbols from the first two generations (i.e., the symbols 3 and 5).

[0141] For the fourth generation, the decoder received coded packet 13. Combined with received packets 0, 1, 4, 6, and 7, coded packet 13 made six packets (wd = 6) associated with six source symbols (we = 6). Since wd ≥ we, according to basic linear algebra the linear combinations in these packets yielded a system of equations that could be solved to recover the source symbols 1 to 6. Thus, the unknown pending symbols numbered 3 and 5 were recovered and delivered with symbols numbered 4 and 6. Pending symbols 7, 8, and 9 were unknown and pending symbols 10, 11, and 12 were known but undeliverable.
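The solvability condition above (we independent combinations of we unknowns) can be illustrated with a rank computation over GF(2); real embodiments use GF(q), and the bitmask row representation is an assumption for the sketch.

```python
def gf2_rank(rows):
    """Rank of a set of coefficient vectors over GF(2), each vector an
    integer bitmask. The w_e source symbols are recoverable exactly when
    the received packets supply w_e linearly independent combinations."""
    basis, rank = [], 0
    for row in rows:
        for b in basis:
            row = min(row, row ^ b)   # reduce against the running basis
        if row:
            basis.append(row)
            rank += 1
    return rank

print(gf2_rank([0b110, 0b011, 0b101]))   # 2: third row = XOR of first two
print(gf2_rank([0b100, 0b010, 0b001]))   # 3: independent, system is solvable
```

Having wd ≥ we packets is necessary but not sufficient: it is the rank, i.e. the number of linearly independent combinations, that must reach we.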

[0142] After receiving uncoded packets 14 to 16 but not coded packet 17, the decoder had four packets (numbered 12, 14, 15, and 16) for the six symbols in generations three and four. It therefore requested transmission of 6-4=2 additional packets containing (linearly independent) linear combinations of the unknown symbols from these generations (i.e., the symbols 7, 8, and 9). Note that coded packet 12 counted toward generation three, because its coded symbol spanned generations two and three but all source symbols in generation two were known and could be subtracted from the linear combination.
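The degrees-of-freedom bookkeeping in these walk-throughs reduces to a simple helper (a trivial but illustrative sketch; the name is an assumption):

```python
def retransmissions_needed(symbols_outstanding, packets_held):
    """Feedback rule used in the Figs. 12-13 walk-throughs: request as
    many additional linearly independent coded packets as there are
    missing degrees of freedom."""
    return max(0, symbols_outstanding - packets_held)

print(retransmissions_needed(6, 4))   # 2, as requested after packet 17
print(retransmissions_needed(6, 6))   # 0: the system is already solvable
```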

[0143] For the fifth generation, the decoder received coded packets 18 and 19, rounding out the six packets needed to decode the six source symbols in generations three and four. It thereby recovered unknown symbols 7, 8, and 9, which it delivered with known symbols 10, 11, and 12. Finally, it delivered source symbols 13, 14, and 15 immediately upon receiving the respective uncoded packets 20, 21, and 22.

[0144] Note that the embodiment of Fig. 13 is generational, whereas the embodiments of Figs. 6 and 7 may be generation-less, demonstrating that the finite sliding window techniques described herein may be applied in either case. Also note that after each generation, feedback is provided regarding only symbols that are part of the finite sliding window. For example, packet 12 is a linear combination of source symbols 4-9, the last six symbols of the sliding window (that is, the symbols inside the FEC window). Upon receiving feedback, the encoder sends packet 13, a linear combination of missing packets 3 and 5. In this case, although symbol 3 is outside the FEC window, it is still within the encoder's sliding window, hence it can be included in packet 13.

[0145] Embodiments according to Fig. 13 differ from, and have advantages over, the prior art erasure coding of Fig. 12. The packet sequence of Fig. 13 erases the same numbered packets as the sequence of Fig. 12, but the contents of each packet are different. Note also that Figs. 12 and 13 both indicate delivery of source symbol 1 after packet 0, symbol 2 after packet 1, symbols 3 to 6 after packet 13, and so on, and the decoders of Figs. 12 and 13 both have similar complexity and storage bounds. However, the decoder of Fig. 13 advantageously executes faster because it has the decoding delay of a sliding window code rather than a systematic block code.

[0146] Figs. 14, 14A, 14B, and 14C show four different applications of the feedback mechanism with the finite sliding window concepts described herein, where the decoding window size is wd = 5 and the decoding coefficient matrix contains at least two received coefficient vectors (1, a, b, 0, 0) and (0, 0, 0, 1, c). In each figure, the five source symbols indicated by column labels a, b, c, d, and e correspond to symbols carried by packets p1, p2, p3, p4, and p5, respectively, and the horizontal line indicates a feedback request, as in Figs. 12 and 13. Figs. 14A and 14C illustrate applications for which lower latency and more flexibility, in the form of throughput/efficiency vs. delay tradeoffs, are expected.

[0147] In Fig. 14, ARQ is applied at a higher layer. Since none of the source symbols were delivered to the ARQ layer, the decoder feedback requests all five source packets, hence the retransmission of p1, p2, p3, p4, and p5. Figs. 14A, 14B, and 14C show more efficient combinations of feedback with the finite sliding window concepts described herein, where the feedback is integrated with the coding layer.

[0148] In Fig. 14A, instead of requesting all source symbols, the decoder feedback process requests three fully encoded packets (i.e., three packets where all symbols in the sliding window are combined). The three received packets enable the decoding of all source symbols upon reception of the third coded packet, because the decoder has five packets that contain five linearly independent combinations of these five source symbols.

[0149] Fig. 14B shows a configuration where retransmitted packets are systematic (i.e., uncoded). Unlike Fig. 14, the retransmitted packets in Fig. 14B can be used by the decoder. Hence, the decoder can deliver p3 along with p1 after decoding its first stored packet (1, a, b, 0, 0). However, redundant packets are delivered after the third packet is received.

[0150] Fig. 14C shows an optimal configuration where the decoder feedback requests systematic (i.e., uncoded) packets but is aware of the packets present in the decoder coefficient matrix. Hence, only packets p1, p2, and p4 are requested, allowing for delivery of all packets by the third received retransmission. While both the schemes in Figs. 14A and 14B can deliver all packets upon the third received retransmission, the scheme in Fig. 14C performs better, as it delivers some of the packets earlier (e.g., p1, p2, and p3) and requires fewer decoding operations due to the systematic nature of the retransmissions.

[0151] Thus, it may be appreciated that the finite window concept described herein allows for a systematic code which inserts redundant or coded packets from time to time, enabling minimal delay. Especially in the feedback case, the decoder may request packets that increase the probability of decoding if the encoder sends them systematically. Of course, the encoder can additionally send redundant or coded packets, as depicted in Fig. 14A.

[0152] The techniques and structures described herein may be implemented in any of a variety of different forms. For example, features of the concepts, systems, and techniques described herein may be embodied within various forms of communication devices, both wired and wireless; television sets; set top boxes; audio/video devices; laptop, palmtop, desktop, and tablet computers with or without wireless capability; personal digital assistants (PDAs); telephones; pagers; satellite communicators; cameras having communication capability; network interface cards (NICs) and other network interface structures; base stations; access points; integrated circuits; as instructions and/or data structures stored on machine readable media; and/or in other formats. Examples of different types of machine readable media that may be used include floppy diskettes, hard disks, optical disks, compact disc read only memories (CD-ROMs), digital video disks (DVDs), Blu-ray disks, magneto-optical disks, read only memories (ROMs), random access memories (RAMs), erasable programmable ROMs (EPROMs), electrically erasable programmable ROMs (EEPROMs), magnetic or optical cards, flash memory, and/or other types of media suitable for storing electronic instructions or data.

[0153] In the foregoing detailed description, various features of the concepts, systems, and techniques described herein are grouped together in one or more individual embodiments for streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed concepts, systems, and techniques described herein require more features than are expressly recited in each claim. Rather, inventive aspects may lie in less than all features of each disclosed embodiment.

[0154] Having described implementations which serve to illustrate various concepts, structures, and techniques which are the subject of this disclosure, it will now become apparent to those of ordinary skill in the art that other implementations incorporating these concepts, structures, and techniques may be used. Accordingly, it is submitted that the scope of the patent should not be limited to the described implementations but rather should be limited only by the spirit and scope of the following claims.