
Patent Searching and Data


Title:
TELECOMMUNICATIONS METHOD
Document Type and Number:
WIPO Patent Application WO/2020/165593
Kind Code:
A1
Abstract:
There is provided a method of telecommunications comprising the following steps: receiving (13) an encoded data block having a plurality of values; dividing (15) the received encoded data block into a plurality of received segments, each received segment comprising at least two of the values; decoding (17) each received segment by providing, for each received segment, a plurality of estimated encoded sequences, each estimated encoded sequence comprising at least two data units; merging (19) estimated encoded sequences for consecutive segments to provide a plurality of candidate sequences; and selecting (23) one of the plurality of candidate sequences by performing (21) a closest fit calculation between the received encoded data block and each of the candidate sequences. The method is suitable for use in software-defined radios.

Inventors:
WANG JINFEI (GB)
DODGSON TERENCE (GB)
MA YI (GB)
XUE SONGYAN (GB)
Application Number:
PCT/GB2020/050339
Publication Date:
August 20, 2020
Filing Date:
February 13, 2020
Assignee:
AIRBUS DEFENCE & SPACE LTD (GB)
International Classes:
H04L1/00; H03M13/00; H03M13/23; H03M13/39
Other References:
LIN H-D ET AL: "Algorithms and architectures for concurrent Viterbi decoding", 11 June 1989 (1989-06-11), pages 836 - 840, XP010081184
CHENGYI SU ET AL: "A block-based parallel decoding architecture for convolutional codes", COMMUNICATIONS AND NETWORKING IN CHINA (CHINACOM), 2010 5TH INTERNATIONAL ICST CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 25 August 2010 (2010-08-25), pages 1 - 4, XP031847074, ISBN: 9739639799974
WANG JINFEI ET AL: "Parallel Decoding for Non-recursive Convolutional Codes and Its Enhancement Through Artificial Neural Networks", 2019 IEEE 20TH INTERNATIONAL WORKSHOP ON SIGNAL PROCESSING ADVANCES IN WIRELESS COMMUNICATIONS (SPAWC), IEEE, 2 July 2019 (2019-07-02), pages 1 - 5, XP033605862, DOI: 10.1109/SPAWC.2019.8815549
Attorney, Agent or Firm:
ABEL & IMRAY (GB)
Claims:
Claims

1. A method of telecommunications, the method comprising the following steps:

• receiving an encoded data block having a plurality of values;

• dividing the received encoded data block into a plurality of received segments, each received segment comprising at least two of the values;

• decoding each received segment by providing, for each received segment, a plurality of estimated encoded sequences, each estimated encoded sequence comprising at least two data units;

• merging estimated encoded sequences for consecutive segments to provide a plurality of candidate sequences;

• selecting one of the plurality of candidate sequences by performing a closest fit calculation between the received encoded data block and each of the candidate sequences.

2. A method according to claim 1 wherein the step of providing a plurality of estimated encoded data sequences comprises:

• providing, for each received segment, a plurality of estimated input sequence segments, each estimated input sequence segment comprising at least two data units, and calculating the plurality of estimated encoded data sequences by applying a coding model to the plurality of estimated input sequence segments.

3. A method according to claim 1 or 2 wherein decoding each received segment comprises decoding each received segment in parallel.

4. A method according to claim 1 or 2 wherein decoding each received segment comprises decoding each received segment in series.

5. A method according to any preceding claim wherein providing, for each received segment, a plurality of estimated encoded sequences, comprises for each received segment, performing a closest fit calculation.

6. A method according to any preceding claim wherein merging comprises for each received segment in turn: merging estimated encoded sequences for the received segment with estimated encoded data sequences for a consecutive received segment to provide a plurality of merged sequences, and then performing a closest fit calculation between the merged sequences and the corresponding received segments combined, and then merging the merged sequences with estimated encoded sequences for a further subsequent received segment, until a plurality of candidate sequences is provided each having a length equal to the length of the received encoded data block.

7. A method according to any preceding claim wherein at least some of the steps of the method are performed using an artificial neural network.

8. A method according to any preceding claim when dependent on claim 2 wherein merging comprises:

for each estimated encoded data sequence:

(i) identifying an initial residual data unit combination representing the data units that would be residual in a coder memory when the first data unit of the corresponding estimated input sequence segment enters the coder, and

(ii) identifying a final residual bit combination representing the data units that would be residual in the coder memory when the first data unit of the subsequent estimated input sequence segment enters the coder, and

(iii) matching the final residual bit combinations corresponding to the estimated encoded data sequences for one received segment, to the initial residual bit combinations corresponding to the estimated encoded data sequences for the subsequent received segment.

9. A method according to any preceding claim further comprising the steps of:

• encoding a data block having a plurality of bits, using a convolutional coder to provide the encoded data block;

• transmitting the encoded data block.

10. A computer program configured to execute the steps of the method according to any of claims 1-9.

11. A software-defined radio comprising:

an antenna;

an analogue to digital converter;

a processor;

a memory;

wherein the processor is configured to run the computer program of claim 10.

Description:
Telecommunications method

Field of the Invention

The present invention concerns a telecommunications method. In particular, but not exclusively, the invention concerns such a method for reducing errors in telecommunications, for example for use in software-defined radios.

Background of the Invention

Software-defined radios are capable of receiving and transmitting a wide variety of different waveforms (i.e. radio protocols), since they may be programmed to operate in a number of different configurations according to the particular functions required. The versatile functionality provided by software-defined radios is useful in a number of applications, including in military operations encompassing numerous communications schemes, and in mobile phone communications where there are frequent upgrades. Ideally, software-defined radio systems would be capable of processing any type of waveform through simply adapting the software.

There is a desire to use channel coding (also known as forward-error correction) techniques in software-defined radio communications to reduce errors in transmission. Channel coding may be particularly useful in situations where communication channels are unreliable or noisy, and/or where re-transmission would be expensive or impractical. Channel coding typically uses an error-correction code (ECC) to encode the data to be transmitted. During encoding, redundant data is introduced which enables the receiver to correct errors so that re-transmission of data is not required. This is at the expense of higher forward channel bandwidth. A common type of error-correction code is a convolutional code. Typically, in the prior art, a Viterbi algorithm is used to decode convolutional codes.

Viterbi decoding is a form of soft decoding which determines the most likely path (the "Viterbi path") to the observed output; however, Viterbi decoding is relatively slow, consumes significant memory, and has not to date provided the desired flexibility in processing different waveform types. An improved decoding scheme for convolutionally encoded data is required.

Summary of the Invention

In a first aspect, the invention provides a method of telecommunications, the method comprising the following steps: receiving an encoded data block having a plurality of values; dividing the received encoded data block into a plurality of received segments, each received segment comprising at least two of the values; decoding each received segment by providing, for each received segment, a plurality of estimated encoded sequences, each estimated encoded sequence comprising at least two data units; merging estimated encoded sequences for consecutive segments to provide a plurality of candidate sequences; selecting one of the plurality of candidate sequences by performing a closest fit calculation between the received encoded data block and each of the candidate sequences.
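By way of illustration only, the sequence of steps recited above may be sketched in Python as follows. The helper names (`decode_block`, `estimate_fn`, `merge_fn`, `closest_fit`) are hypothetical, and the per-segment estimation and merging strategies are supplied by the caller; this is a non-authoritative sketch, not the claimed implementation:

```python
def closest_fit(received, candidates):
    """Return the candidate with minimum squared Euclidean
    distance to the received values (the 'closest fit')."""
    return min(candidates,
               key=lambda c: sum((r - v) ** 2 for r, v in zip(received, c)))

def decode_block(received, n_segments, estimate_fn, merge_fn):
    """Sketch of the claimed pipeline: divide the received block into
    segments, estimate encoded sequences per segment, merge estimates
    across consecutive segments, then select the closest fit."""
    seg_len = len(received) // n_segments
    segments = [received[i * seg_len:(i + 1) * seg_len]
                for i in range(n_segments)]
    estimates = [estimate_fn(seg) for seg in segments]   # per-segment decoding
    candidates = merge_fn(estimates)                     # merge consecutive segments
    return closest_fit(received, candidates)
```

A caller would supply `estimate_fn` (for example, a look-up-table or neural-network estimator) and `merge_fn` (for example, residual-bit matching, as described later).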

The hierarchy of channel codecs may be divided into three groups: block codecs, simple convolutional codecs, and complex, multi-coder convolutional codecs. The present invention relates to codecs which fall into the second category of simple convolutional codecs.

The method may be a computer-implemented method. For example, the method may be undertaken on a processor of a software-defined radio. Receiving may comprise receiving on an antenna. Receiving may comprise receiving through a wired connection. There may be tens of values. There may be hundreds of values. There may be thousands, hundreds of thousands, or more than a million values. The received encoded data block may contain errors. The number of received segments may be selected to be a power of two. Providing, for each received segment, a plurality of estimated encoded sequences may comprise calculating, for each received segment, a plurality of estimated encoded sequences.

There may be more than one way of undertaking the calculating, for example, by generating look-up tables or by using a neural network. The data units may comprise bits, for example binary bits having value 0 or 1.

Merging may comprise joining end-to-end. The closest-fit calculation may include minimisation of a metric, for example Euclidean distance.
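As a minimal illustration of the metric mentioned above, and assuming Euclidean distance is the metric chosen, the comparison could be computed as follows (the function name is illustrative only):

```python
import math

def euclidean(a, b):
    # Euclidean distance between two equal-length value sequences
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
```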

The inventors have recognised that, by dividing the received data block into segments, the received segments may be decoded simultaneously, in parallel, thus providing a considerable increase in processing speed over decoding methods of the prior art. The decoded segments may also be merged in parallel, providing a further reduction in latency.

In a further advantage, the inventors have recognised that segmenting the received data block provides for the possibility of processing with an artificial neural network, which has not previously been realisable due to incompatibility between Viterbi decoding schemes and artificial neural networks.

For example, the step of decoding each received segment by providing, for each received segment, a plurality of estimated encoded sequences may be undertaken using an artificial neural network. Training may be provided, for example, by supplying the neural network with noisy received segments together with the coder configuration and the known corresponding input bit sequences. After sufficient training, the neural network should then be able to correctly identify, for each received segment, the most likely input bit combination and corresponding estimated encoded data sequence. A typical (but not necessarily exclusive) architecture for the artificial neural network could be an Adaptive Neural Network based on the Multi-Layer Perceptron architecture, which uses forward propagation to generate outputs that are then refined using back propagation through the network. Back propagation would typically involve a Stochastic Gradient Descent process, followed by an Adam optimization process.
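A rough sketch of such a network's forward propagation is given below. The layer sizes, ReLU activations and random weights are assumptions for illustration only, and the training loop (back propagation with Stochastic Gradient Descent followed by an Adam-style optimiser, as suggested above) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(x, weights, biases):
    """Forward propagation through a small multi-layer perceptron.
    Training is omitted from this sketch."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(0.0, h @ W + b)        # ReLU hidden layer (assumed)
    return h @ weights[-1] + biases[-1]       # one score per candidate

# Hypothetical shapes: 8 received values in, 16 possible 4-bit
# input combinations out (one score per candidate sequence).
weights = [rng.standard_normal((8, 32)), rng.standard_normal((32, 16))]
biases = [np.zeros(32), np.zeros(16)]
scores = mlp_forward(rng.standard_normal(8), weights, biases)
```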

Using an artificial neural network in this manner may significantly increase processing speed (i.e. reduce latency) compared to decoding techniques of the prior art. For example, the processing speed may be of the order of 20 times, or more, faster.

In a further advantage, the method of the invention lends itself to codec update, and may be more readily deployed in a software-defined radio than decoding schemes of the prior art. When deployed in a software-defined radio, use of an artificial neural network to perform the present invention may enable easy adaptation to multiple different waveforms, through updating of the neural network weights. When the present invention is deployed in a software-defined radio, the radio may be more easily, cheaply and quickly adapted to individual user needs, or altered remedially in response to external events. The present technique provides a radical improvement in the ease and speed of tailoring and remedial alteration.

The step of providing a plurality of estimated encoded data sequences may comprise providing, for each received segment, a plurality of estimated input sequence segments, each estimated input sequence segment comprising at least two data units, and calculating the plurality of estimated encoded data sequences by applying a coding model to the plurality of estimated input sequence segments.

The coding model may be a convolutional coder, for example a simple convolutional coder. The convolutional coder may comprise a shift register with length three and two summation arms. Other convolutional coders fall within the scope of the invention, for example having a different length of shift register, a different number of summation arms and/or a different number of tap-off points.

Decoding each received segment may comprise decoding each received segment in parallel. Decoding in parallel may be undertaken, for example, through a method involving generation of look-up tables or through use of a neural network.

Decoding each received segment may alternatively comprise decoding each received segment in series.

Providing, for each received segment, a plurality of estimated encoded sequences, may comprise for each received segment, performing a closest fit calculation. The closest fit calculation may be between the estimated encoded sequences or sequences related to the estimated encoded sequences (for example converted from the estimated encoded sequences), and the received segment.

The closest fit calculation may include a minimisation of a metric, for example a Euclidean distance.

Merging may comprise, for each received segment in turn: merging estimated encoded sequences for the received segment with estimated encoded sequences for a consecutive received segment to provide a plurality of merged sequences, and then performing a closest fit calculation between the merged sequences and the corresponding received segments combined, and then merging the merged sequences with estimated encoded sequences for a further subsequent received segment, until a plurality of candidate sequences is provided, each having a length equal to the length of the received encoded data block. The closest fit calculation may include a minimisation of a metric, for example a Euclidean distance.

At least some of the steps of the method may be performed using an artificial neural network. For example, the step of decoding each received segment by providing, for each received segment, a plurality of estimated encoded sequences, each estimated encoded sequence comprising at least two data units, may be performed using an artificial neural network. Use of an artificial neural network may provide a considerable increase in processing speed.

Merging may comprise: for each estimated encoded data sequence: (i) identifying an initial residual data unit combination representing the data units that would be residual in a coder memory when the first data unit of the corresponding estimated input sequence segment enters the coder, and (ii) identifying a final residual bit combination representing the data units that would be residual in the coder memory when the first data unit of the subsequent estimated input sequence segment enters the coder, and (iii) matching the final residual bit combinations corresponding to the estimated encoded data sequences for one received segment, to the initial residual bit combinations corresponding to the estimated encoded data sequences for the subsequent received segment.
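The residual-bit matching just described might be sketched as follows, where each estimate is represented by a hypothetical (initial residual, final residual, sequence) triple; the representation is an assumption for illustration:

```python
def merge_by_residuals(left, right):
    """Join estimated sequences for consecutive segments by matching
    the 'final residual bits' of each left-hand estimate to the
    'initial residual bits' of each right-hand estimate."""
    merged = []
    for init_l, final_l, seq_l in left:
        for init_r, final_r, seq_r in right:
            if final_l == init_r:             # residual states must agree
                merged.append((init_l, final_r, seq_l + seq_r))
    return merged
```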

The method may further comprise the steps of: encoding a data block having a plurality of bits, using a convolutional coder to provide the encoded data block; and transmitting the encoded data block.

In a second aspect, the invention provides a computer program configured to execute the steps of the method of the first aspect.

In a third aspect, the invention provides a software-defined radio comprising: an antenna; an analogue to digital converter; a processor; a memory; wherein the processor is configured to run the computer program of the second aspect.

Description of the Drawings

An embodiment of the invention will now be described by way of example only with reference to the accompanying schematic drawings of which:

Figure 1 is a schematic diagram of a software-defined radio in accordance with the example embodiment of the invention;

Figure 2 is a flow diagram of the steps of the method of the example embodiment of the invention;

Figure 3 is a schematic diagram of a convolutional coder in accordance with the example embodiment of the invention;

Figure 4 is a logic table in accordance with the coder of Figure 3;

Figure 5 shows an initial data block, output encoded data block and converted encoded data block in accordance with the coder of Figure 3;

Figure 6 shows a received encoded data block in accordance with the coder of Figure 3;

Figure 7 shows segments of the received encoded data block of Figure 6;

Figure 8 shows segments of the initial data block of Figure 5;

Figure 9 is a schematic diagram of the convolutional coder of Figure 3, at start of processing time;

Figure 10 is a flow diagram showing the construction of look-up tables for the first received segment of Figure 7;

Figure 11 is a schematic diagram of the coder of Figure 3, when the first bit of the second initial segment enters the shift register;

Figure 12 shows the look-up tables for the first received segment;

Figure 13 shows the possible "starting residual bits" and possible segment bit combinations for the second received segment;

Figures 14-17 are flow diagrams showing the construction of look-up tables for the second received segment;

Figure 18 shows the look-up tables for the second received segment;

Figure 19 is a flow diagram showing the merging of decoded first and second segments; and

Figure 20 is a flow diagram showing selection of the merged segments of Figure 19, to be merged with the third decoded segment.

Detailed Description

In general, and as is known in the prior art, a software-defined radio 1, 1' (Figure 1) for transmitting and receiving encoded data will comprise an antenna 3, 3', radio frequency front-end hardware 5, 5' including analogue-to-digital and digital-to-analogue converters, a central processing unit 7, 7' and memory 9, 9'. A software-defined radio 1 for transmitting signals could be configured differently to a software-defined radio 1' for receiving signals; however, in the example embodiment of the invention, the software-defined radio 1, 1' is configured as a transceiver, containing both transmitter and receiver parts. For ease of illustration, a transmitting 1 and a receiving 1' radio are shown separately, with only the relevant transmitting or receiving functional components shown and the others omitted. In the example embodiment of the invention, the radio 1, 1' also comprises coding hardware 11 in the form of a coder circuit, although in another example embodiment, the coding could be undertaken in software.

In the example embodiment of the invention, decoding is undertaken in software, running on the central processing unit 7'. In the example embodiment of the invention, the radio is a hand-held radio, but could alternatively be a personal computer or other form of radio.

A telecommunications method for decoding a received encoded data block in an example embodiment of the invention comprises six main steps (Figure 2). In a first step 13, the encoded data block is received by the antenna 3'. The received signal is then processed (for example, amplified) as is known in the art by the radio frequency front-end hardware 5', and sampled by the analogue-to-digital converter (step not shown). There may be a further step of processing the digital signal using software applications running on the central processing unit 7', as is also known in the art. In steps two to six, the digitised received signal is decoded by the central processing unit 7', running software applications and accessing the memory 9'. In a second step 15, the received encoded data block is divided into a plurality of segments. In a third step 17, each segment is decoded so as to provide a plurality of estimated encoded sequences. In a fourth step 19, the estimated encoded sequences are merged to provide candidate sequences. In a fifth step 21, a closest-fit calculation is performed between the received encoded data block and the candidate sequences. The candidate sequence which is the closest fit is then taken to be the transmitted data sequence. In a sixth step 23, the initial un-encoded data block is then determined, since there is a known correspondence between the estimated encoded sequences and the bits of the initial un-encoded data block (i.e. the coder model is known). In the example embodiment of the invention, the data block has been encoded using a convolutional coder circuit 101 which forms part of the coding hardware 11.

Simple convolutional coders (i.e. using only one shift register) may be characterised by three items: (i) the constraint length, (ii) the coding rate (the ratio of input to output bits of the coder, which is often the ratio of the input data rate to the output data rate) and (iii) each of the output coder arm's modulo-2 summation connections to the shift register (for example, expressed in polynomial or octal form).

In the example embodiment of the invention, a simple convolutional coder 101 with two binary summation arms has coding rate 1/2 and shift register length three (i.e. three memory registers) (Figure 3).

The coder 101 has a shift register 103 comprising first 105, second 107 and third 109 flip-flops, and has two summation arms: an upper summation arm 111 and a lower summation arm 113. Ingoing bits are shifted sequentially through the first flip-flop 105, through the second flip-flop 107, through the third flip-flop 109 and out of the shift register 103. Since the coding rate of a convolutional coder is known to be n/k, the coder 101 of the example embodiment of the invention has a coding rate equal to 1/2. For every one bit that enters the coder 101, two encoded bits are output. Since the constraint length of a convolutional coder is known to be n(K-1), the coder 101 of the example embodiment of the invention has constraint length equal to two, that is, the number of bits in the coder memory affecting generation of each two output bits is two.

Each summation arm 111, 113 comprises a modulo-two adder 115 having an exclusive OR (XOR) gate. The upper summation arm 111 draws data from three tap-off points 117a, 117b, 117c and performs a modulo-two summing calculation using a standard XOR gate logic table 123 (Figure 4). The lower summation arm 113 draws data from two tap-off points 119a, 119b (returning to Figure 3) and performs a modulo-two summing calculation using the same logic table 123.

Ingoing bits having value 0 or 1 are fed into the coder 101 according to a clock 121. On each increment of the clock 121: the first tap-off point 117a of the upper summation arm 111 reads the value of the first flip-flop 105 (i.e. it reads the value of the bit input to the first flip-flop 105); and at the same time the second tap-off point 117b of the upper summation arm 111 reads the value of the second flip-flop 107 (i.e. it reads the value of the bit output from the first flip-flop 105 and input to the second flip-flop 107); and at the same time the third tap-off point 117c of the upper summation arm 111 reads the value of the third flip-flop 109 (i.e. it reads the value of the bit output from the second flip-flop 107 and input to the third flip-flop 109). The modulo-two adder 115 of the upper summation arm 111 sums the three read-out values. As a person skilled in the art will appreciate, in such modulo-two summation of three read-out values, the bit values output from the first 105 and second 107 flip-flops are summed first, and the resultant bit of that summation is summed with the value output from the third flip-flop 109, to provide an upper output bit having value 0 or 1. Upon each increment of the clock 121, the upper output bit is output by the upper summation arm 111.

At the same time, on each increment of the clock 121: the first tap-off point 119a of the lower summation arm 113 reads the value of the first flip-flop 105; and the second tap-off point 119b of the lower summation arm 113 reads the value of the third flip-flop 109. The modulo-two adder 115 of the lower summation arm 113 sums the two read-out values. The values output from the first 105 and third 109 flip-flops are summed to provide a lower output bit having value 0 or 1. Upon each increment of the clock 121, the lower output bit value is output by the lower summation arm 113. As is known in convolutional coders of the prior art, the upper output bits and lower output bits are output from the coder 101 alternately, with each bit output from the upper summation arm 111 read out first and each bit output from the lower summation arm 113 read out second (in an alternative embodiment, the order may be reversed).
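The coder described above corresponds to the standard rate-1/2 convolutional code with generator polynomials (7, 5) in octal, and its behaviour may be sketched as follows. The function name and the zero-initialised register are assumptions consistent with the description; the upper output bit is emitted first:

```python
def encode(bits):
    """Sketch of coder 101: a three-stage shift register, an upper
    arm summing all three stages modulo two, and a lower arm summing
    the first and third stages modulo two."""
    r1 = r2 = r3 = 0              # flip-flop contents (assumed zero at t=0)
    out = []
    for b in bits:
        r1, r2, r3 = b, r1, r2    # clock increment: shift the new bit in
        out.append(r1 ^ r2 ^ r3)  # upper summation arm (taps 117a, 117b, 117c)
        out.append(r1 ^ r3)       # lower summation arm (taps 119a, 119b)
    return out
```

For an all-zero register, an input of a single 1 followed by two 0s yields the output pairs 11, 10, 11, the impulse response of the (7, 5) octal code.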

In the example embodiment of the invention, an initial data block 125 has 16 bits, each having value 0 or 1 (Figure 5). When encoded, the corresponding output, encoded, data block 127 has twice as many bits as the initial data block 125 (i.e. 32 bits). In the example embodiment of the invention, the output, encoded, data block 127 is directed through a "None Return to Zero" circuit 129, which modulates the bit stream in a standard way, to convert it to "None Return to Zero" format. The "None Return to Zero" circuit 129 outputs a converted, encoded, data block 131 comprising 32 values equal to 1 or -1. In an alternative embodiment, there may be no conversion to "None Return to Zero" format, or conversion to an alternative format.
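Assuming the common mapping of bit 1 to value 1 and bit 0 to value -1 (the exact convention used by circuit 129 is an assumption here), the conversion may be sketched as:

```python
def to_nrz(bits):
    # Map each encoded bit to a transmitted value: 1 -> 1, 0 -> -1
    # (an assumed convention; circuit 129 may use a different mapping).
    return [1 if b == 1 else -1 for b in bits]
```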

The converted, encoded, data block 131 is then transmitted in a noisy channel, and received by a receiver (not shown). Due to the noise, the received, encoded, data block values 133 (Figure 6) do not match precisely the values prior to transmission (i.e. the converted, encoded, data block values 131). The received, encoded, data block values 133 must be decoded to determine the initial data block values 125.

In the example embodiment of the invention, a method of decoding the received, encoded, data block values 133 to determine the initial data block values 125 comprises five steps:

Step 1 - Segment formation

In the first step of the method, the received, encoded, data block is divided into a number of segments 135a, 135b, 135c, 135d for decoding (Figure 7).

For ease of illustration, in the example embodiment of the invention, the number of values in the received, encoded, data block 133 has been chosen to be a power of two (i.e. 32 = 2^5). In other example embodiments of the invention, there may be many more bits in the received, encoded, data block, for example, of the order of 100,000 or 1,000,000 bits.

Also for ease of illustration, in the example embodiment of the invention, the number of segments in the received, encoded, data block 133 has been chosen to be a power of two and related to the square root of the received, encoded, data block size by the following formula:

N_segments = 2^FLOOR(log2(sqrt(N_y)))

where:

FLOOR: rounding the value in parentheses down to the nearest integer

N_y: the number of values in the received, encoded, data block 133

Thus, in the example embodiment of the invention, where N_y = 32:

N_segments = 2^FLOOR(log2(sqrt(32))) = 2^FLOOR(2.5) = 2^2 = 4

Accordingly, the received, encoded data block 133 is split into four received segments 135a, 135b, 135c, 135d, each containing eight values. In an alternative embodiment of the invention, the number of values in the received, encoded, data block 133 may be a number that is not a power of two, and indeed, the number of segments may be a number that is not a power of two. For example, the received, encoded, data block 133 could feasibly contain 18 values and be split into three segments of six values. In that case, the initial data block of 9 bits could, for example, be split into three segments of three bits.

embodiment of the invention, the number of values in the received, encoded, data block 133 may be a number that is not a power of two, and indeed, the number of segments may be a number that is not a power of two. For example, the received, encoded, data block 133 could feasibly contain 18 values and be split into three segments of six values. In that case, the initial data block of 9 bits could for example, be split into three segments of three bits .

Once the number of segments is determined, the initial data block 125 may also be represented in the same number of segments, but with the segments being half the size of the received segments 135a, 135b, 135c, 135d (i.e. the segments of the received, encoded, data block 133). Since the coding rate is known and the size of the received, encoded, data block 133 is known, it is also known that the initial data block 125 must contain 16 bits. Thus, for ease of illustration, in the example embodiment of the invention, the initial data block 125 is taken to contain four initial segments 137a, 137b, 137c, 137d, each containing four values, wherein those values are to be determined (Figure 8).
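Assuming the number of segments is taken as the largest power of two not exceeding the square root of the received block size (which yields four segments of eight values for a 32-value block, as above), Step 1 may be sketched as:

```python
import math

def num_segments(n_values):
    """Largest power of two not exceeding the square root of the
    received, encoded, data block size (illustrative assumption)."""
    return 2 ** math.floor(math.log2(math.sqrt(n_values)))

def split_block(block, n_seg):
    # Divide the received block into n_seg equal-length segments.
    seg_len = len(block) // n_seg
    return [block[i * seg_len:(i + 1) * seg_len] for i in range(n_seg)]
```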

Step 2 - Segment decoding

Each received segment 135a, 135b, 135c, 135d is decoded in parallel (in another example embodiment, each received segment is decoded in series). In the example embodiment of the invention, the decoding uses "maximum likelihood" decoding to determine, for each received segment 135a, 135b, 135c, 135d, a set of estimated encoded sequences containing the most likely values for the corresponding initial segment 137a, 137b, 137c, 137d (i.e. each segment of the initial data block 125).

Whilst "maximum likelihood" decoding per se is known, its implementation in the present method of the invention has not previously been recognised.

To perform the "maximum likelihood" decoding, look-up tables are constructed for each received segment 135a, 135b, 135c, 135d.

First, look-up tables are constructed for the first received segment 135a. For the first received segment 135a, it is assumed that the coder shift register 103 is initialised to zero at the start of processing (t=0). Therefore, it is assumed that when the first bit v1 of the first initial segment 137a enters the coder shift register 103 (i.e. input to the first flip-flop 105), the residual bits in the shift register 103 (i.e. the values of the second 107 and third 109 flip-flops) have values equal to zero (Figure 9). The residual bits in the shift register 103 at the start of processing, i.e. when the first bit v1 of the first initial segment 137a enters the shift register 103, are herein termed the "starting residual bits" 139 for the first initial segment 137a. (Similarly, the residual values (not shown) in the shift register 103 when the first bit v5 of the second initial segment 137b enters the shift register 103 are the "starting residual bits" for the second initial segment 137b, and so on). In other words, it is assumed that upon the first clock increment, when the first bit v1 of the first initial segment 137a enters the first flip-flop 105 of the shift register 103 and simultaneously the upper summation arm 111 performs the first summation, the value read out from the second flip-flop 107 is zero and the value read out from the third flip-flop 109 is zero. The value read out from the first flip-flop 105 is to be determined.

For the first received segment 135a, four look-up tables 141a, 141b, 141c, 141d are constructed (Figure 10, Column C). To construct the look-up tables, first, each possible bit combination (Column A) for the first initial segment 137a is listed. The first initial segment has 2^l possible bit combinations, where l is the number of bits in the segment. In the example embodiment of the invention, l=4, giving 16 possible input bit combinations.

The 16 possible input bit combinations are then re-organised into four groups of four (Column B). The grouping is made according to the bits which would be residual in the shift register 103 when the first bit v5 of the second initial segment 137b enters the shift register 103. The options for the bits which could be residual in the shift register 103 when the first bit v5 of the second initial segment 137b enters the shift register 103 are: [0 0], [0 1], [1 0] and [1 1]. These bit pairs are herein termed respectively the "final residual bits" 143a, 143b, 143c, 143d for the first initial segment 137a (Figure 11, which depicts by way of example, the shift register 103 having "final residual bits" for the first initial segment 137a of [0 1] 143b). Thus, in the first grouping 145a, the last two digits of each of the four possible input bit combinations are [0 0] (returning to Figure 10). In the second grouping 145b, the last two digits of each of the four possible input bit combinations are [0 1]. In the third grouping 145c, the last two digits of each of the four possible input bit combinations are [1 0]. In the fourth grouping 145d, the last two digits of each of the four possible input bit combinations are [1 1]. From the four groupings 145a, 145b, 145c, 145d the four look-up tables 141a, 141b, 141c, 141d (Column C) are constructed, each look-up table containing the coder output values in "Non-Return-to-Zero" format which would be output for each of the possible input bit combinations (Column B). The coder output bits can be determined since the coder model is known (or at least assumed) to be a convolutional coder [1, 2, 3]. Thus, in each look-up table 141a, 141b, 141c, 141d for the first initial segment 137a, there are four possible output sequences, each having eight values (wherein each value is 1 or -1).
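The look-up-table construction described above can be sketched as follows. This is a minimal illustration only: the specific generator taps G1 = [1, 1, 1] and G2 = [1, 0, 1], the NRZ mapping (0 → +1, 1 → -1) and the function names `encode` and `build_first_segment_tables` are assumptions of this sketch; the text only states that the coder model is known.

```python
from itertools import product

# Hedged sketch of look-up-table construction for the first segment. The
# generator taps G1 = [1, 1, 1] and G2 = [1, 0, 1] and the NRZ mapping
# (0 -> +1, 1 -> -1) are assumptions of this sketch, not stated in the text.

def encode(bits, state=(0, 0)):
    """Rate-1/2 convolutional coder; returns NRZ outputs and final state."""
    out, (s1, s2) = [], state
    for b in bits:
        out += [1 - 2 * (b ^ s1 ^ s2),   # upper summation arm (G1)
                1 - 2 * (b ^ s2)]        # lower summation arm (G2)
        s1, s2 = b, s1                   # shift the register
    return out, (s1, s2)

def build_first_segment_tables(l=4):
    """Group the 2**l input combinations by their final residual bits and
    tabulate the coder outputs (cf. Figure 10, Columns B and C)."""
    tables = {}                          # final residual bits -> rows
    for bits in product([0, 1], repeat=l):
        seq, final = encode(bits)        # starting residual bits are [0 0]
        tables.setdefault(final, []).append((bits, seq))
    return tables

tables = build_first_segment_tables()
assert len(tables) == 4                               # four look-up tables
assert all(len(rows) == 4 for rows in tables.values())
```

With l=4 this reproduces the shape described above: four tables, one per "final residual bits" pair, each holding four output sequences of eight NRZ values.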

Once the look-up tables 141a, 141b, 141c, 141d for the first initial segment 137a have been constructed, the first received segment 135a (which contains eight received data values 147a - 147h) may be decoded. The first received segment 135a is compared against the four possible output sequences in each look-up table 141a, 141b, 141c, 141d, to obtain a closest match 149a, 149b, 149c, 149d for each look-up table (Figure 12). The closest match is determined by calculating whether each received data value 147a - 147h is closer to 1 or closer to -1 in a Euclidean sense, and then highlighting the corresponding cells in the appropriate column of each look-up table. The closest match for each look-up table is determined by selecting the row in each look-up table with the highest number of highlighted cells. If there are no errors in transmission, then it would be expected that every cell in a single row should be highlighted.
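The closest-match step can be sketched as below. The hard decision (is each received value closer to +1 or to -1?) plays the role of "highlighting cells", and the row agreeing with the hard decision in the most positions is selected. The table contents and the function name `closest_match` are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of the closest-match step. The hard decision (is each
# received value closer to +1 or -1?) plays the role of highlighting
# cells; the row agreeing in the most positions wins. The table contents
# are illustrative, not the patent's.

def closest_match(received, table):
    """Return the table row agreeing with the hard decision most often."""
    hard = [1 if v >= 0 else -1 for v in received]   # nearer +1 or -1?
    score = lambda row: sum(h == s for h, s in zip(hard, row))
    return max(table, key=score)

table = [
    [ 1,  1,  1,  1,  1,  1,  1,  1],
    [-1, -1, -1,  1, -1,  1,  1,  1],
    [ 1,  1, -1, -1, -1,  1, -1,  1],
    [-1, -1,  1, -1,  1,  1, -1,  1],
]
rx = [-0.9, -1.2, -0.7, 0.8, -1.1, 0.6, 1.3, 0.9]    # noisy second row
assert closest_match(rx, table) == table[1]
```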

The more error that is introduced in transmission, the more the received values will deviate from their original 1 or -1 values, and therefore the more mismatches there will be in the row with the highest number of highlighted cells. Introducing redundant cells helps to mitigate the risk of mis-identifying a closest match in the event of noisy transmission, since the ratio of mismatches to matches will be smaller.

The four estimated encoded sequences (i.e. the closest matches) 149a, 149b, 149c, 149d for each look-up table 141a, 141b, 141c, 141d are stored in the memory 9 for later use, together with the corresponding first initial segment bit combinations (Figure 10, Column B), the "starting residual bits" 139 (which for the first input segment are [0 0] for each estimated encoded sequence) and the associated "final residual bits" 143a, 143b, 143c, 143d (either [0 0], [0 1], [1 0] or [1 1], depending on the look-up table).

Next, look-up tables are constructed for the second initial segment 137b. The second initial segment 137b also has four bits, and therefore also has 16 possible input bit combinations (Figure 13, Columns E1, E2, E3 and E4). However, in contrast to the first initial segment 137a, for the second initial segment 137b, there are four possible "starting residual bits" combinations: [0 0], [0 1], [1 0] and [1 1] (Columns D1, D2, D3 and D4), which correspond to the four "final residual bits" combinations 143a, 143b, 143c and 143d of the first initial segment 137a. A set of four look-up tables is constructed for each "starting residual bits" combination, giving a total of 16 look-up tables.

The look-up tables for the second initial segment 137b are constructed as described above in relation to the first initial segment 137a, by repeating the methodology for each "starting residual bits" combination (Columns D1, D2, D3 and D4).

For the first starting residual bits combination [0 0] (Figure 14, Column D1), the possible bit combinations for the second initial segment 137b (Column E1) are re-arranged in four groups of four 150a, 150b, 150c, 150d (Column F1) and then corresponding look-up tables 151a, 151b, 151c, 151d (Column G1) are constructed by applying the coder model to each of the possible bit combinations (Column F1).

Similarly, for the second starting residual bits combination [0 1] (Figure 15, Column D2), the possible bit combinations for the second initial segment 137b (Column E2) are re-arranged in four groups of four 153a, 153b, 153c, 153d (Column F2) and then corresponding look-up tables 155a, 155b, 155c, 155d (Column G2) are constructed by applying the coder model to each of the possible bit combinations (Column F2).

Similarly, for the third starting residual bits combination [1 0] (Column D3, Figure 16), the possible bit combinations for the second initial segment 137b (Column E3) are re-arranged in four groups of four 157a, 157b, 157c, 157d (Column F3) and then corresponding look-up tables 159a, 159b, 159c, 159d (Column G3) are constructed by applying the coder model to each of the possible bit combinations (Column F3).

Similarly, for the fourth starting residual bits combination [1 1] (Column D4, Figure 17), the possible bit combinations for the second initial segment 137b (Column E4) are re-arranged in four groups of four 161a, 161b, 161c, 161d (Column F4) and then corresponding look-up tables 163a, 163b, 163c, 163d (Column G4) are constructed by applying the coder model to each of the possible bit combinations (Column F4).

Thus in total, 16 look-up tables 151a-151d, 155a-155d, 159a-159d, 163a-163d are provided for the second initial segment 137b, four for each "starting residual bits" combination. In each look-up table for the second initial segment 137b, there are four possible output sequences, each having eight values (wherein each value is 1 or -1). Once the look-up tables for the second initial segment 137b have been constructed, the second received segment 135b (which contains eight received data values 175a - 175h) may be decoded (Figure 18).
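For segments after the first, the same construction is repeated once per possible starting state, as sketched below. As before, the generator taps (G1 = [1, 1, 1], G2 = [1, 0, 1]), the NRZ mapping and the function names are assumptions of this sketch; the patent only states that the coder model is known.

```python
from itertools import product

# Hedged sketch: one set of four look-up tables per possible starting
# state (cf. Columns D1-D4), giving 16 tables in total. The generator taps
# G1 = [1, 1, 1] and G2 = [1, 0, 1] are assumptions of this sketch.

def encode(bits, state):
    """Rate-1/2 convolutional coder; returns NRZ outputs and final state."""
    out, (s1, s2) = [], state
    for b in bits:
        out += [1 - 2 * (b ^ s1 ^ s2), 1 - 2 * (b ^ s2)]  # two coded values
        s1, s2 = b, s1                                    # shift register
    return out, (s1, s2)

def tables_for_later_segment(l=4):
    """Build {(starting_bits, final_bits): rows} for one later segment."""
    tables = {}
    for start in product([0, 1], repeat=2):        # four starting states
        for bits in product([0, 1], repeat=l):     # 16 input combinations
            seq, final = encode(bits, start)
            tables.setdefault((start, final), []).append((bits, seq))
    return tables

later_tables = tables_for_later_segment()
assert len(later_tables) == 16                     # 16 look-up tables
assert all(len(rows) == 4 for rows in later_tables.values())
```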

To decode the second received segment 135b, the second received segment 135b is compared against the four possible output sequences in each look-up table to obtain a closest match 165a-165d, 167a-167d, 169a-169d, 171a-171d for each respective look-up table.

The 16 estimated encoded sequences representing the closest matches for each look-up table are stored for later use, together with the corresponding second initial segment bit combinations (taken from Columns F1-F4), the "starting residual bits" (177a-177d, taken from Columns D1-D4) and the associated "final residual bits" (173a-173d, taken from Columns F1-F4).

The third 135c and fourth 135d received segments are decoded in the same way as the second received segment 135b.

In the example embodiment of the invention, after decoding, there are four datasets stored for the first received segment 135a and 16 datasets stored for each of the second 135b, third 135c and fourth 135d received segments.

In an alternative embodiment, an artificial neural network implementation may involve using the neural network in place of the sequential decoding method using generated look-up tables. In such an embodiment, the artificial neural network would be trained to provide the estimated encoded data sequences directly from the received segments.

Step 3 - Decoded segment merging

Once all received segments have been decoded, the datasets are retrieved from the memory 9 and the estimated encoded sequences are merged, two at a time.

The first merge is between the estimated encoded sequences for the first 137a and second 137b initial segments. When merging, the "final residual bits" 143a-143d associated with the estimated encoded sequences 149a-149d for the first initial segment 137a are matched with the "starting residual bits" (177a-177d, taken from Columns D1-D4) of the estimated encoded sequences for the second initial segment 137b (Figure 19). Then, each pair of matched estimated encoded sequences is combined into a single sequence. The combined sequences are grouped according to the corresponding "final residual bits" 173a-173d of the estimated encoded sequences for the second initial segment 137b, to provide four groups of four merged sequences (Figure 20). A closest match is then obtained by comparing the combined sequences in each group with the actual combined first and second received segments. Thus, four best-fit sequences each having 16 data values are obtained.
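One merge step can be sketched as follows, assuming the datasets are held as dictionaries keyed by residual-bit pairs (a data layout chosen for this sketch, not taken from the patent): the final residual bits of each surviving sequence are matched with the starting residual bits of the next segment's sequences, the joined sequences are regrouped by their new final residual bits, and the best fit in each group is kept.

```python
# Hedged sketch of one merge step. The dictionary layout and function name
# are assumptions of this sketch, not the patent's.

def merge(best_prev, cand_next, received_so_far):
    """best_prev: {final_bits: sequence} for the segments merged so far.
    cand_next: {(starting_bits, final_bits): sequence} for the next segment.
    Returns {final_bits: best combined sequence} (the best-fit sequences)."""
    groups = {}
    for (start, final), seq in cand_next.items():
        if start in best_prev:          # match "final" to "starting" bits
            groups.setdefault(final, []).append(best_prev[start] + seq)
    # Closest match per group against the received values so far:
    hard = [1 if v >= 0 else -1 for v in received_so_far]
    score = lambda s: sum(h == x for h, x in zip(hard, s))
    return {final: max(seqs, key=score) for final, seqs in groups.items()}

# Toy illustration with two-value "segments":
best_prev = {(0, 0): [1, 1], (1, 1): [-1, -1]}
cand_next = {((0, 0), (0, 0)): [1, 1], ((1, 1), (0, 0)): [-1, 1]}
merged = merge(best_prev, cand_next, [0.9, 1.1, 0.8, 1.2])
assert merged == {(0, 0): [1, 1, 1, 1]}
```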

The second merge is between the four best-fit sequences just obtained, and the estimated encoded sequences for the third initial segment 137c. When merging, the "final residual bits" associated with the best-fit sequences (which correspond to the "final residual bits" 173a-173d associated with the estimated encoded sequences of the second initial segment 137b) are matched with the "starting residual bits" (not shown) of the estimated encoded sequences (not shown) for the third initial segment 137c.

Then, each pair of matched best-fit and estimated encoded sequences is combined into a single sequence.

The combined sequences are grouped according to the corresponding "final residual bits" of the estimated encoded sequences for the third initial segment 137c, to provide four groups of four combined sequences. A closest match is then obtained by comparing the combined sequences in each group with the actual combined first, second and third received segments. Thus, four best-fit sequences each having 24 data values are obtained.

In a similar way, a final merge is undertaken with the estimated encoded sequences for the fourth initial segment, to provide four candidate sequences each having 32 values.

Step 4 - Final decoding

A final decoding step involves comparing each of the final 32-value candidate sequences with the entire received encoded data block, and selecting the closest fit. The selected closest-fit sequence represents the encoded sequence most likely to have been output by the coder 101. The selected closest-fit sequence can then be used to reverse-calculate the initial data block, using the known correspondence between the estimated encoded sequences merged into the selected candidate sequence and the initial segment bits.
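The final selection can be sketched as a straightforward minimum-distance search over the candidate sequences (the Euclidean distance metric, candidate contents and decoded-bit mapping here are illustrative assumptions):

```python
# Hedged sketch of the final selection; the candidate contents and the
# decoded-bit mapping are illustrative, not the patent's actual values.

def select_final(received, candidates):
    """candidates: list of (decoded_bits, nrz_sequence) pairs.
    Returns the pair whose sequence is the closest fit (in the Euclidean
    sense) to the received encoded data block."""
    dist = lambda seq: sum((r - s) ** 2 for r, s in zip(received, seq))
    return min(candidates, key=lambda c: dist(c[1]))

candidates = [
    ((0, 0), [ 1,  1,  1,  1]),
    ((1, 0), [-1, -1,  1, -1]),
]
rx = [-0.8, -1.1, 0.9, -0.6]          # noisy version of the second sequence
bits, seq = select_final(rx, candidates)
assert bits == (1, 0)
```

The known correspondence between the winning sequence and its input bits (the first element of each pair here) is what allows the initial data block to be recovered.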

In the example embodiment of the invention, steps one to four of the method are performed using an artificial neural network with a suitable architecture, for example an adaptive neural network based on the Multi-Layer Perceptron architecture which uses forward propagation to generate outputs that are then refined using back propagation through the network. Back propagation would typically involve a Stochastic Gradient Descent process, followed by an Adam optimization process. Such an artificial neural network is known in the prior art. In a first stage, the artificial neural network is trained. During training, the artificial neural network is supplied with noisy received sequences and correctly decoded sequences. The neural network approximates the estimated encoded sequences for each initial segment, and compares the approximated sequences against the actual estimated encoded sequences for each initial segment calculated by the processor 7 using the above methodology. It does this by updating neuron weights based on a comparison metric designed to minimise the difference between the approximated sequences and the actual estimated encoded sequences.

After training, the above-described method is performed using the artificial neural network, by inputting to the artificial neural network (i) the received encoded data block; (ii) the convolutional coder model; and (iii) the number of segments required. Use of an artificial neural network advantageously provides increased decoding speed.

Use of an artificial neural network also advantageously provides increased adaptability. For example, when the radio is a software-defined radio, the radio may be easily adapted to process different encoded waveforms simply by updating the weights of the neural network.

In an alternative embodiment of the invention, the steps of the method are undertaken by the processing capabilities of the decoder, and not using a neural network.

Whilst the present invention has been described and illustrated with reference to a particular embodiment, it will be appreciated by those of ordinary skill in the art that the invention lends itself to many different variations not specifically illustrated herein. By way of example only, certain possible variations will now be described.

In one such variation, there are two flip-flops in the shift register. The first tap-off point of the upper summation arm is then the input to the second flip-flop, the second tap-off point of the upper summation arm is the output from the second flip-flop and the third tap-off point of the upper summation arm is the output from the third flip-flop. Such a configuration might be implemented if there are constraints on logic.

In another such variation, alternative forms of hardware are used. For example, field-programmable gate arrays may be used in place of flip-flops. In another such variation, the coding may be undertaken in software.

In another such variation, there may be a significantly higher number of received segments, for example one hundred or one thousand segments.