

Title:
SOFT METRICS COMPRESSING METHOD
Document Type and Number:
WIPO Patent Application WO/2014/029425
Kind Code:
A9
Abstract:
A method of processing, in a receiver, a signal that has been encoded and interleaved in a transmitter, comprising: receiving a signal; processing the signal to obtain a stream of soft metrics representing bit probabilities of symbols in a predetermined constellation; applying to said soft metrics a compression operation that preserves the total length of each group of soft metrics relative to a same constellation symbol; rearranging the stream of compressed soft metrics so as to invert the interleaving done in the transmitter.

Inventors:
BUTUSSI MATTEO (CH)
TOMASIN STEFANO (IT)
ROSATI STEFANO (CH)
Application Number:
PCT/EP2012/066286
Publication Date:
March 05, 2015
Filing Date:
August 21, 2012
Assignee:
ALI EUROP SARL (CH)
BUTUSSI MATTEO (CH)
TOMASIN STEFANO (IT)
ROSATI STEFANO (CH)
International Classes:
H04L25/06; H03M7/40; H03M13/27; H03M13/45; H04L1/00
Attorney, Agent or Firm:
P&TS SA (P.O. Box 2848, Neuchâtel, CH)
Claims:
Claims

1. A method of processing in a receiver a signal that has been encoded and interleaved in a transmitter, comprising: receiving a signal; processing the signal to obtain a stream of soft metrics representing bit probabilities of symbols in a predetermined constellation; applying to said soft metrics a compression operation that preserves the total length of each group of soft metrics relative to a same constellation symbol; rearranging the stream of compressed soft metrics so as to invert the interleaving done in the transmitter.

2. The method of the preceding claim, further comprising: decompressing the compressed soft metrics; processing the decompressed soft metrics in a decoder to reconstruct a transmitted message.

3. The method of any of the preceding claims, comprising a step of quantizing the soft metrics prior to the application of said compression operator.

4. The method of any of the preceding claims, wherein the compressing operation includes the application of an entropy code.

5. The method of the preceding claim, comprising a step of substitution of compressed codes with other codes of shorter length, representing a different symbol in the constellation, when the total length of a group of soft metrics relative to a same constellation symbol exceeds a determined value.

6. The method of claim 3, wherein the compressing operation generates a prefix code.

7. The method of any of the preceding claims, wherein said soft metrics are represented as fixed-point numbers.

8. The method of claim 3, wherein the quantizing operation generates a constant number of bits for each group of soft metrics relative to a same constellation symbol.

9. The method of claim 3, wherein the quantizing operation generates a different number of bits for each soft metric relative to a same constellation symbol.

10. A method of compressing a stream of fixed-point representations of soft metrics in a receiver, comprising: demultiplexing the soft metrics so as to obtain a plurality of sub-streams having distinct statistical distributions; applying to each sub-stream an entropy code adapted to its statistical distribution.

Description:
SOFT METRICS COMPRESSING METHOD

Field of the invention

[0001] The present invention relates to methods for representing and compressing soft metrics in communication systems based on channel coding. In particular, but not exclusively, the present invention relates to signal de-interleaving in a receiver, for example an OFDM receiver.

Description of related art and definitions

All modern digital communication systems use channel coding to protect data and allow a better reception. Such is the case, to name just a few examples, of the several Digital Audio and Video Broadcast standards available (DAB and DVB), of wireless networks, including WiFi and Bluetooth in their various implementations, and of modern cellular phone communication systems.

It is customary, in these communication systems, to apply several permutation operators to the data stream. Such permutations, generally indicated as interleaving, are often introduced at the transmitter side and have in general the effect of improving the communication bandwidth and reducing the error rate. According to the cases, interleaving can take place at bit or symbol level, or both. Interleaving introduced at the transmitter side must in general be undone by a corresponding inverse operation of deinterleaving in the receiver to allow the reconstruction of the original signal.

Known implementations of interleaving and deinterleaving require storing in a memory a sequence of data whose length is equal to the period of the interleaving operator. Since newly proposed communication standards advocate the use of interleaving operators of increasing complexity and length, interleaving and deinterleaving operations place a heavy burden on the memory resources. There is therefore a need for an interleaving and/or deinterleaving method that is less memory demanding than the methods of the art.

Brief summary of the invention

According to the invention, these aims are achieved by means of the object of the appended claims.

Brief Description of the Drawings

The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which:

Fig. 1 shows in a schematic fashion a generic communication system using channel coding techniques.

Fig. 2 shows in a schematic fashion a more detailed description of the modulator 3 of Fig. 1.

Fig. 3 and 4 represent schematically two possible constellations used by a communication system.

Fig. 5 shows in a schematic fashion one possible implementation of demodulator 5 of Fig. 1.

Fig. 6 and 7 illustrate schematically a diversity transmission system in which a transmitter transmits a signal that is received by two receivers and combined to improve the system reception quality.

Fig. 8 represents schematically a variant of the invention.

Fig. 9 shows in a schematic fashion a possible representation of the compression unit 302 of Fig. 8.

Fig. 10 represents schematically a possible structure of decompression unit 304 of Fig. 8.

Fig. 11 and 12 are schematic examples of possible integrations of the invention in the demodulation process, for example in the demodulator 5 of Fig. 1.

Fig. 13 and 14 show in a schematic fashion the integration of the invention in a diversity system akin to those represented in Fig. 6 and 7.

Detailed Description of possible embodiments of the Invention

This invention concerns methods and devices to represent, compress, decompress and de-represent data that must be processed by a channel that still allows the data to be de-compressed afterwards, such as, for example: (a) a channel that permutes the order of an incoming signal, (b) a memory where the data are written and then read in another given order, (c) a communication channel. Fig. 1 shows a possible schematic representation of a digital communication system that is composed of a data source 1 generating data belonging to a given finite set. The finite set is usually a binary-valued set. In the following the data generated by the data source will be denoted by d.

Functional block 2, or encoder, is adapted to map the d-data generated by source 1 on words of a given error correcting code. In the field of digital communication systems many error correcting codes can be used, such as: Low Density Parity Check (LDPC) codes, convolutional codes, block codes, etc. The mapping of d onto words of the error-correction code is named encoding. In the following the output of block 2 will be indicated by c. The encoded data c are modulated by modulator 3. The transmitter 8 comprises the sequence of the three blocks 1, 2 and 3. Its output, denoted as x, goes through a transmission channel 4 that could be a radio propagation process, a cable transmission, or also a generic operator. The signal emerging from the other side of channel 4 is collected by receiver 9; the received signal is denoted by r. The received signal is processed by block 5 that performs the demodulation of the received signal. The output of Block-5 is then processed by block 6, which performs decoding. The output of Block-6 is an estimate of the transmitted data d.

The c-stream is transformed by modulator block 3 in another format suitable for transmission. In the following the process performed by Block-3 will also be named modulation. Many techniques can be used for addressing this goal. Nevertheless most of them can be represented as reported in Fig. 2, which illustrates schematically one of the possible implementations of the modulation process.

Fig. 2 shows in a schematic fashion a possible structure of the modulator 3 in the transmitter of Fig. 1. Block-3 can be modelled as the cascade of four sub-blocks. The first sub-block 31 performs the interleaving process. Block-31 performs a permutation of the input signal. The output of Block-31, denoted by a, is processed by mapping block 32. Block-32 maps signal a on the points of a defined constellation that depends on the transmission system in use. The output of Block-32, denoted by z, is processed by the interleaver 33. Block-33 performs a permutation over z and generates signal w, which is transformed by Block-34 into a physical signal that constitutes the output x of the transmitter 8 and will also be named transmitted signal. Block 34 could be for example an OFDM modulator, possibly including a suitable RF interface.

The interleaver block 31 takes a set of the values carried by the c-stream and performs a permutation on it. Denoting the output of Block-31 by a, the interleaving rule can be written as follows:

a_j = c_i ,   j = π(i) ,   (1)

where c_i is the i-th value carried by the c-stream, a_j is the j-th value carried by the a-stream and π is a function, specific to the chosen modulation standard, that defines the permutation performed by Block-31. Since at this stage both signals, c and a, carry binary values (bits), the process performed by Block-31 is also named bit-interleaving.

The schema represented in Fig. 2 is not the only available way to perform the modulation process and different solutions are possible. For this reason, the blocks 31, 32, 33 and 34 can be optional and their sequence can be rearranged in different ways.

The mapper block 32 maps the bits carried by the a-stream to a finite set of complex numbers, also named constellation. Block-32 takes subsets of values carried by a and associates to them a value of a given constellation. Let z be the output of block 32 and z_k its k-th element. z_k is an element of a specific constellation set C that can differ from standard to standard and from transmission to transmission. Indicating with m the rule used in block 32, the relation between a and z can be written as follows:

z_k = m(a_i, a_{i+1}, ..., a_{i+M-1})   where a_i, a_{i+1}, ..., a_{i+M-1} ∈ B and z_k ∈ C .   (2)

a_i, a_{i+1}, ..., a_{i+M-1} is the subset of values carried by the a-stream and z_k is the value onto which they are mapped. In most digital communication systems, z_k is a complex number. Possible values of z_k depend on the considered digital communication system.

Fig. 3 and 4 show two constellations used in the DVB-C2 Standard. The constellation reported in Fig. 3 is well known as QPSK. In QPSK, the parameter M, defined in Eq. (2), is equal to 2 and the m-function introduced in Fig. 3 is defined by the correspondence table:

(a_i, a_{i+1})     z
(0, 0)             1 + j
(0, 1)             1 - j
(1, 0)            -1 + j
(1, 1)            -1 - j

where j is the imaginary unit. The map of Fig. 4 maps 4-tuples of bits to complex numbers: (1 0 0 0), for example, is mapped to -3 + 3j, (0 1 1 1) is mapped to 1 - j, and so on. It is worth noting that these constellations are being superseded, in more recent transmission standards, by larger ones.
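By way of illustration, the correspondence table above can be realized in a few lines of code. The following sketch (Python, purely illustrative and not part of the patent) implements the m-function of Eq. (2) for the QPSK table; the function name and the example bit stream are arbitrary.

# Sketch: bit-to-constellation mapping m() of Eq. (2) for the QPSK table above.
# Illustrative only; real standards (e.g. DVB-C2) define the exact tables.

QPSK = {
    (0, 0):  1 + 1j,
    (0, 1):  1 - 1j,
    (1, 0): -1 + 1j,
    (1, 1): -1 - 1j,
}

def qpsk_map(bits):
    """Map a bit stream (length multiple of M = 2) to QPSK symbols z_k."""
    assert len(bits) % 2 == 0
    return [QPSK[(bits[i], bits[i + 1])] for i in range(0, len(bits), 2)]

if __name__ == "__main__":
    a = [0, 0, 1, 1, 0, 1]           # example a-stream
    z = qpsk_map(a)                  # -> [(1+1j), (-1-1j), (1-1j)]
    print(z)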

The z stream is then processed by the interleaving block 33, which performs, similarly to Block-31, a permutation on the values carried by the z stream. The process performed by Block-33 can be written in mathematical form as follows:

w_j = z_i ,   j = τ(i) ,   (3)

where z_i is the i-th element of the input, w_j is the j-th element of the output and τ is a given permutation specific to the considered system.
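The permutation relations of Eqs. (1) and (3) and their inversion at the receiver can be illustrated with a short sketch (Python, not from the patent); the random permutation below merely stands in for the standard-specific π or τ.

# Sketch: interleaving w_j = z_i with j = tau(i), and its inverse at the receiver.
# The permutation here is random; real systems use a standard-specific rule.
import random

def interleave(z, tau):
    w = [None] * len(z)
    for i, j in enumerate(tau):
        w[j] = z[i]                  # w_j = z_i, j = tau(i)
    return w

def deinterleave(w, tau):
    z = [None] * len(w)
    for i, j in enumerate(tau):
        z[i] = w[j]                  # applies the inverse permutation tau^-1
    return z

if __name__ == "__main__":
    N = 8
    tau = list(range(N))
    random.shuffle(tau)              # example permutation tau
    z = list(range(N))
    assert deinterleave(interleave(z, tau), tau) == z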

The transmitted signal is modified by the environment or channel 4 visible in fig. 1 before reaching the receiver. In general, the channel introduces distortions of the transmitted signal and noise. It is usually described as a time-varying filter applied to the transmitted signal and the output of the filtering process is then added to an external signal describing noise and interference from other sources.

In the receiver 9 the received signal r emerging from the channel 4 is presented to demodulator block 5. Fig. 5 shows one possible implementation of the demodulation process implemented by block 5. The received signal, r, is processed by three blocks: Block-53, Block-52 and Block-51. Optional block 51 can transform or refine the received signal, change its representation and/or domain, and/or apply other operators, not described here. In OFDM systems the received signal is transformed from time domain to frequency domain. The output of Block-51 will be denoted by f. Preferably the receiver performs also in block 52 a channel estimation. The output of the channel estimation is an estimate of the filter frequency or time response that characterizes the channel and will be denoted by h. Preferably the receiver performs also noise estimation in block 53. The output of Block-53 estimates the power of the noise that affects the received signal and will be denoted by σ².

The meaning of the three signals f, h and σ² can be summarized by the following equation:

f = h w + n ,   E[n²] = σ² ,   (4)

where n is a Gaussian variable having power equal to σ² (E[n²] = σ²).

The three signals f, h and σ² are processed by Block-54 that introduces a given permutation on each of the input signals. Block-54 generates three different outputs: f_τ, the permuted version of the f signal; h_τ, the permuted version of the h signal; and σ²_τ, the permuted version of the σ² signal. The outputs of Block-54 are used in demapper block 55 to generate estimates of the probabilities of the transmitted signal a. The output of Block-55 will be denoted by P. Signal P is processed by de-interleaver Block-56, which performs a permutation of the values carried by P and generates signal P̃.

In practical implementations f, h and σ² are signals represented by a finite number of bits. Let B_f, B_h and B_{σ²} be the number of bits used to represent respectively the signals f, h and σ². This means that the connection between Block-51 and Block-54 carries B_f bits, the connection between Block-52 and Block-54 carries B_h bits and the connection between Block-53 and Block-54 carries B_{σ²} bits. The triple (f, h, σ²) is processed by Block-54 as reported in the following equation:

f_τ(i) = f(j) ,   h_τ(i) = h(j) ,   σ²_τ(i) = σ²(j) ,   i = τ⁻¹(j) ,   (5)

where f_τ, h_τ and σ²_τ are the three signals generated by Block-54 and τ⁻¹ is the inverse of the τ permutation used at the transmitter side by Block-33. The triple (f_τ, h_τ, σ²_τ) is processed by the demapper 55, whose goal is the computation of the probability that a given transmitted bit in a received symbol be '0' or '1'. More formally, the demapper 55 provides the probability that a_i = α for every i and for every α ∈ B:

Pr( a_i = α | f_τ, h_τ, σ²_τ ) .   (6)

In the following, for the sake of simplicity, the above reported probability will also be denoted by P_i(α).

In the usual case of binary transmitted values, B = {0, 1}, block 55 must compute two probability values for each transmitted value a_i: P_i(0) and P_i(1). The probabilities P_i(0) and P_i(1) are commonly indicated as 'soft metrics' and in the case of binary values they can be represented by a unique value named Log Likelihood Ratio:

LLR_i = log( P_i(0) / P_i(1) ) .   (7)
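As a purely illustrative aid, the following sketch (Python) shows one common way a demapper such as Block-55 could compute the soft metrics P_i(0), P_i(1) and the LLR of Eq. (7) for QPSK under the model of Eq. (4); practical demappers often use approximations (e.g. max-log), and the numeric values below are arbitrary.

# Sketch: exact LLRs of Eq. (7) for QPSK under the model f = h*w + n of Eq. (4).
# Illustrative only; practical demappers often use max-log approximations.
import math

QPSK = {(0, 0): 1 + 1j, (0, 1): 1 - 1j, (1, 0): -1 + 1j, (1, 1): -1 - 1j}

def qpsk_llrs(f, h, sigma2):
    """Return (LLR of first bit, LLR of second bit) for one received sample."""
    llrs = []
    for pos in range(2):
        p0 = sum(math.exp(-abs(f - h * z) ** 2 / sigma2)
                 for bits, z in QPSK.items() if bits[pos] == 0)
        p1 = sum(math.exp(-abs(f - h * z) ** 2 / sigma2)
                 for bits, z in QPSK.items() if bits[pos] == 1)
        llrs.append(math.log(p0 / p1))   # LLR_i = log(P_i(0)/P_i(1))
    return llrs

if __name__ == "__main__":
    print(qpsk_llrs(f=0.9 + 1.1j, h=1.0, sigma2=0.5))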

Before starting the decoding in block 6, the receiver 9 must re-organize the sequence {P_i(α)}_i. Block-56 re-arranges the sequence in the following way:

P̃_j(α) = P_i(α) ,   j = π⁻¹(i) ,   (8)

where π⁻¹(i) is the inverse of the π-permutation used at the transmitter side by Block-31.

To perform the permutations π⁻¹(i) and τ⁻¹(i), Blocks 54 and 56 need memory. In all systems the permutations π and τ are applied on a finite sequence:

π(i) and π⁻¹(i) are defined for i = 1, ..., N_π ,   (9)
τ(i) and τ⁻¹(i) are defined for i = 1, ..., N_τ .   (10)

The memory used by Block-54 and Block-56 depends on N_π and N_τ. Assuming an efficient implementation of the de-interleavers performed by Block-54 and Block-56, the memory used by Block-54 and by Block-56 is respectively composed of N_π and N_τ words.

For Block-54 one word is represented by the triple (f, h, σ²). Hence the memory used by Block-54 depends on the number of bits used to represent the three signals f, h and σ². Letting B_f, B_h and B_{σ²} be the number of bits used to represent the three signals, it follows that the memory used by Block-54 is equal to:

(B_f + B_h + B_{σ²}) × N_π bits .   (11)

For Block-56 one word is represented by the K-dimensional vector or K-tuple [ P_i(α_1), P_i(α_2), ..., P_i(α_K) ], where K is the cardinality of B and α_1, α_2, ..., α_K are all the possible elements of B. Using B_P bits for the representation of the generic P_i(α) value, it follows that the memory used by Block-56 is equal to:

B_P × K × N_τ bits .   (12)

In the case of a binary B-alphabet and using the LLR representation it follows that the deinterleaving operation performed by Block-56 is based on words of one single value. Assuming B_LLR bits for the representation of the LLR, it follows that the memory used by Block-56 is equal to:

M = B_LLR × N_τ bits .   (13)

Another field in which the invention is intended to be used is illustrated in schematic form in Fig. 6 and in Fig. 7. Figs 6 and 7 show in a schematic fashion a family of processes known as Diversity Combining.

Fig. 6 illustrates schematically a diversity transmission system in which a transmitter 104 transmits a given signal 105 that goes through the channel 103 and is received by two receivers 100 and 101, which transmit information useful for the reception to the combiner 102. Block-100 and Block-101 can be arranged to transmit and/or receive one or more of the signals present in the demodulator 5 of Fig. 5, for example the de-interleaved sequence P̃ generated by de-interleaver 56, the received signal r, the processed signal f, the signal f_τ provided by de-interleaver 54 or the demapped stream P. The combiner 102 receives these signals and estimates the transmitted data d. Fig. 7 shows an alternative architecture in which one receiver 200 transmits information useful for the reception to a unit 201 that unites the roles of receiver and combiner. The information transmitted by Block-200 is the same kind of information transmitted by Block-100 and Block-101 to Block-102. Block-201 uses this information to estimate the transmitted data d.

Block-54 and Block-56 of Fig. 5 are two different examples of applications of the invention. Other examples of channels to which the invention is applicable are, in Fig. 6 and 7, the transmission of information from Block-100 and from Block-101 to Block-102, respectively from Block-200 to Block-201.

Fig. 7 shows another diversity transmission system in which a transmitter 204 transmits a given signal 206 through the channel 203. The information, differently altered by the channel 203, is received by blocks 200 and 201. Block-200 transmits to Block-201 information about Signal-205. Block-201, analyzing Signal-202 and the information received by Block-200, performs an estimation of the transmitted information.

Fig. 8 shows in a schematic fashion a possible variant of the invention that encompasses four different phases. In the inventive signal processing method an input signal 305 is fed to a representation conversion block 301 that changes the representation of the values carried by signal 305 into another format. The format change can be a permutation, an interleaving, a mapping, or a general conversion operation, represented by a suitable operator, and may also in some cases cause information loss.

The first phase, see Block-301, is the change of the representation used for the incoming signal. Signal-305 is represented using a given number of bits. In most systems each element of Signal-305 is represented using a constant number of bits. Indicating with S_n^(305) the generic n-th element of Signal-305 and by B_n^(305) the number of bits used to represent it, it follows that:

B_n^(305) = B_m^(305) = B^(305)   ∀ n, m .   (14)

Block-301 changes the representation used for Signal-305 and generates Signal-306. Possibly, the number of bits used for the representation of Signal-306 is not constant through the stream. Denoting by S_n^(306) the generic n-th element of Signal-306 and by B_n^(306) the number of bits used to represent it, it can happen that:

B_n^(306) ≠ B_m^(306) for some n ≠ m .   (15)

Relaxing the constraint on the constant number of bits used for the signal entering Block-301 allows optimizing the total number of bits used for the representation of Signal-305. The optimization of the used bits depends on the nature of Signal-305. Section "LLR Quantization" reports a possible bit-width optimization in the case of Signal-305 carrying LLR values. The representation conversion is followed by a compression step carried out by Block-302 that generates Signal-307. Signal-306 is then compressed by Block-302. The compression is designed taking into account the statistics of the elements of Signal-306. It can be applied on each value S_n^(306) or on words composed by M elements of Signal-306. Let w_i^(306) be the generic word composed by M elements of Signal-306, which can be expressed in mathematical form as follows:

w_i^(306) = [ S_{j_1(i)}^(306), S_{j_2(i)}^(306), ..., S_{j_M(i)}^(306) ] ,   (16)

where w_i^(306) is a vector composed by M elements and j_1(i), j_2(i), ..., j_M(i) are the indices of the S^(306)-values that compose the word w_i^(306). A simple solution to generate the word w_i^(306) could be to take M consecutive elements of Signal-306. In that case the word can be written as follows:

w_i^(306) = [ S_{iM}^(306), S_{iM+1}^(306), ..., S_{iM+M-1}^(306) ] .   (17)

Moreover the compression code applied on the n-th value, S_n^(306), is in general different from the compression code applied on the m-th value, S_m^(306).

Fig. 9 reports in a schematic fashion a possible implementation of Block-302. Block-302 can perform C different compressions. This means that Block-302 is able to apply C different compression codes. Each compression code can be pre-computed or dynamically adapted to the incoming signal. Each compression code is designed for a signal having a given statistical description.

Selector block 3021 is the first stage of the compressor Block-302. It assigns the values carried by Signal-306 to the C different compression codes available, each represented by one of the blocks 3022 to 3024. The outputs of the compression codes are merged in a single signal by de-selector block 3025. The generated signal, Signal-307, is the output of Block-302. In case of pre-computed compression codes, the assignment performed by Block-3021 is done in such a way that the signal at the input of the n-th compression code fits as much as possible the statistical description for which the n-th compression code has been designed. In case of adaptive compression codes the assignment is done in such a way that the signal at the input of each compression code will be as non-uniformly distributed as possible. The goal of Block-3021 is to guarantee a signal, at the input of each compression code, having a statistical description suitable for an efficient compression code design. In the following, the input of the i-th compression code will be denoted by v^(i), and its n-th element by v^(i)(n). Signal v^(i) can be composed by a sequence of S^(306)-values or by a sequence of w^(306)-words. The output of the compression codes is then rearranged by Block-3025 in the inverse order used by Block-3021 to assign the values of Signal-306 to the different compression codes.
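A minimal sketch (Python, illustrative only) of the selector/de-selector idea of Fig. 9: the quantized soft metrics are demultiplexed into C sub-streams, here by bit position within the constellation symbol (one possible assignment rule, in the spirit of claim 10), and the de-selector restores the original order; each sub-stream would then be fed to its own compression code, as in the Huffman sketch given later.

# Sketch: selector (Block-3021) / de-selector (Block-3025) with C = M sub-streams,
# one per bit position of the constellation symbol (one possible assignment rule).

def select(signal_306, M):
    """Split the stream into M sub-streams v_1..v_M by bit position."""
    # assumes len(signal_306) is a multiple of M
    return [signal_306[k::M] for k in range(M)]

def deselect(substreams, M):
    """Inverse of select(): interleave the sub-streams back into one stream."""
    out = []
    for i in range(len(substreams[0])):
        for k in range(M):
            out.append(substreams[k][i])
    return out

if __name__ == "__main__":
    s306 = list(range(12))            # quantized soft metrics, M = 4 per symbol
    subs = select(s306, 4)            # each sub-stream has its own statistics
    assert deselect(subs, 4) == s306  # the de-selector restores the original order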

Different kinds of compression codes can be used for the compression, including (but not only) entropy coding algorithms and dictionary-based algorithms. If the distribution of the symbol values is known beforehand, arithmetic coding (or Huffman coding) is very well suited. Some compression codes, in particular entropy coding, generate code words of variable length, and it is difficult to guarantee that code words generated in correspondence to unusual combinations of input do not exceed a given maximum length. Preferably the present invention proposes a process to limit the length of the words generated by the compression code.

Let V_i be the alphabet of the words at the input of the i-th compression code. Let C_i be the rule used by the i-th code to generate the output words:

v̂^(i)(n) = C_i( v^(i)(n) ) ,   v^(i)(n) ∈ V_i ,   (18)

where v^(i)(n) is the n-th word at the input of the i-th compression code. Let V̂_i be the set of the words generated using the above reported rule:

V̂_i = { v̂ | v̂ = C_i(v) , ∀ v ∈ V_i } .   (19)

The i-th compression code is designed to compress as much as possible the input v^(i). Nevertheless it could happen that some words of the set V̂_i exceed the maximum length L_i. Let V_i^(L) be the subset of V_i that does not generate words exceeding the maximum length:

V_i^(L) = { v | v ∈ V_i and length[ C_i(v) ] ≤ L_i } .   (21)

The i-th code analyzes the output generated by v^(i)(n) using the rule C_i; if the output exceeds the maximum length, the code modifies the input, from v^(i)(n) to ṽ^(i)(n), in such a way that the output generated by ṽ^(i)(n) has the wanted length. That means that ṽ^(i)(n) must be in V_i^(L).

The generation of ṽ^(i)(n) must take into account system performance and length constraints. The use of ṽ^(i)(n) in place of v^(i)(n) must generate a performance loss as small as possible. The technique used to map a given v^(i)(n) into ṽ^(i)(n) depends on the system on which the present invention is applied. The impact of using ṽ^(i)(n) in place of v^(i)(n) can be represented as a cost function that must be minimized. It follows that ṽ^(i)(n) is selected using the following rule:

ṽ^(i)(n) = argmin over v ∈ V_i^(L) of f( v , v^(i)(n) ) ,   (22)

where f is a generic cost function and ṽ^(i)(n) is the element of V_i^(L) that minimizes the cost function given v^(i)(n).

A simple example of cost function is the distance function.

Another constraint can be imposed on the total lengths of the words generated by the C compression codes. At a given instant n the sum of the lengths of the words generated by the C compression codes cannot exceed a given value, here denoted L_tot. This constraint can be written in a mathematical form as follows:

Σ_{i=1}^{C} length[ v̂^(i)(n) ] ≤ L_tot .   (23)

It could happen that at a given instant n the above reported constraint is not verified. In that case the invention changes the values v^(1)(n), ..., v^(C)(n) that have generated the too long sequence in such a way that the newly generated sequence will have the right length. The change of the sequence v^(1)(n), ..., v^(C)(n) must take into account the system performance and the length constraint.

A compression method subject to the constraint expressed by equation (23) is particularly useful in de-interleaving a stream of soft metrics in a receiver, for example. In this case, as it will be seen further on, the constraint (23) can be enforced to ensure that the total length of each group of compressed soft metrics relative to a same constellation symbol be preserved. Thanks to this feature, the compressed soft metrics can be de-interleaved as easily as the uncompressed ones, and with reduced memory usage.
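To make the idea concrete, the sketch below (Python, illustrative only) encodes one symbol's group of quantized soft metrics with a small prefix code and forces the group to occupy a fixed bit budget, in the spirit of constraint (23); the codebook and the fallback rule that replaces over-long codewords are placeholders, whereas the patent selects replacements by minimizing a cost function as in Eq. (22).

# Sketch: enforcing a fixed bit budget per constellation symbol, in the spirit of
# constraint (23). The codebook and the fallback rule are illustrative placeholders.

CODE = {0: "0", 1: "10", 2: "110", 3: "111"}     # example prefix code per LLR index

def compress_group(indices, budget):
    """Encode one symbol's group of quantized LLR indices into exactly `budget` bits."""
    # assumes budget >= len(indices), since the shortest codeword is 1 bit
    words = [CODE[v] for v in indices]
    while sum(len(w) for w in words) > budget:
        # fallback: replace the longest codeword with the shortest code's symbol
        k = max(range(len(words)), key=lambda i: len(words[i]))
        words[k] = CODE[0]
    bits = "".join(words)
    # pad so every group has equal length; a decoder reads exactly len(indices)
    # codewords per group, so the padding bits are ignored
    return bits + "0" * (budget - len(bits))

if __name__ == "__main__":
    print(compress_group([3, 3, 1, 0], budget=8))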

The goal of both Block-301 and Block-302 is to reduce the number of bits used to represent the values carried by Signal-305. A set of M values carried by Signal-305 is represented using:

B^(305) × M bits .   (24)

The number of bits used by Signal-307 to carry the same information can hardly be written in closed form. Nevertheless, considering the worst case, in which all the words generated by the i-th code have maximum length, it follows that the number of bits used by Signal-307 to carry the same information is upper bounded by the following equation:

Σ_{i=1}^{C} M_i × B^(L_i) bits ,   (25)

where M_i is the number of words generated by the i-th compression code for the process associated to the M values carried by Signal-305 and B^(L_i) is the number of bits used to represent the longest code word generated by the i-th compression code. A good constraint to guarantee that the invention generates fewer bits than the bits used in Signal-305 is to impose that:

Σ_{i=1}^{C} M_i × B^(L_i) < B^(305) × M .   (26)
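A tiny numeric check of the bound of Eqs. (24)-(26) is given below (Python); all the numbers (B^(305), M, M_i, B^(L_i)) are made up for illustration and do not come from the patent.

# Sketch: checking the worst-case bit-budget condition of Eqs. (24)-(26)
# with made-up numbers (B_305, M, M_i, B_Li are illustrative, not from the patent).
B_305, M = 8, 64                     # uncompressed: B^(305) x M bits (Eq. 24)
M_i  = [16, 16, 16, 16]              # words produced by each of C = 4 codes
B_Li = [6, 6, 5, 5]                  # longest codeword length of each code

uncompressed = B_305 * M
worst_case   = sum(m * b for m, b in zip(M_i, B_Li))        # Eq. (25)
print(uncompressed, worst_case, worst_case < uncompressed)  # Eq. (26) holds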

The compressed Signal-307 goes then through a channel 303 that might be a physical propagation channel, but also a generic operation on the signal, for example a non-distortion process, from which it emerges as Signal-308, and is further processed by Block-304. Block-304 inverts the compression step previously applied by Block-302 in order to generate Signal-309 in the same format as Signal-306.

Signal-308 is decompressed by Block-304; Block-304 performs the inverse of the process previously performed by Block-302. Fig. 10 reports in a schematic fashion a possible implementation of Block-304. Analogously to the compressor of Fig. 9, there are C de-compression codes: Block-3042, Block-3043, ... Block-3044. The assignment rules applied by Block-3041 are the inverse of the rules used by Block-3025. The outputs of the de-compression codes are merged in a single signal by Block-3045. The assignment rules applied by Block-3045 are the inverse of the rules used by Block-3021.

Signal-308 is split in C different signals: e_1, e_2, ... e_C. The rules used by Block-3041 permit to re-build, at the input of the i-th de-compression code (Block-3042, Block-3043, ... Block-3044), the code words previously generated by the i-th compression code.

The generic signal e^(i) is processed by the i-th de-compression code. The i-th de-compression code performs the inverse of the mapping performed by the i-th compression code. Denoting by e^(i)(n) and by ê^(i)(n) the input and the output of the i-th de-compression code, the process performed by Block-3042, Block-3043, ... Block-3044 can be written in mathematical form as follows:

ê^(i)(n) = C_i⁻¹( e^(i)(n) ) ,   (27)

where C_i⁻¹ is the inverse of the C_i function reported in Eq. (18).

The ê^(i) signals are reordered by Block-3045, which performs the inverse of the process previously executed by Block-3021. The output of Block-3045 is Signal-309.

The last step is to represent Signal-309 in a format coherent with the representation used for Signal-305. This task is performed by Block-305, which performs the inverse of the process previously performed by Block-301.

LLR Quantization

This section focuses on Block-301 in the case of a Signal-305 carrying LLR values. In such a case Signal-305 is a sequence of LLR values. The n-th value of Signal-305 is an LLR value associated to a transmitted/received bit. All the LLRs are represented using the same number of bits. Block-301 changes the representation of the n-th LLR value.

The change of the representation is based on the position, in the constellation, of the bit carried by the LLR value. Signal-305 can be divided into groups of M elements, S_{j,k}^(305) with k = 1, 2, ..., M, where M is the number of bits associated with each received constellation point. The set:

{ S_{j,1}^(305), S_{j,2}^(305), ..., S_{j,M}^(305) }   (28)

is the set of the LLRs of the bits associated with the same received/transmitted constellation point.

Block-301 quantizes Signal-305 by a quantizer, where to each input value S_{j,k}^(305) one of the G_k = 2^(B_{j,k}^(306)) quantized values is associated, B_{j,k}^(306) being the number of bits used to describe each level. Various techniques can be adopted for this quantization procedure. The process performed by Block-301 can be based on uniform or non-uniform quantization techniques.

In a possible embodiment, each quantized value is associated to a quantization interval [ v_{k,ν-1} , v_{k,ν} ], where v_{k,0} = -∞ and v_{k,G_k} = +∞ and the intervals are a partition of the real numbers. In the case of a uniform quantizer, v_{k,ν} = Δ_k · ν for ν = 1, ..., G_k - 1, where Δ_k is a positive real number representing the quantization step.

Then, the value associated to S_{j,k}^(305) is the index of the interval in which S_{j,k}^(305) falls. The quantized LLR is therefore

S_{j,k}^(306) = ν : S_{j,k}^(305) ∈ [ v_{k,ν-1} , v_{k,ν} ] ,   S_{j,k}^(306) = Q( S_{j,k}^(305) ) ,   (29)

where Q is the quantization function, denoting the process performed by Block-301.
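A minimal sketch (Python, illustrative only) of the uniform quantizer Q of Eq. (29), with thresholds v_{k,0} = -∞, v_{k,ν} = Δ_k·ν and v_{k,G_k} = +∞; the step Δ_k and the bit-width are arbitrary example values.

# Sketch: uniform quantizer Q() of Eq. (29). Thresholds are v_0 = -inf,
# v_nu = delta * nu for nu = 1..G-1, v_G = +inf; the output is the interval index.
# delta and the bit-width B are illustrative values, not taken from the patent.

def quantize_llr(s, delta, B):
    G = 2 ** B                       # G_k = 2**B_{j,k}^(306) levels
    for nu in range(1, G):
        if s <= delta * nu:          # s falls in [v_{nu-1}, v_nu]
            return nu
    return G                         # s falls in [v_{G-1}, +inf)

if __name__ == "__main__":
    print([quantize_llr(s, delta=0.5, B=3) for s in (-2.1, 0.2, 1.3, 7.0)])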

Various approaches can be followed for the choice of the quantization interval edges v_{k,ν}. One possible implementation provides that they are chosen according to the statistics of Signal-305 in order to maximize the generalized mutual information (GMI) between the transmitted bits and Signal-306, which provides the maximum rate achievable for a given quantization choice. Considering a decoder having as input the quantized LLR

S_{j,k}^(306) = ν : S_{j,k}^(305) ∈ [ v_{k,ν-1} , v_{k,ν} ] = Q( S_{j,k}^(305) ) ,   (30)

and assuming equiprobable inputs, the generalized mutual information can be written as

GMI( S_{j,k}^(306) ) = 1 - Σ_{ν=1}^{G_k} [ p( S_{j,k}^(306) = ν , b_{j,k} = 0 ) log₂( 1 + e^(-λ(ν)) ) + p( S_{j,k}^(306) = ν , b_{j,k} = 1 ) log₂( 1 + e^(+λ(ν)) ) ] ,   (31)

where p( S_{j,k}^(306) = ν , b_{j,k} = b ) is the joint probability that S_{j,k}^(306) = ν and b_{j,k} = b, and λ(ν) is the quantized LLR value associated with ν. The maximum GMI is achieved when λ(ν) = c · log[ p( S_{j,k}^(306) = ν | b_{j,k} = 0 ) / p( S_{j,k}^(306) = ν | b_{j,k} = 1 ) ] for any c > 0, and in this case the generalized mutual information coincides with the mutual information between S_{j,k}^(306) and the corresponding transmitted bit, i.e.

GMI( S_{j,k}^(306) ) = I( S_{j,k}^(306) ; b_{j,k} ) ,   (32)

where p( S_{j,k}^(306) = ν ) is the probability that S_{j,k}^(306) = ν.

In a possible embodiment, the quantization process could be designed in such a way that it maximizes the sum of the mutual information of the LLRs associated to the same transmitted/received constellation point, under a constraint on the total number of bits used for the set reported in Eq. (28):

maximize Σ_{k=1}^{M} I( S_{j,k}^(306) ; b_{j,k} )   subject to   Σ_{k=1}^{M} B_{j,k}^(306) ≤ B_tot .   (34)

Assuming a uniform quantization, the solution of the above reported problem needs the computation of M quantization steps, Δ_1, ..., Δ_M, and M bit-widths, B_{j,1}, ..., B_{j,M}. Different techniques can be used to solve Eq. (34). The quantizing operation generates either a constant number of bits for each group of soft metrics relative to a same constellation symbol or a different number of bits for each soft metric relative to a same constellation symbol.

Assuming the use of a 16-QAM constellation, each constellation point carries 4 bits (M = 4). Let us assume that B_tot, the number reported in Eq. (34), is equal to 16 (B_tot = 16). In the case of constant bit-width it follows that B_{j,1} = B_{j,2} = B_{j,3} = B_{j,4} = 4. Note that B_{j,1} + B_{j,2} + B_{j,3} + B_{j,4} = 16. Otherwise it could happen that, maximizing the mutual information (see Eq. (34)), the bit-width is not constant, for example B_{j,1} = 5, B_{j,2} = 5, B_{j,3} = 3 and B_{j,4} = 3. Note that also in this second case the constraint is satisfied: B_{j,1} + B_{j,2} + B_{j,3} + B_{j,4} = 16.

Compression

As an embodiment of compression, Block-302 provides the use of Huffman coding on each element S_{j,k}^(306). In this case the size of the code word associated with the value ν_k is about

-log₂ p(ν_k) ,   (35)

where p(ν_k) is the probability that S_{j,k}^(306) = ν_k. If

length[ v̂ ] ≤ L_i ,   (36)

then ṽ = v and in this case we have lossless compression and the only penalty for the system performance is the quantization process. Otherwise, when length[ v̂ ] > L_i, we replace some quantized LLR values. In this case, the compression introduces a further distortion in the representation of the LLR, beyond that of the quantizer. The choice of the LLRs to be substituted and their replacement will have an impact on the system performance. Note that while quantization and entropy coding are performed for each bit separately, the compression is done on the ensemble of the LLRs of all the bits. This problem can be seen as a multidimensional multiple-choice knapsack problem. Unfortunately, this problem is NP-hard, thus a possible embodiment provides the use of a greedy approach for the compression.
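For illustration, the sketch below (Python, standard library only) builds a Huffman code for one quantized-LLR alphabet and shows that the resulting codeword lengths are close to -log2 of the probabilities, as in Eq. (35); the probability table is invented for the example.

# Sketch: Huffman code for one quantized-LLR alphabet; codeword lengths are
# close to -log2(p) as in Eq. (35). The probabilities are made-up examples.
import heapq, math, itertools

def huffman(probs):
    """probs: dict value -> probability. Returns dict value -> codeword string."""
    counter = itertools.count()                       # tie-breaker for the heap
    heap = [(p, next(counter), {v: ""}) for v, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)
        p1, _, c1 = heapq.heappop(heap)
        merged = {v: "0" + w for v, w in c0.items()}
        merged.update({v: "1" + w for v, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(counter), merged))
    return heap[0][2]

if __name__ == "__main__":
    probs = {1: 0.5, 2: 0.25, 3: 0.15, 4: 0.10}
    code = huffman(probs)
    for v, p in probs.items():
        print(v, code[v], len(code[v]), round(-math.log2(p), 2))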

We consider the following iterative procedure:

1. Let v^(i)(n) be the quantized index values obtained by Block-301.
2. Initialize ṽ^(i)(n) = v^(i)(n), i = 1, ..., M.
3. If (36) is satisfied, terminate the process.
4. Otherwise, find the quantized value to be replaced, and its replacement in V_i^(L), by minimizing a cost function f; apply the substitution and go back to step 3.

A possible expression of the cost function f is the MI loss, i.e. the loss of mutual information incurred when the quantized value v^(i)(n) is replaced by ṽ^(i)(n); it is computed as a double sum, over the bit values b ∈ {0, 1} and the quantization indices ν = 1, ..., G_k - 1, of the joint probabilities p( S_{j,k}^(306) = ν , b_{j,k} = b ), weighted by logarithmic terms that compare the original quantized value and its replacement (38). In this case, among all the quantized values that have a given length, the one providing the highest MI is selected.
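A minimal sketch (Python, illustrative only) of the greedy idea of steps 1 to 4 above; for simplicity it uses the distance between quantized values as cost function (mentioned earlier as a simple example) instead of the MI loss of Eq. (38), and the prefix code table is the same invented one used in the earlier sketches.

# Sketch: greedy length limiting (steps 1-4 above) with a simple distance cost,
# standing in for the MI-loss cost of Eq. (38). CODE is an illustrative prefix code.
CODE = {0: "0", 1: "10", 2: "110", 3: "111"}

def greedy_fit(indices, L_max):
    """Replace quantized values until the encoded group fits in L_max bits."""
    # assumes L_max >= len(indices), since the shortest codeword is 1 bit
    v = list(indices)                                   # step 2: v_tilde = v
    while sum(len(CODE[x]) for x in v) > L_max:         # step 3: check (36)
        # step 4: among all single-value substitutions that shorten the encoding,
        # pick the one with the smallest distance |new - old|
        best = min((abs(new - v[k]), k, new)
                   for k in range(len(v))
                   for new in CODE
                   if len(CODE[new]) < len(CODE[v[k]]))
        _, k, new = best
        v[k] = new
    return v

if __name__ == "__main__":
    print(greedy_fit([3, 3, 2, 1], L_max=8))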

Example I

The invention can be applied to the demodulation process. Fig. 11 shows in schematic fashion a variant of the demodulation process of Fig. 5. Demapper 55 generates a stream P of soft metrics representing, for example, the probability that a bit in the transmitted signal stream be '0' or '1'. Optional block 301 represents a quantizer unit, or any other suitable processing block that transforms the representation of the soft metrics generated by the demapper, which are then compressed by compressor unit 302. The output of Block-302 is then processed by Block-56. Since the number of bits used to represent Signal-307 is less than the number of bits used for the P signal, the de-interleaver 56 of this variant of the invention uses less memory. The output of the de-interleaver 56 is then decompressed by Block-304 and optionally further processed by Block-305, for example to change or adapt its representation, according to the needs.
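Pulling the pieces together, the following sketch (Python, purely conceptual) mimics the chain of Fig. 11: quantize the demapper output, compress each symbol's group of soft metrics into a fixed-length word, and de-interleave the fixed-length words; the toy quantizer, code table, permutation and numeric values are all invented, and the decompression of Block-304 is not shown.

# Sketch: the Fig. 11 chain (quantize -> compress -> de-interleave), with each
# symbol's group of compressed soft metrics kept at a constant length so that
# de-interleaving can operate on fixed-size words. All names are placeholders.

def quantize(llrs, delta=0.5, levels=4):
    return [min(int(abs(x) / delta), levels - 1) for x in llrs]   # toy Block-301

CODE = {0: "0", 1: "10", 2: "110", 3: "111"}

def compress_group(group, budget=8):                              # toy Block-302
    bits = "".join(CODE[v] for v in group)                        # assumes it fits
    return bits.ljust(budget, "0")                                # fixed-length word

def deinterleave(words, tau):                                     # Block-56
    out = [None] * len(words)
    for i, j in enumerate(tau):
        out[i] = words[j]
    return out

if __name__ == "__main__":
    llrs = [2.3, -0.1, 0.7, -1.9, 0.4, 3.0, -0.2, 1.1]            # demapper output P
    groups = [quantize(llrs[i:i + 2]) for i in range(0, 8, 2)]    # 2 LLRs per symbol
    words = [compress_group(g) for g in groups]
    print(deinterleave(words, tau=[2, 0, 3, 1]))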

Fig. 12 represents a further variant in which the quantizer 301 is directly integrated in the demapper, Block-550.

Example II

The invention can be applied to a diversity receiver akin to that represented in Fig. 6. Fig. 13 shows in a schematic fashion this variant of