
Title:
DISTANCE AND DISTORTION ESTIMATION METHOD AND APPARATUS IN CHANNEL OPTIMIZED VECTOR QUANTIZATION
Document Type and Number:
WIPO Patent Application WO/1999/038261
Kind Code:
A1
Abstract:
A channel optimized vector quantization apparatus includes means (40) for weighting a sample vector x by a weighting matrix A and means (44) for weighting a set of code book vectors {ĉr} by a weighting matrix B. Means (30W) form a set of distance measures {dw(Ax, Bĉr)} representing the distance between the weighted sample vector Ax and each weighted code book vector Bĉr. Means (34W) form a set of distortion measures {αi(x)} by multiplying each distance measure by a channel transition probability pr|i that an index r has been received at a decoder when an index i has been sent from an encoder and adding together these multiplied distance measures for each possible index r. Finally, means (36W) determine an index imin corresponding to the smallest distortion measure αi(x) and represent the sample vector by this index imin.

Inventors:
NYSTROEM JOHAN
SVENSSON TOMAS
HAGEN ROAR
MINDE TOR BJOERN
Application Number:
PCT/SE1999/000023
Publication Date:
July 29, 1999
Filing Date:
January 12, 1999
Assignee:
ERICSSON TELEFON AB L M (SE)
International Classes:
H03M7/30; (IPC1-7): H03M7/30; G10L3/00
Foreign References:
EP0632429A2, 1995-01-04
Other References:
ISSPA 96, Volume 2, August 1996, HONG XIAO et al., "Channel Optimized Vector Quantization with Soft Input Decoding", Section 2.
IEEE TRANSACTIONS ON INFORMATION THEORY, Volume 36, No. 4, July 1990, NARIMAN FARVARDIN, "A Study of Vector Quantization for Noisy Channels".
Attorney, Agent or Firm:
Mrazek, Werner (Aros Patent AB P.O. Box 1544 Uppsala, SE)
Claims:
CLAIMS
1. A distance estimation method in channel optimized vector quantization, characterized by: weighting a sample vector x by a first weighting matrix A; weighting a code book vector ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; and forming a distance measure dw(Ax, Bĉr) representing the distance between said weighted sample vector Ax and said weighted code book vector Bĉr.
2. A distortion estimation method in channel optimized vector quantization, characterized by: weighting a sample vector x by a first weighting matrix A; weighting a set of code book vectors ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; forming a set of distance measures dw(Ax, Bĉr) representing the distance between said weighted sample vector Ax and each weighted code book vector Bĉr; forming a distortion measure αi(x) by multiplying each distance measure by a predetermined channel transition probability pr|i that an index r has been received at a decoder when an index i has been sent from an encoder and adding together said multiplied distance measures for each possible index r.
3. A channel optimized vector quantization method, characterized by: weighting a sample vector x by a first weighting matrix A; weighting a set of code book vectors ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; forming a set of distance measures dw(Ax, Bĉr) representing the distance between said weighted sample vector Ax and each weighted code book vector Bĉr; forming a set of distortion measures {αi(x)} by multiplying each distance measure by a predetermined channel transition probability pr|i that an index r has been received at a decoder when an index i has been sent from an encoder and adding together said multiplied distance measures for each possible index r; determining an index imin corresponding to the smallest distortion measure αi(x); and representing said sample vector by this index imin.
4. The method of claim 1, 2 or 3, characterized in that only one of said first weighting matrix A and said second weighting matrix B is different from the identity matrix.
5. The method of claim 1, 2 or 3, characterized in that each weighting matrix A, B is sample vector independent.
6. The method of claim 1, 2 or 3, characterized in that each weighting matrix A, B is time independent.
7. The method of claim 1, 2 or 3, characterized in that each weighting matrix A, B is constant.
8. The method of claim 1, 2 or 3, characterized in that said sample vector x and said code book vector ĉr are of the same length.
9. The method of any of the preceding claims, characterized in that said distance measure dw(Ax, Bĉr) is a weighted squared Euclidean distance measure.
10. A distortion estimation method in channel optimized vector quantization, characterized by calculating a distortion measure αi'(x) in accordance with the expression

αi'(x) = −2 x^T A^T B μi + 1^T (Φi ∘ (B^T B)) 1

where x is a sample vector, A is a sample vector weighting matrix, B is a code book vector weighting matrix, at least one of said weighting matrices A, B being different from the identity matrix, μi is an expected i:th reconstruction vector, Φi is a conditional i:th code book correlation matrix, T denotes transposition, 1 denotes a vector consisting of all ones, and ∘ denotes elementwise multiplication.
11. A channel optimized vector quantization method, characterized by calculating a set of distortion measures {αi'(x)} in accordance with the expression

αi'(x) = −2 x^T A^T B μi + 1^T (Φi ∘ (B^T B)) 1

where x is a sample vector, A is a sample vector weighting matrix, B is a code book vector weighting matrix, at least one of said weighting matrices A, B being different from the identity matrix, μi is an expected i:th reconstruction vector, Φi is a conditional i:th code book correlation matrix, T denotes transposition, 1 denotes a vector consisting of all ones, and ∘ denotes elementwise multiplication; determining an index imin corresponding to the smallest distortion measure αi'(x); and representing said sample vector by this index imin.
12. The method of claim 10 or 11, characterized in that said expected i:th reconstruction vector is defined as

μi = Σr pr|i ĉr

where ĉr is the r:th code book vector, and pr|i is a predetermined channel transition probability that an index r has been received at a decoder when an index i has been sent from an encoder.
13. The method of claim 10 or 11, characterized in that said conditional i:th code book correlation matrix is defined as

Φi = Σr pr|i ĉr ĉr^T

where ĉr is the r:th code book vector, and pr|i is a predetermined channel transition probability that an index r has been received at a decoder when an index i has been sent from an encoder.
14. The method of claim 10, 11, 12 or 13, characterized by precomputing and storing the quantities −2μi and Φi; and retrieving these stored quantities each time a distortion measure αi'(x) is to be calculated for a given sample vector x.
15. The method of claim 14, characterized in that only said code book vector weighting matrix B is different from the identity matrix.
16. The method of claim 14, characterized in that said sample vector weighting matrix A and said code book weighting matrix B are the same matrix.
17. The method of claim 14, characterized in that said sample vector weighting matrix A is diagonal.
18. The method of claim 14, characterized in that said code book vector weighting matrix B is diagonal.
19. The method of claim 10, 11, 12 or 13, wherein only said sample vector weighting matrix A is different from the identity matrix, characterized by precomputing and storing the quantities −2μi and 1^T (Φi ∘ I) 1; and retrieving these stored quantities each time a distortion measure αi'(x) is to be calculated for a given sample vector x.
20. The method of claim 10, 11, 12 or 13, wherein said code book vector weighting matrix B is constant and different from the identity matrix, characterized by precomputing and storing the quantities −2Bμi and 1^T (Φi ∘ (B^T B)) 1; and retrieving these stored quantities each time a distortion measure αi'(x) is to be calculated for a given sample vector x.
21. The method of claim 10, 11, 12 or 13, wherein said sample vector weighting matrix A and said code book vector weighting matrix B are both constant and different from the identity matrix, characterized by precomputing and storing the quantities −2A^T Bμi and 1^T (Φi ∘ (B^T B)) 1; and retrieving these stored quantities each time a distortion measure αi'(x) is to be calculated for a given sample vector x.
22. A distance estimation apparatus in channel optimized vector quantization, characterized by: means (40) for weighting a sample vector x by a first weighting matrix A; means (44) for weighting a code book vector ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; and means (30W) for forming a distance measure dw(Ax, Bĉr) representing the distance between said weighted sample vector Ax and said weighted code book vector Bĉr.
23. A distortion estimation apparatus in channel optimized vector quantization, characterized by: means (40) for weighting a sample vector x by a first weighting matrix A; means (44) for weighting a set of code book vectors ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; means (30W) for forming a set of distance measures dw(Ax, Bĉr) representing the distance between said weighted sample vector Ax and each weighted code book vector Bĉr; means (34W) for forming a distortion measure by multiplying each distance measure by a predetermined channel transition probability pr|i that an index r has been received at a decoder when an index i has been sent from an encoder and adding together said multiplied distance measures for each possible index r.
24. A channel optimized vector quantization apparatus, characterized by: means (40) for weighting a sample vector x by a first weighting matrix A; means (44) for weighting a set of code book vectors ĉr by a second weighting matrix B different from said first weighting matrix A, at least one of said weighting matrices A, B being different from the identity matrix; means (30W) for forming a set of distance measures {dw(Ax, Bĉr)} representing the distance between said weighted sample vector Ax and each weighted code book vector Bĉr; means (34W) for forming a set of distortion measures {αi(x)} by multiplying each distance measure by a predetermined channel transition probability pr|i that an index r has been received at a decoder when an index i has been sent from an encoder and adding together said multiplied distance measures for each possible index r; means (36W) for determining an index imin corresponding to the smallest distortion measure αi(x) and representing said sample vector by this index imin.
25. A distortion estimation apparatus in channel optimized vector quantization, characterized by means (34W) for calculating a distortion measure αi'(x) in accordance with the expression

αi'(x) = −2 x^T A^T B μi + 1^T (Φi ∘ (B^T B)) 1

where x is a sample vector, A is a sample vector weighting matrix, B is a code book vector weighting matrix, at least one of said weighting matrices A, B being different from the identity matrix, μi is an expected i:th reconstruction vector, Φi is a conditional i:th code book correlation matrix, T denotes transposition, 1 denotes a vector consisting of all ones, and ∘ denotes elementwise multiplication.
26. A channel optimized vector quantization apparatus, characterized by means (34W) for calculating a set of distortion measures {αi'(x)} in accordance with the expression

αi'(x) = −2 x^T A^T B μi + 1^T (Φi ∘ (B^T B)) 1

where x is a sample vector, A is a sample vector weighting matrix, B is a code book vector weighting matrix, at least one of said weighting matrices A, B being different from the identity matrix, μi is an expected i:th reconstruction vector, Φi is a conditional i:th code book correlation matrix, T denotes transposition, 1 denotes a vector consisting of all ones, and ∘ denotes elementwise multiplication; and means (36W) for determining an index imin corresponding to the smallest distortion measure αi'(x) and representing said sample vector by this index imin.
27. The apparatus of claim 25 or 26, characterized by means (46, 48) for storing the precomputed quantities −2μi and Φi and for retrieving these stored quantities each time a distortion measure αi'(x) is to be calculated for a given sample vector x.
Description:
DISTANCE AND DISTORTION ESTIMATION METHOD AND APPARATUS IN CHANNEL OPTIMIZED VECTOR QUANTIZATION

TECHNICAL FIELD

The present invention relates to a distance and distortion estimation method and apparatus in channel optimized vector quantization. The invention also relates to an encoding method and apparatus based on these estimates.

BACKGROUND OF THE INVENTION

Vector quantization (VQ) is a data encoding method in which a sample vector consisting of several samples is approximated by the "nearest" vector of a collection of vectors called a code book. Instead of representing the sample vector by its components, it is represented by the code book index of this "nearest" code book vector. This index is transmitted to a decoder, which uses the index to retrieve the corresponding code book vector from a copy of the code book. Vector quantization is used in, for example, speech coding in mobile telephony.
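As a minimal illustration of this plain, non-channel-optimized scheme (names such as vq_encode and codebook are assumptions, not taken from the patent):

import numpy as np

def vq_encode(x, codebook):
    # Return the index of the code book vector "nearest" to x in
    # squared Euclidean distance.
    dists = np.sum((codebook - x) ** 2, axis=1)
    return int(np.argmin(dists))

def vq_decode(index, codebook):
    # The decoder is a simple table lookup in its copy of the code book.
    return codebook[index]

# Example: a four-vector code book of two-dimensional sample vectors.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x = np.array([0.9, 0.2])
i = vq_encode(x, codebook)        # index transmitted over the channel
x_hat = vq_decode(i, codebook)    # vector reconstructed at the receiver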

A common distance or distortion measure (see ref. [1]) used to determine the "nearest" code book vector is the squared Euclidean distance between the sample vector and the code book vector.

Another proposed, more complex distance or distortion measure (see ref. [2]) is the perceptually weighted squared Euclidean distance, in which errors in low-energy frequency bands are over-weighted while errors in high-energy bands are under-weighted. The effect is that errors in high-energy parts of a signal tend to be allowed (since the high energy will mask them anyway), while errors in low-energy parts tend to be disallowed (since the error energy would otherwise be a significant part of the total signal energy). The weighting may be performed by a weighting filter, the spectral characteristics of which are essentially the inverse of the spectral characteristics of the signal to be encoded. Since the signal characteristics may be time-varying, the weighting filter may also be time-varying (see ref. [2]).

A drawback of these methods is that the transmitted index may, due to the influence of the transmission channel, not always be the same as the received index. In these cases, the actually decoded vector may differ significantly from the original sample vector. The weighted squared Euclidean distance has the further drawback that the weighting filter is sometimes determined in a feedback loop, which implies that a received error may influence the weighting filter and therefore the decoded signal for a long time.

An often used approach to reduce the sensitivity to channel errors is to apply forward error correction coding (FEC). In this way the decoder may detect and even correct errors that occurred during transmission before code book lookup. However, a drawback of this method is that redundancy has to be introduced in the code words that are transmitted over the channel. Furthermore, this method requires very long codes in order to give an acceptable error rate performance. A common way to obtain such long code words is to collect indices from several vector quantized sample vectors before the FEC coding is performed. This collecting process results in a substantial delay, which is in general undesirable in real time applications, such as mobile telephony, video and audio transmission.

An alternative approach to error protection is channel optimized vector quantization (COVQ) (see ref. [3]). Instead of protecting the transmitted index against channel errors, COVQ takes the statistical properties of the channel into account already in the code book construction. The idea behind COVQ is that although the wrong code book index may have been received, the decoded code book vector should still be "close" to the original sample vector. A characteristic feature of COVQ is that the number of indices that may be transmitted is often actually smaller than the number of indices that may be received. In this way, the receiver code book may contain vectors "in between" sample vectors corresponding to actually transmitted indices. A channel error may therefore still result in a decoded vector that is "close" to the intended vector. Thus, COVQ offers a jointly optimized vector quantization and channel protection system. Since long code words are not required, the extra delay introduced by FEC coding may be avoided. However, a drawback of COVQ is that it is very computationally intense. Therefore, distance and distortion measures have been based on the simple squared Euclidean distance and not on the more complex but preferable perceptually weighted distance measure.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a distance and distortion estimation method and apparatus in channel optimized vector quantization that provide increased robustness without the delays that are traditionally associated with channel coded vector quantization indices, preferably without significantly increased complexity.

Another object of the invention is a channel optimized vector quantization encoding method and apparatus that use these new distance and distortion estimates for more robust encoding.

These objects are achieved by methods and apparatus in accordance with the accompanying claims.

Briefly, the present invention achieves the above objects by weighting the sample vector and the code book vectors before distance and distortion measures are calculated using the weighted vectors. In a preferred embodiment, the complexity of the weighting process is significantly reduced by pre-computing and storing essential quantities.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:

FIG. 1 is a block diagram of a channel protected vector quantization system;
FIG. 2 is a block diagram of a channel optimized vector quantization system;

FIG. 3 is a more detailed block diagram of a channel optimized vector quantization system;
FIG. 4 is a block diagram of an embodiment of a channel optimized vector quantization system in accordance with the present invention;
FIG. 5 is a flow chart illustrating the encoding process in a channel optimized vector quantization system in accordance with the present invention;
FIG. 6 is a block diagram of a preferred embodiment of a channel optimized vector quantization system in accordance with the present invention; and
FIG. 7 is a flow chart illustrating the encoding process in a channel optimized vector quantization system in accordance with a preferred embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

In the following description, the same reference designations are used for elements with the same or similar functions.

Before the invention is described in detail, a short summary of channel protected vector quantization and channel optimized vector quantization will be given with reference to figures 1-3.

Fig. 1 is a block diagram illustrating the principles of a channel protected vector quantized communication system. A vector source 10 outputs sample vectors to a VQ encoder 12. VQ encoder 12 searches a code book 14 containing a collection of code book vectors to find the "closest" match. The index i of this code book vector is forwarded to a channel encoder 16 that provides this index with error protection. The protected index is forwarded to a modulator 18 and transmitted over a noisy channel.

The received signal is demodulated in a demodulator 20. As indicated, modulator 18, demodulator 20 and the noisy channel together form a digital channel. The demodulated signal is channel decoded in a channel decoder 22, and a received index r is forwarded to a VQ decoder 24. VQ decoder 24 is a simple lookup table that retrieves a code vector corresponding to index r from a copy of code book 14. The fact that identical code books are used in the encoder and the decoder has been indicated by the dashed line from code book 14 to decoder 24. Finally, the retrieved code book vector is forwarded to a user 26.

Fig. 2 is a block diagram illustrating the principles of a channel optimized vector quantized communication system. A vector source 10 outputs sample vectors to a COVQ encoder 13. COVQ encoder 13 uses a COVQ code book 28 containing a large collection of code book vectors to find the "closest" match (in accordance with a distortion measure further described below). An index i characterizing the quantized sample vector is forwarded to a modulator 18 and transmitted over a noisy channel.

The received signal is demodulated in a demodulator 20. The received and demodulated index r is forwarded to a COVQ decoder 25. COVQ decoder 25 is a simple lookup table that retrieves a code vector corresponding to index r from a copy of code book 28. Finally the retrieved code book vector is forwarded to a user 26.

As should be apparent from the above, an essential quantity in vector quantization in general is the "closeness" or "distance" d(x, ĉi) between a sample vector x and a code book vector ĉi. A common distance estimate is the squared Euclidean distance measure

d(x, ĉi) = ‖x − ĉi‖²

In vector quantization (VQ) this measure is usually used to select the code book vector ĉi that best matches a given sample vector x.

In channel optimized vector quantization (COVQ) this distance measure may be used to calculate a collection of distortion measures αi(x) according to

αi(x) = E[ d(x, ĉR) | I = i ] = Σr pr|i d(x, ĉr)

where E[.] denotes expected value, R, I are stochastic variables, M is the number of vectors in the COVQ code book, the sum runs over all M indices r, ĉr is a COVQ code book vector corresponding to index r, and pr|i is the conditional channel transition probability that code book index r was received when index i was actually sent over the channel. In other words, αi(x) represents the expected decoding error or distortion of a sample vector x that has been vector quantized (encoded) to index i. In channel optimized vector quantization the index i giving the smallest distortion αi(x) for a given sample vector x is selected as the encoding index to be transmitted.

As noted above, the conditional channel transition probabilities pr|i are required to calculate the expected distortions. For a binary symmetric channel the conditional channel transition probabilities pr|i may be calculated as

pr|i = ε^dH(r,i) (1 − ε)^(N − dH(r,i))

where N denotes the number of bit positions in an index, dH(r,i) denotes the Hamming distance (the number of differing bits) between r and i, and ε denotes the bit error rate (BER) of the channel.
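A hypothetical helper computing these binary symmetric channel probabilities might look as follows (N-bit indices, bit error rate eps; the function names are assumptions):

import numpy as np

def hamming_distance(r, i):
    # Number of differing bits between the indices r and i.
    return bin(r ^ i).count("1")

def bsc_transition_matrix(num_bits, eps):
    # Return a (2**N x 2**N) matrix P with
    # P[i, r] = p(r|i) = eps**dH(r, i) * (1 - eps)**(N - dH(r, i)).
    size = 2 ** num_bits
    P = np.empty((size, size))
    for i in range(size):
        for r in range(size):
            d = hamming_distance(r, i)
            P[i, r] = eps ** d * (1.0 - eps) ** (num_bits - d)
    return P

P = bsc_transition_matrix(num_bits=3, eps=0.05)
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution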

Fig. 3 illustrates the "matching" process in the COVQ encoder in more detail. The distances d(x, ĉr) of a sample vector x from vector source 10 to each of the vectors ĉr of the code book 28 are calculated in a distance calculator 30. These distances are multiplied by corresponding channel transition probabilities pr|i stored in a storage block 32. The products are formed and accumulated in a distortion calculator 34, which forms a set of distortion measures αi(x). Block 36 finds the index i of the distortion measure in the set that has the smallest value. This is the index that will be transmitted.
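As a rough sketch of this matching process (illustrative only, assuming a numpy code book array and a transition probability matrix P with P[i, r] = pr|i):

import numpy as np

def covq_encode(x, codebook, P):
    # codebook: (M, d) array of code book vectors c_r; P[i, r] = p(r|i).
    dists = np.sum((codebook - x) ** 2, axis=1)   # d(x, c_r) for every r
    distortions = P @ dists                       # alpha_i(x) = sum_r p(r|i) d(x, c_r)
    return int(np.argmin(distortions))            # index i_min to be transmitted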

In accordance with an important aspect of the present invention the simple distance measure d(x, ĉi) above is replaced by a more general class of weighted distance measures

dw(x, ĉi) = d(Ax, Bĉi)

where A is a d×l1 weighting matrix and B is a d×l2 weighting matrix. Here l1 is the number of samples in sample vector x, while l2 is the number of components in code book vector ĉi. Thus, this more general weighted distance measure allows for sample vectors and code book vectors having a different number of dimensions. The weighted vectors Ax and Bĉi, however, have the same number of dimensions, namely d. In general the weighting matrices A, B may depend on sample vector x and/or on time. Furthermore, at least one of the matrices A, B should be different from the identity matrix for at least one combination of x, ĉi (otherwise there would not be any weighting).

In a preferred embodiment of the present invention the distance measure dw(x, ĉi) comprises the weighted squared Euclidean distance measure or norm

dw(x, ĉr) = ‖Ax − Bĉr‖²

Other norms are, for example, the Hölder norm or the Minkowski norm

dw(x, ĉr) = max over 0 ≤ k ≤ d−1 of |(Ax)k − (Bĉr)k|

From the above examples it is clear that the weighted distance or error measure according to the present invention does not have to fulfill all the requirements of a mathematical definition of a norm. The preferred weighted distance measure, for example, does not fulfill the triangle inequality.
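For illustration, the preferred weighted squared Euclidean measure and the max-norm variant might be computed as follows (A, B, x and the code book vector c are assumed numpy arrays of compatible shapes):

import numpy as np

def weighted_sq_euclidean(x, c, A, B):
    # d_w(x, c) = ||Ax - Bc||^2, the preferred measure above.
    e = A @ x - B @ c
    return float(e @ e)

def weighted_max_norm(x, c, A, B):
    # d_w(x, c) = max_k |(Ax)_k - (Bc)_k|.
    return float(np.max(np.abs(A @ x - B @ c)))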

Furthermore, if certain restrictions are imposed on A or B the following special cases are also obtained.

If A and/or B do not depend on the sample vector x, they do not have to be calculated for each new sample vector, but may be stored.

If either A or B (but not both) equals the identity matrix, the corresponding matrix multiplication may be omitted, giving either dw(x, ĉr) = d(x, Bĉr) or dw(x, ĉr) = d(Ax, ĉr).

If A = B one obtains dw(x, ĉr) = d(Ax, Aĉr). For a weighted squared Euclidean distance measure this reduces to ‖A(x − ĉr)‖². This measure may be useful when vector quantizing line spectral frequencies (LSF).

If A and B are diagonal the complexity of the matrix multiplication is significantly reduced.

The sample vectors and code book vectors may be of the same length d. In this case A, B are square matrices.

Combinations of the above special cases are of course also possible. For example, matrices A and B may be constant, equal and diagonal.

Since the weighting matrices A, B are essential for the present invention, a few examples of the calculation of these matrices will now be given below.

A suitable weighting for vector quantization of LSF parameters is A = B = W, where W is a diagonal matrix. The diagonal elements wi of W may be calculated according to the equation wi = P(ωi)^γ, where P denotes the power spectrum of the synthesis filter that corresponds to the line spectral frequencies ωi and γ is a constant such that 0 < γ < 1.

Since the elements of W depend on the synthesis filter, which is updated on a frame by frame basis, matrix W will be time dependent.

In speech coding it is suitable to employ error weighting through filtering. In this case the weighting matrices A and B are computed from the impulse response h(0), h(1), ..., h(M−1) of the filter. Linear filtering with such a filter is equivalent to multiplying with the M×M lower triangular Toeplitz matrix H, whose element on row j and column k is h(j − k) for j ≥ k and zero for j < k.

The weighting matrices A and B are set equal to H for this type of weighting. In general A and B are built from different impulse responses. For code book search in CELP coding (CELP = Code Excited Linear Prediction), A is given by the impulse response of the perceptual weighting filter, whereas B is given by the impulse response of the cascade of the perceptual weighting filter and the synthesis filter. The weighting matrices A, B will be time dependent (or input data dependent), since the filters are updated for each speech (sub)frame.
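A minimal sketch of building such a filtering matrix H from an impulse response (illustrative only; the helper name filter_matrix is an assumption):

import numpy as np

def filter_matrix(h):
    # Build the M x M lower triangular Toeplitz matrix H with
    # H[j, k] = h[j - k] for j >= k and 0 otherwise, so that H @ s equals
    # linear filtering of a length-M signal s (zero initial filter state).
    h = np.asarray(h, dtype=float)
    M = len(h)
    H = np.zeros((M, M))
    for j in range(M):
        H[j, : j + 1] = h[j::-1]
    return H

H = filter_matrix([1.0, 0.5, 0.25])
# In the CELP example above, A would be built from the impulse response of the
# perceptual weighting filter and B from the cascade of the weighting and
# synthesis filters, with both matrices refreshed every (sub)frame.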

In accordance with a further important aspect of the present invention the weighted distance measure may be used to calculate a new distortion measure according to

αi(x) = E[ dw(x, ĉR) | I = i ] = Σr pr|i dw(x, ĉr) = Σr pr|i d(Ax, Bĉr)

Fig. 4 is a block diagram of an embodiment of a channel optimized vector quantization system in accordance with the present invention. A weighting matrix A from a matrix storage block 38 is forwarded to a weighting unit 40 together with a sample vector x from vector source 10. Similarly, a weighting matrix B from a matrix storage block 42 is forwarded to a weighting unit 44 together with code book vectors from COVQ code book 28. The weighted vectors from weighting units 40, 44 are forwarded to a weighted distance calculator 30W. The weighted distance measures dw(x, ĉr) are forwarded to a distortion calculator 34W that calculates a set of (weighted) distortion measures. Block 36W selects the distortion measure in the set that has the smallest value and transmits the corresponding index i over the digital channel. Blocks 30W, 32W, 34W, 36W, 40, 44 are preferably implemented by one or several micro/signal processor combinations.

If weighting matrix B is independent of sample vectors x (and time), a weighted code book may be pre-computed and stored in the encoder. In this case, blocks 42 and 44 may be omitted.

Fig. 5 is a flow chart illustrating an embodiment of an encoding process in a channel optimized vector quantization system in accordance with the present invention. In step S1 all code book vectors are weighted by weighting matrix B. In step S2 the current sample vector x is weighted by weighting matrix A. In step S3 the distance, for example the squared Euclidean distance, between the weighted sample vector and each weighted code book vector is calculated. As noted above, it is a characteristic feature of COVQ that the number of indices that may be transmitted may actually be smaller than the number of indices that may be received. The indices that may be transmitted are called active indices and are determined during the training of the encoder as explained in [3]. In the search for the best index to transmit, it is therefore only necessary to consider active indices. Step S4 initializes a search loop by setting a search index i and a variable imin to the first active index i0, and by setting a variable αmin to the distortion αi0 for this index i0. The loop starts in step S5 by testing whether this is the last active index. If not, step S6 is performed, in which the loop variable is updated to the next active index i. Step S7 calculates the corresponding distortion αi. In step S8 the calculated distortion is compared to the current minimum distortion αmin. If the calculated distortion αi is less than the current minimum distortion, step S9 updates the variables imin and αmin. Otherwise step S9 is omitted. The loop then returns to step S5.

When all the active indices have been searched, the loop exits to step S10, in which the final value of variable imin is transmitted. Thereafter step S11 gets the next sample vector. If A and/or B depend on x or time they are updated in step S12, if appropriate. Step S13 tests whether weighting matrix B has changed. If so, the process returns to step S1 for weighting the code book vectors by the new weighting matrix B. Otherwise step S1 can be omitted, since the previously calculated weighted code book vectors are still valid, and the process returns to step S2 instead.
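For illustration only (not the claimed apparatus), steps S1-S10 of Fig. 5 may be sketched as follows, assuming a numpy code book array, a transition probability matrix P with P[i, r] = pr|i and a list of active indices:

import numpy as np

def covq_encode_weighted(x, codebook, P, A, B, active_indices):
    # Return the active index i minimizing
    # alpha_i(x) = sum_r p(r|i) * ||Ax - B c_r||^2.
    Bc = codebook @ B.T                        # step S1: weight all code book vectors
    Ax = A @ x                                 # step S2: weight the sample vector
    dists = np.sum((Bc - Ax) ** 2, axis=1)     # step S3: d_w(x, c_r) for every r
    best_i, best_alpha = None, np.inf
    for i in active_indices:                   # steps S4-S9: search the active indices
        alpha_i = P[i] @ dists                 # step S7: distortion alpha_i(x)
        if alpha_i < best_alpha:
            best_i, best_alpha = i, alpha_i
    return best_i                              # step S10: transmit i_min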

In accordance with a preferred embodiment the weighted squared Euclidean distance measure dw(x, ĉr) = ‖Ax − Bĉr‖² may be used to calculate the distortion measures αi(x) used in COVQ as

αi(x) = E[ ‖Ax − BĉR‖² | I = i ]
      = x^T A^T A x − 2 x^T A^T B E[ĉR | I = i] + E[ĉR^T B^T B ĉR | I = i]
      = x^T A^T A x − 2 x^T A^T B μi + σi(B)

where μi denotes the i:th expected reconstruction vector, which may be pre-computed and stored by using COVQ code book 28 and the channel transition probabilities pr|i, and where σi(B) denotes an i:th code book variance measure, which depends on the choice of B. The first term in this expression is independent of the selected index i, and therefore this term may be omitted, since it only represents a common offset and does not influence the relative magnitudes and ordering of the distortions αi. Thus, it is sufficient to calculate the modified distortion measure

αi'(x) = −2 x^T A^T B μi + σi(B)

In this expression the second term σi(B) is the most computationally intense. However, it involves an expression of the form c^T B^T B c, where c is a vector and B is a matrix.

Such an expression may be rewritten as

c^T B^T B c = Σj Σk (B^T B)jk cj ck = 1^T ((c c^T) ∘ (B^T B)) 1

where ∘ denotes elementwise multiplication and 1 represents a vector consisting of all ones. Thus, remembering that B is independent of i, one obtains

σi(B) = E[ĉR^T B^T B ĉR | I = i] = 1^T (Φi ∘ (B^T B)) 1

where

Φi = E[ĉR ĉR^T | I = i] = Σr pr|i ĉr ĉr^T

is denoted the i:th conditional code book correlation matrix. Thus, the modified distortion measure αi'(x) may be written as

αi'(x) = −2 x^T A^T B μi + 1^T (Φi ∘ (B^T B)) 1

Since μi and Φi only depend on the COVQ code book and the channel transition probabilities, these quantities may be pre-computed and stored, which significantly reduces the complexity of the calculation of the modified distortion measures.
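As a rough sketch of these pre-computations and of the modified distortion measure (assumed array shapes: codebook (M, l2), P[i, r] = pr|i; not the patent's implementation):

import numpy as np

def precompute_moments(codebook, P):
    # mu[i] = sum_r p(r|i) c_r ;  Phi[i] = sum_r p(r|i) c_r c_r^T.
    mu = P @ codebook                                          # shape (M, l2)
    Phi = np.einsum("ir,rj,rk->ijk", P, codebook, codebook)    # shape (M, l2, l2)
    return mu, Phi

def modified_distortions(x, A, B, mu, Phi):
    # alpha_i'(x) = -2 x^T A^T B mu_i + 1^T (Phi_i o (B^T B)) 1, for every i.
    BtB = B.T @ B
    first = -2.0 * (A @ x) @ (B @ mu.T)          # -2 x^T A^T B mu_i
    second = np.einsum("ijk,jk->i", Phi, BtB)    # 1^T (Phi_i o (B^T B)) 1
    return first + second

The transmitted index is then the active index minimizing the returned vector of distortions.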

Furthermore, if certain restrictions are imposed on A or B the following special cases may be obtained.

If A and/or B do not depend on the sample vector x (or time) they do not have to be updated for each new sample vector. In the special case where both A and B are constant, the vector quantities −2A^T Bμi and the scalar quantities 1^T (Φi ∘ (B^T B)) 1 may be pre-computed and stored. If only B is constant, the vector quantities −2Bμi and the scalar quantities 1^T (Φi ∘ (B^T B)) 1 may be pre-computed and stored.

If either A or B (but not both) equals the identity matrix, the corresponding matrix multiplication may be omitted. If B equals the identity matrix, the second term reduces to the constant scalar quantities 1^T (Φi ∘ I) 1, i.e. the sum of the diagonal elements of Φi, which may be pre-computed and stored.

If A = B the complexity is reduced, since B^T B has to be calculated only for one of the terms in αi'(x) and may be reused for the other.

If A and/or B are diagonal the complexity of the matrix multiplication is significantly reduced. If A is diagonal the first term in αi'(x) is simplified, since computing x^T A^T reduces to an elementwise scaling of x instead of a full matrix multiplication. If B is diagonal, B^T B will also be diagonal, which means that the second term in αi'(x) will only require the diagonal elements of Φi. This reduces the storage requirements for Φi and also the complexity of the calculation of the second term.

Combinations of the above special cases are of course also possible. For example, matrices A and B may be constant, equal and diagonal.

Fig. 6 is a block diagram of a preferred embodiment of a channel optimized vector quantization system in accordance with the present invention. In this embodiment pre-calculated expected reconstruction vectors μi and conditional correlation matrices Φi are stored in storage blocks 46 and 48, respectively. These quantities may be said to replace encoder code book 14 and channel transition probabilities storage block 32 of the embodiment in fig. 4 (the code vectors and transition probabilities are of course essential for the pre-computation of these quantities, as outlined above). Together with sample vectors from vector source 10 and weighting matrices A and B from blocks 38 and 42, a set of distortions is calculated in distortion calculator 34W. Block 36W selects the distortion measure in the set that has the smallest value and transmits the corresponding index i over the digital channel. It is to be noted that also in this embodiment a decoder code book 28 is still used for lookup on the decoding side. Blocks 34W, 36W are preferably implemented by one or several micro/signal processor combinations.

Fig. 7 is a flow chart illustrating an example of an encoding process in a channel optimized vector quantization system in accordance with a preferred embodiment of the present invention. In step S20 the second term (involving weighting matrix B, but not the sample vector x) of all the distortions αi' is calculated using the pre-computed quantities Φi. Step S21 initializes a search loop by setting a search index i and a variable imin to the first active index i0, and by setting a variable α'min to the modified distortion for this index i0. The loop starts in step S22 by testing whether this is the last active index. If not, step S23 is performed, in which the loop variable is updated to the next active index i. In step S24 the first term of the corresponding distortion αi' is calculated. Step S25 calculates the distortion αi' by adding this first term and its corresponding second term (calculated in step S20). In step S27 the calculated distortion is compared to the current minimum distortion α'min. If the calculated distortion αi' is less than the current minimum distortion, step S28 updates the variables imin and α'min. Otherwise step S28 is omitted. The loop then returns to step S22. When all the active indices have been searched, the loop exits to step S29, in which the final value of variable imin is transmitted. Thereafter step S30 gets the next sample vector. If A and/or B depend on x or time they are updated in step S31, if appropriate. Step S32 tests whether weighting matrix B has changed. If so, the process returns to step S20 for updating the second term of all distortions with the new weighting matrix B. Otherwise step S20 can be omitted, since the previously calculated second terms are still valid, and the process returns to step S21 instead.
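For illustration only, the split into a per-sample first term and a B-dependent second term (steps S20, S24 and S25) may be sketched as follows, reusing the assumed arrays mu and Phi of the previous sketch; the second terms need to be refreshed only when B changes (step S32):

import numpy as np

def second_terms(Phi, B):
    # 1^T (Phi_i o (B^T B)) 1 for every index i (step S20).
    return np.einsum("ijk,jk->i", Phi, B.T @ B)

def encode_sample(x, A, B, mu, second, active_indices):
    first = -2.0 * (A @ x) @ (B @ mu.T)        # step S24: -2 x^T A^T B mu_i
    best_i, best_alpha = None, np.inf
    for i in active_indices:                   # steps S21-S28: search active indices
        alpha_i = first[i] + second[i]         # step S25
        if alpha_i < best_alpha:
            best_i, best_alpha = i, alpha_i
    return best_i                              # step S29: transmit i_min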

In the above description, a digital channel has been assumed. However, a basic weighted distance measure along the same principles may also be introduced for analog output channels, if the channel transition probabilities are replaced by channel transition density functions and summation is replaced by integration.

The new distance/distortion measures and encoding method in accordance with the present invention provide channel optimized, data dependent quantization and give robustness at low delays.

The preferred embodiment of the invention achieves this at substantially reduced computational complexity, especially if special structures of the weighting matrices A and B may be exploited.

It will be understood by those skilled in the art that various modifications and changes may be made to the present invention without departure from the spirit and scope thereof, which is defined by the appended claims.

REFERENCES

1. Y. Linde, A. Buzo and R. M. Gray, "An Algorithm for Vector Quantizer Design", IEEE Transactions on Communications, Vol. COM-28, pp. 84-95, January 1980.

2. International Telecommunication Union, "Coding of Speech at 16 kbit/s Using Low-Delay Code Excited Linear Prediction", Recommendation G.728, Geneva, 1992.

3. Nariman Farvardin, Vinay Vaishampayan, "On the Performance and Complexity of Channel-Optimized Vector Quantizers", IEEE Transactions on Information Theory, Vol. 37, No. 1, pp. 155-160, January 1991.