

Title:
METHOD FOR DETERMINING UPDATED FILTER COEFFICIENTS OF AN ADAPTIVE FILTER ADAPTED BY AN LMS ALGORITHM WITH PRE-WHITENING
Document Type and Number:
WIPO Patent Application WO/2010/027722
Kind Code:
A1
Abstract:
The application relates to a method for determining at least one updated filter coefficient of an adaptive filter (22) adapted by an LMS algorithm. According to the method, filter coefficients of a first whitening filter (25') are determined, in particular filter coefficients of an LPC whitening filter. The first whitening filter (25') generates a filtered signal. A normalization value is determined based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter (25'). The normalization value is associated with the energy of the filtered signal. At least one updated filter coefficient of the adaptive filter (22) is determined in dependency on the filtered signal and the normalization value. Preferably, updated filter coefficients for all filter coefficients of the adaptive filter (22) are determined.

Inventors:
ANDERSEN ROBERT L (US)
DAVIDSON GRANT A (US)
Application Number:
PCT/US2009/054726
Publication Date:
March 11, 2010
Filing Date:
August 24, 2009
Assignee:
DOLBY LAB LICENSING CORP (US)
ANDERSEN ROBERT L (US)
DAVIDSON GRANT A (US)
International Classes:
H03H21/00; G10L19/00; H04B3/23; H04M9/08
Foreign References:
US 6163608 A, 2000-12-19
US 2004/0252826 A1, 2004-12-16
US 6246760 B1, 2001-06-12
EP 1022866 A1, 2000-07-26
Other References:
SCOTT C DOUGLAS ET AL: "Self-Whitening Algorithms for Adaptive Equalization and Deconvolution", IEEE TRANSACTIONS ON SIGNAL PROCESSING, IEEE SERVICE CENTER, NEW YORK, NY, US, vol. 47, no. 4, 1 April 1999 (1999-04-01), XP011058503, ISSN: 1053-587X
M MBOUP ET AL.: "LMS Coupled Adaptive Prediction and System Identification: A Statistical Model and Transient Mean Analysis", IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 42, no. 10, October 1994 (1994-10-01)
SCOTT C. DOUGLAS ET AL.: "Self-Whitening Algorithms for Adaptive Equalization and Deconvolution", IEEE TRANSACTIONS ON SIGNAL PROCESSING, vol. 47, no. 4, 1 April 1999 (1999-04-01)
Attorney, Agent or Firm:
ANDERSEN, Robert, L. et al. (999 Brannan Street, San Francisco, CA, US)
Claims:
CLAIMS

1. A method for determining at least one updated filter coefficient of an adaptive filter (22) adapted by an LMS algorithm, the method comprising the steps of:

- determining filter coefficients of a first whitening filter (25') outputting a filtered signal;

- determining a normalization value based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter (25'), the normalization value associated with the energy of the filtered signal; and

- determining at least one updated filter coefficient of the adaptive filter (22) in dependency on the filtered signal and the normalization value.

2. The method of claim 1, wherein the first whitening filter is an LPC whitening filter (25') outputting a residual signal as the filtered signal.

3. The method of any of claims 1-2, wherein the step of determining the normalization value comprises:

- determining an energy estimate of the filtered signal based on the one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter (25').

4. The method of claim 3, wherein the step of determining the normalization value further comprises: inverting the energy estimate.

5. The method of any of claims 3-4, wherein the at least one filter coefficient is determined based on the following LMS filter coefficient updating equation:

$h_i(n+1) = h_i(n) + \frac{\mu}{E_r} \cdot r_x(n-i) \cdot r_e(n)$

wherein $h_i(n+1)$ denotes the updated at least one filter coefficient, with $i \in [0, 1, \ldots, N-1]$ and N indicating the number of filter coefficients of the adaptive filter, $h_i(n)$ denotes the actual at least one filter coefficient, $r_x$ denotes the filtered signal, $r_e$ denotes an error signal and $E_r$ denotes the energy estimate.

6. The method of any of claims 3-5, wherein the first whitening filter is an LPC whitening filter (25') outputting a residual signal as the filtered signal, and wherein the filter coefficients of the LPC whitening filter (25') are determined based on a method essentially corresponding to the autocorrelation method in combination with Durbin's algorithm for solving autocorrelation equations.

7. The method of claim 6, wherein the energy estimate corresponds to a squared prediction error $E_i$ of an iteration i of Durbin's algorithm.

8. The method of any of claims 6-7, wherein the energy estimate corresponds to the squared prediction error $E_M$ of the last iteration of Durbin's algorithm.

9. The method of claim 8, wherein the squared prediction error $E_M$ is determined based on the squared prediction error $E_{M-1}$ of the penultimate iteration of Durbin's algorithm.

10. The method of claim 9, wherein the energy estimate is determined by two multiply-accumulate operations.

11. The method of any of the preceding claims, wherein the filter coefficients of the first whitening filter (25') are adaptively updated.

12. The method of claim 11, wherein an updated normalization value is determined each time the filter coefficients of the first whitening filter (25') are updated.

13. The method of claim 12, wherein the filter coefficients of the first whitening filter (25') and the normalization value are updated at an update rate lower than the sampling rate of an input signal upstream of the adaptive filter (22) and the first whitening filter (25').

14. The method of claim 13, wherein the statistical characteristics of the input signal are essentially stationary during the update period of the normalization value.

15. The method of any of claims 1-12, wherein the step of determining a normalization value and the step of determining the filter coefficients of a first whitening filter (25') are carried out once and thereafter the normalization value and the filter coefficients of the first whitening filter (25') remain fixed.

16. The method of any of the preceding claims, wherein the adaptive filter (22) receives the filtered signal.

17. The method of any of claims 1-10, wherein
- the adaptive filter (22) receives a signal from upstream of the first whitening filter (25'), and
- the at least one updated filter coefficient of the adaptive filter (22) is determined further in dependency on an error signal filtered by a second whitening filter (26) having the same filter coefficients as the first whitening filter (25').

18. The method of any of the preceding claims, wherein updated filter coefficients for all filter coefficients of the adaptive filter (22) are determined.

19. The method of any of claims 3-10, wherein the energy estimate is determined during determining the filter coefficients of the first whitening filter (25').

20. An apparatus for determining at least one updated filter coefficient of an adaptive filter (22) adapted by an LMS algorithm, the apparatus comprising:

- means (25') for determining filter coefficients of a first whitening filter (25') outputting a filtered signal and for determining a normalization value based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter (25'), the normalization value associated with the energy of the filtered signal; and

- an update stage (27') for determining at least one updated filter coefficient of the adaptive filter (22) in dependency on the filtered signal and the normalization value.

21. The apparatus of claim 20, wherein the means (25') for determining filter coefficients are configured to determine filter coefficients of an LPC whitening filter (25') outputting a residual signal as the filtered signal.

22. The apparatus of any of claims 20-21, wherein the means (25') for determining the normalization value are configured to determine an energy estimate of the filtered signal based on the one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter (25').

23. The apparatus of claim 22, wherein the means (25') for determining filter coefficients are configured to determine filter coefficients of an LPC whitening filter (25') outputting a residual signal as the filtered signal, and wherein the means for determining the filter coefficients of the LPC whitening filter are configured to determine the filter coefficients based on a method essentially corresponding to the autocorrelation method in combination with Durbin's algorithm for solving autocorrelation equations.

24. The apparatus of claim 23, wherein the energy estimate corresponds to the squared prediction error $E_M$ of the last iteration of Durbin's algorithm.

25. The apparatus of any of claims 20-24, wherein the means (25') for determining a normalization value are configured to determine an updated normalization value each time when the filter coefficients of the first whitening filter (25') are updated.

26. The apparatus of claim 25, wherein the filter coefficients of the first whitening filter (25') and the normalization value are updated at an update rate lower than the sampling rate of an input signal upstream of the adaptive filter (22) and the first whitening filter (25').

27. A filter system comprising:

- an adaptive filter (22) adapted by an LMS algorithm;

- a first whitening filter (25') outputting a filtered signal;

- means (25') for determining filter coefficients of the first whitening filter and for determining a normalization value based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter, the normalization value associated with the energy of the filtered signal; and

- an update stage (27) for determining at least one updated filter coefficient of the adaptive filter in dependency on the filtered signal and the normalization value.

28. The filter system of claim 27, wherein the adaptive filter (22) receives the filtered signal.

29. The filter system of claim 27, further comprising a second whitening filter (26) having essentially the same filter coefficients as the first whitening filter (25'), wherein the updating stage (27) receives an error signal filtered by the second whitening filter (26).

30. The filter system of any of claims 27-29, wherein the adaptive filter (22) is an FIR filter.

31. The filter system of any of claims 27-30, wherein the means for determining filter coefficients of the first whitening filter and for determining a normalization value form a common unit (25'), the common unit transmitting the determined normalization value to the update stage.

32. A software program comprising instructions for performing the method according to any of claims 1-19 when the software is executed.

33. A method for filtering a signal by an adaptive filter (22), the adaptive filter (22) adapted by an LMS algorithm, wherein at least one updated filter coefficient of the adaptive filter (22) is determined according to the method of any of claims 1-19.

Description:
METHOD FOR DETERMINING UPDATED FILTER COEFFICIENTS OF AN ADAPTIVE FILTER ADAPTED BY AN LMS ALGORITHM WITH PRE-WHITENING

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to United States Provisional Patent Application No. 61/091,527, filed 25 August 2008, hereby incorporated by reference in its entirety.

FIELD OF THE INVENTION

The patent application relates to system identification, in particular to a method for determining updated filter coefficients of an adaptive filter adapted by an LMS algorithm.

SUMMARY OF THE INVENTION

System identification is based on algorithms that build mathematical models of the dynamic behavior of a system or process from measured data. A common approach to system identification is to measure the output behavior of an unknown system in response to system inputs for determining a mathematical relation between the output behavior and the system inputs. This can often be done without going into the details of what is actually happening inside the system.

An unknown linear system can be completely characterized by its impulse response. However, since most real-world systems have infinite impulse responses (i.e. an impulse response with infinite length), it is common practice to model such systems using a finite-length approximation of the infinite impulse response. Because such a finite approximation cannot exactly model a system having an infinite impulse response, the output behavior of the real-world system and of the model with finite approximation typically differ when the real-world system and the model are stimulated by the same input signal. Fig. 1 illustrates such modeling of an unknown system 1 by a system model 2. Both unknown system 1 and system model 2 are stimulated by the same input signal. For determining an error between the actual and modeled outputs, the output response of system model 2 is subtracted from the output response of unknown system 1. When the accuracy of approximation increases, the error between the actual and the modeled outputs decreases. In general, this method of approximation works well because the energy in the infinite impulse response of a real-world system decreases with time, and eventually becomes negligible.

Many methods exist for approximating the impulse response of an unknown system. One class of methods is direct measurement of the impulse response. While this method can be very effective, it becomes less attractive if the response of the unknown system changes over time. This is due to the fact that new measurements are required when the characteristics of the unknown system change. If the characteristics of the unknown system change frequently enough, it becomes impractical to perform new measurements for every change. An alternative to direct measurement is automated measurement. Automated measurement algorithms are well-suited for characterizing systems that change over time because they can detect when the characteristics of the unknown system change, and determine new characteristics in response to the change.

A popular automated algorithm for approximating the impulse response of an unknown system is the LMS (least mean squares) algorithm used in adaptive filters. The LMS algorithm was originally proposed by Bernard Widrow and Ted Hoff in 1960 and is e.g. described in the textbook "Adaptive Signal Processing", B. Widrow, S. D. Stearns, Prentice-Hall, Englewood Cliffs, NJ, 1985, and on the Wikipedia webpage "http://en.wikipedia.org/wiki/Least_mean_squares_filter". These descriptions of the LMS algorithm are hereby incorporated by reference.

DESCRIPTION OF THE INVENTION

Fig. 2 illustrates an adaptive LMS filter system model. Figurative elements in Fig. 1 and Fig. 2 denoted by the same reference signs are basically the same. In Fig. 2, an unknown system 1 and an adaptive filter 2 modeling unknown system 1 are stimulated by an input signal x(n). For determining an error e(n) between the actual and modeled outputs, the output signal of filter 2 is subtracted from the output signal of unknown system 1. The updated filter coefficients for adaptive filter 2 are generated by an update stage 3 in response to the error signal e(n), the input signal x(n) and previous filter coefficients as will be discussed in more detail below.

The LMS algorithm identifies the coefficients of finite impulse response (FIR) filter 2 that minimize the mean-square error between unknown system 1 and the approximation generated by filter 2.

The adaptive LMS filter uses a stochastic gradient descent algorithm to determine the approximation of unknown system 1. In case of the standard LMS algorithm, during each sampling interval of the input signal x(n), the adaptive FIR filter coefficients are updated using the following recursive equation that drives the filter coefficients in a direction to minimize the mean-square error $E\{|e(n)|^2\}$:

$h_i(n+1) = h_i(n) + \mu \cdot x(n-i) \cdot e(n)$, for i = 0, 1, ..., N-1.  (Eq. 1)

In equation 1, the term $h_i(n+1)$ (with i = 0, 1, ..., N-1) denotes the updated filter coefficients of filter 2 having N filter coefficients in total. The term $h_i(n)$ denotes the actual (i.e. before updating) filter coefficients. The term $\mu$ corresponds to a step size. In case of using a complex-valued input signal x(n) and having a complex-valued error signal, the term e(n) in Eq. 1 has to be amended to the complex conjugate term $e^*(n)$ as shown on the cited webpage "http://en.wikipedia.org/wiki/Least_mean_squares_filter".
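As an illustration of the update in Eq. 1, the following is a minimal sketch of one sampling interval of the standard LMS coefficient update for a real-valued signal. It is not taken from the application; the names h, x_buf, e and mu are illustrative.

```python
import numpy as np

def lms_update(h, x_buf, e, mu):
    """One standard LMS update (Eq. 1) for a real-valued adaptive FIR filter.

    h     : current coefficients h_i(n), i = 0..N-1
    x_buf : recent input samples [x(n), x(n-1), ..., x(n-N+1)]
    e     : current error sample e(n)
    mu    : step size
    Returns the updated coefficients h_i(n+1).
    """
    return h + mu * e * x_buf

# Example: one update of a 4-tap filter.
h = np.zeros(4)
x_buf = np.array([0.5, -0.2, 0.1, 0.0])   # x(n), x(n-1), x(n-2), x(n-3)
e = 0.3                                   # e(n): unknown-system output minus filter output
h = lms_update(h, x_buf, e, mu=0.01)
```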

Because the adaptive LMS filter uses a recursive algorithm, it does not instantaneously adapt to changes in the unknown system, but rather iteratively converges to an approximation of the new unknown system over a finite time interval. The amount of time required for the adaptive LMS filter to reach a minimum mean-square approximation for the unknown system is commonly referred to as convergence time. In order for the adaptive filter to rapidly respond to changes in unknown system 1, it is desirable for the convergence time to be as short as possible.

Some common applications which may employ adaptive LMS filtering are echo cancellation for mobile phones and public switched telephone networks, feed-forward and feedback active noise cancellation, and channel equalization. Fig. 3 illustrates the use of an adaptive LMS filter in an adaptive channel equalization application. Here, the filter coefficients of an adaptive equalizer 12 are adaptively updated by an LMS filter coefficient update stage 13 such that the adaptive equalizer 12 forms the inverse of an unknown channel 11. A major problem of the adaptive LMS filter algorithm is that both its ability to converge to the optimal approximation of the unknown system and its convergence time depend on the characteristics of the input signal x(n) to the unknown system. In order to converge to the optimal solution, the input signal x(n) to the unknown system should ideally have a flat spectrum, i.e. the input signal should correspond to white noise. Additionally, in order to minimize the convergence time, the power of the input signal should be as high as possible. As the power of the input signal decreases, the convergence time of LMS adaptive filters increases.

For some system identification applications, it is possible to have complete control over the input signal x(n). For these applications, an idealized input signal is chosen which allows the adaptive LMS filter to converge to the optimal approximation of the unknown system. Unfortunately, in most practical applications of system identification, complete control of the input signal is not possible. For these applications, it cannot be guaranteed that the adaptive LMS filter converges to the optimal approximation of the unknown system. Additionally, for unknown systems that change with time, the adaptive LMS filter may not be able to respond rapidly enough to changes in the unknown system.

Many variations of the standard adaptive LMS filter algorithm have been proposed that attempt to reduce the dependency of the approximation on the input signal x(n). A common variation is the adaptive normalized LMS (NLMS) algorithm. The NLMS algorithm is also described at the already cited Wikipedia web page "http://en.wikipedia.org/wiki/Least_mean_squares_filter". This description of the NLMS algorithm is hereby incorporated by reference.

The NLMS algorithm is identical to the standard LMS algorithm with the exception that it adds a normalization of the input signal power to the adaptive coefficient update stage. For the NLMS algorithm, the adaptive filter coefficients are updated according to the following equation:

$h_i(n+1) = h_i(n) + \mu \cdot \frac{x(n-i) \cdot e(n)}{\sum_{j=0}^{N-1} x^2(n-j)}$, for i = 0, 1, ..., N-1.  (Eq. 2)

In case of using a complex-valued signal x(n), Eq. 2 has to be amended as shown on the cited webpage "http://en.wikipedia.org/wiki/Least_mean_squares_filter" (please see the section dealing with the NLMS algorithm).

In Eq. 2, the input signal energy is determined over a period from t = n - N + 1 to t = n, i.e. the energy of the actual sample x(n) and the previous N-1 samples is determined.

The NLMS algorithm provides the advantage that the addition of normalization to the adaptive coefficient update stage decreases the dependency of the convergence time on the input signal power.
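The following is a minimal sketch of the normalized update of Eq. 2 for a real-valued signal. The small constant eps is an implementation safeguard against division by zero and is an assumption, not part of Eq. 2.

```python
import numpy as np

def nlms_update(h, x_buf, e, mu, eps=1e-12):
    """One NLMS update (Eq. 2): the LMS step is divided by the energy of
    the N most recent input samples held in x_buf."""
    energy = np.dot(x_buf, x_buf)          # sum_{j=0}^{N-1} x^2(n-j)
    return h + (mu / (energy + eps)) * e * x_buf
```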

Unfortunately, the NLMS algorithm has two main drawbacks:

1. The first drawback, as with the LMS algorithm, is that it is not guaranteed that the NLMS algorithm converges to the optimal approximation of the unknown system unless the input signal is white noise. This drawback reduces the usefulness of the NLMS algorithm in applications where the input signal to the unknown system is not completely controlled.

2. The second drawback is that the addition of normalization adds significant computational complexity to the algorithm. When considering an adaptive filter of filter length N (i.e. having N filter coefficients), for determining the normalization term, the sum of the powers of the current input sample and of the previous N-1 input samples has to be computed, and then the sum has to be inverted. A brute force calculation of such a normalization scaling factor would require N multiply-accumulate (MAC) operations (a MAC operation computes the product of two numbers and adds the product to an accumulator: a + b·c → a) and 1 inversion operation (division) per sample:

$\sigma(n) = x^2(n) + x^2(n-1) + \ldots + x^2(n-N+1)$  (Eq. 3)

In practice, however, the number of required operations for computing an updated scaling factor can be reduced to 2 MACs and 1 division per sample:

$\sigma(n) = \sigma(n-1) + x^2(n) - x^2(n-N)$  (Eq. 4)

By the 1st MAC operation, the energy of the actual sample x(n) is added to the previous sum. By the 2nd MAC operation, the energy of the last sample x(n-N) in the previous sum is subtracted from this result. It is interesting to note that the complexity of this optimized calculation of the normalizing scaling factor is independent of the adaptive filter length.

For improving the convergence behavior of the standard LMS and NLMS algorithms in such a way that the algorithms converge to the optimal approximation of the unknown system, one may use pre-whitening of the input signal. Such pre-whitening of the input signal may be implemented by an LPC (linear prediction coding) whitening filter. Before discussing the next variation of the LMS filter algorithm in more detail, a brief background on linear prediction is given below.
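A small sketch of the recursive update in Eq. 4, which maintains the running power sum of the N most recent samples at a fixed cost of two multiply-accumulates per sample regardless of the filter length; the variable names are illustrative.

```python
def update_window_energy(sigma_prev, x_new, x_oldest):
    """Recursive update of the N-sample power sum (Eq. 4).

    sigma_prev : sigma(n-1), the power sum over the previous window
    x_new      : the newest sample x(n) entering the window
    x_oldest   : the sample x(n-N) leaving the window
    """
    sigma = sigma_prev + x_new * x_new       # 1st MAC: add energy of x(n)
    sigma = sigma - x_oldest * x_oldest      # 2nd MAC: remove energy of x(n-N)
    return sigma
```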

A linear predictor attempts to estimate the next sample of a sequence x(n) from a weighted sum of previous input and output samples according to the following equation:

$\hat{x}(n) = \sum_{k=1}^{M} \alpha_k \cdot x(n-k) + \sum_{l=1}^{N} \beta_l \cdot \hat{x}(n-l)$  (Eq. 5)

The weighting coefficients $\alpha_k$ and $\beta_l$ of the predictor are chosen such that they minimize a measure of the error, commonly the mean-square error, between the actual sample x(n) and the predicted sample $\hat{x}(n)$. Determining the optimal weighting coefficients $\alpha_k$ and $\beta_l$ requires solving a non-linear optimization problem and is therefore typically very difficult. However, if all coefficients $\beta_l$ are set to 0, such that the prediction is performed only based on past input samples, the coefficients $\alpha_k$ that minimize the mean-square error can be determined by solving a linear least-squares problem, for which straightforward methods exist as discussed in the following section. Therefore, FIR linear predictors according to Eq. 5 with all $\beta_l$ set to 0 are more common than IIR linear predictors according to Eq. 5 with $\beta_l$ set to arbitrary values.

Many methods are known for computing the weighting coefficients $\alpha_k$ for an FIR linear predictor and a discussion of some methods can be found in the textbook "Digital Processing of Speech Signals", L. R. Rabiner and R.W. Schafer, Prentice-Hall, Englewood Cliffs, NJ, 1978. It is possible to analyze the signal and compute a fixed set of weighting coefficients $\alpha_k$. However, in order for the predictor to adjust to changing input signal conditions, it is common to recalculate the weighting coefficients $\alpha_k$ periodically. Fig. 4 illustrates an LPC whitening filter based on an FIR linear predictor 14 of Mth order. The LPC whitening filter computes the difference between the actual input sample x(n) and the predicted sample $\hat{x}(n)$ as generated by linear predictor 14:

$r_x(n) = x(n) - \hat{x}(n) = x(n) - \sum_{k=1}^{M} \alpha_k \cdot x(n-k)$  (Eq. 6)

The output signal $r_x(n)$ of the LPC whitening filter is typically referred to as the residual. For a well designed predictor, in which the prediction coefficients are well matched to the input signal, the sequence of residual samples $r_x(n)$ tends to have a much flatter (i.e. whiter) spectrum than the input signal x(n). Unfortunately, the power of the residual signal $r_x(n)$ tends to be much lower than the power of the input signal.
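A minimal sketch of the FIR whitening filter of Eq. 6 for a fixed set of predictor coefficients; samples before the start of the sequence are treated as zero, and the coefficient values in the example are illustrative only.

```python
import numpy as np

def lpc_whiten(x, alpha):
    """Compute the residual r_x(n) = x(n) - sum_k alpha_k * x(n-k)  (Eq. 6).

    x     : input samples x(0), x(1), ...
    alpha : predictor coefficients alpha_1 ... alpha_M
    """
    x = np.asarray(x, dtype=float)
    r = x.copy()
    for n in range(len(x)):
        for k in range(1, len(alpha) + 1):
            if n - k >= 0:
                r[n] -= alpha[k - 1] * x[n - k]
    return r

# Example: a second-order predictor applied to a short sequence.
residual = lpc_whiten([1.0, 0.9, 0.7, 0.4], alpha=[0.8, -0.1])
```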

As discussed above, the drawback of both the standard LMS and the NLMS algorithms is that it is not guaranteed for these algorithms that the algorithms converge to the optimal approximation of the unknown system for non-white input signals. For applications in which the input signal to the unknown system is not completely controlled, the convergence of both algorithms can be improved by pre-whitening the input signal x(n) using an LPC whitening filter. Fig. 5 illustrates an adaptive LMS filter with pre-whitening of the input signal x(n) by means of an LPC whitening filter 25. Besides the actual LPC whitening filter operation as shown in Fig. 4, signal processing block 25 in Fig. 5 is also used for LPC analysis, i.e. for computing the filter coefficients of the LPC whitening filter. In Fig. 5, the residual $r_x(n)$ as generated by LPC whitening filter 25 feeds unknown system 21, adaptive filter 22 and filter coefficient update stage 23 used for determining updated filter coefficients of adaptive filter 22. Thus, the adaptation of the adaptive LMS filter is based on the residual $r_x(n)$ instead of the signal x(n) in Fig. 3. The LMS algorithm in Fig. 5 may be the standard LMS algorithm or the NLMS algorithm. However, because LPC whitening filter 25 tends to lower the power of the signal which is input to the LMS algorithm, the combination of the LPC whitening filter and the standard LMS filter algorithm can suffer from slow convergence. Therefore, it is preferable to use the combination of the LPC whitening filter and the NLMS algorithm for optimal convergence performance. In this case, the filter coefficients are updated according to the following equation:

$h_i(n+1) = h_i(n) + \mu \cdot \frac{r_x(n-i) \cdot r_e(n)}{\sum_{j=0}^{N-1} r_x^2(n-j)}$, for i = 0, 1, ..., N-1.  (Eq. 7)

In case of using a complex-valued signal x(n) and thus having a complex-valued residual signal $r_x(n)$, Eq. 7 has to be modified as discussed in connection with Eq. 2. Although pre-whitening of the input signal can improve the convergence performance of the adaptive LMS filter, it has the drawback that it alters the input signal to the unknown system. In some applications, altering the input to the unknown system is undesirable. In particular for these applications, one may move the LPC whitening filter outside of the main signal path so that it does not alter the input to the unknown system and to the adaptive filter. Such an approach is described in the document "LMS Coupled Adaptive Prediction and System Identification: A Statistical Model and Transient Mean Analysis", M. Mboup et al., IEEE Transactions on Signal Processing, Vol. 42, No. 10, October 1994 and illustrated in Fig. 6. Here, LPC whitening filter 25 is moved into the update path of the LMS filter, i.e. in the path of the update stage 27 parallel to the adaptive filter 22. In Fig. 6 both adaptive filter 22 and unknown system 21 receive the non-whitened signal x(n) from upstream of LPC whitening filter 25, whereas in Fig. 5 both adaptive filter 22 and unknown system 21 receive the filtered signal $r_x(n)$. In order to make such an alternative system work, a further LPC whitening filter 26 has to be applied to the error signal e(n) so that the residual error signal $r_e(n)$ after whitening is used as an error signal for the LMS update algorithm. LPC whitening filter 26 is typically identical to LPC whitening filter 25, i.e. the filter coefficients of both filters are identical. Therefore, determination of the filter coefficients of LPC whitening filters 25 and 26 is carried out only in signal processing block 25 (see "LPC analysis" in block 25 of Fig. 6, which is missing in block 26) and the determined filter coefficients of LPC whitening filter 25 are transmitted to LPC whitening filter 26 (transmission of LPC filter coefficients not shown in Fig. 6).
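The arrangement of Fig. 6 can be sketched as two whitening filters driven by the same coefficient set, one on the input signal in the update path and one on the error signal. This is an illustrative sketch only; the class name and the coefficient values are assumptions, not taken from the application.

```python
import numpy as np

class LPCWhitener:
    """Minimal FIR whitening filter; two instances with identical
    coefficients play the roles of blocks 25 and 26 in Fig. 6."""

    def __init__(self, alpha):
        self.alpha = np.asarray(alpha, dtype=float)   # predictor coefficients
        self.hist = np.zeros(len(alpha))              # past samples x(n-1)..x(n-M)

    def step(self, sample):
        residual = sample - np.dot(self.alpha, self.hist)
        self.hist = np.roll(self.hist, 1)
        self.hist[0] = sample
        return residual

alpha = [0.9, -0.2]              # shared coefficients from the LPC analysis
whiten_x = LPCWhitener(alpha)    # block 25: whitens x(n) for the update path
whiten_e = LPCWhitener(alpha)    # block 26: whitens the error e(n)
r_x = whiten_x.step(0.5)         # residual of the input signal
r_e = whiten_e.step(0.1)         # residual of the error signal
```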

As already discussed in connection with Fig. 5, the LMS algorithm in Fig. 6 may be the standard LMS algorithm or the NLMS algorithm. However, it is preferable to use the combination of the LPC whitening filter and the NLMS algorithm for optimal convergence performance.

While the combination of pre-whitening and the NLMS algorithm as shown in Figs. 5 and 6 is effective in improving the convergence performance of the adaptive filter system, a high computational complexity is associated with both the computation of the whitening filter coefficients and the computation of the normalizing scaling factor for the adaptive NLMS filter update.

Therefore, it is an object to provide a method for determining updated filter coefficients of an adaptive filter adapted by an LMS algorithm, with the method preferably allowing a reduction of the computational complexity for computing the normalization used in the LMS filter update. At least the inventive method should provide an alternative for determining updated filter coefficients, wherein a normalization used in the LMS filter update is computed in an alternative way. Further objects are providing a corresponding apparatus for determining updated filter coefficients, providing a filter system comprising such apparatus, providing software for carrying out the method and providing a corresponding filter method. These objects are achieved by the method, the apparatus, the filter system, the software and the filter method according to the independent claims.

A first aspect of the application relates to a method for determining at least one updated filter coefficient of an adaptive filter adapted by an LMS algorithm. The method may be used to determine filter coefficients of an adaptive filter adapted by an LMS algorithm in general.

According to the method, filter coefficients of a first whitening filter are determined, in particular filter coefficients of an LPC whitening filter. The first whitening filter generates a filtered signal. A normalization value is determined based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter. The normalization value is associated with the energy of the filtered signal, e.g. the normalization value may be an energy estimate or the inverse of an energy estimate. At least one updated filter coefficient of the adaptive filter is determined in dependency on the filtered signal and the normalization value. Preferably, updated filter coefficients for all filter coefficients of the adaptive filter are determined. For normalizing, the normalization value may be used as a multiplication factor in a multiplication operation (e.g. in case of using the inverse of an energy estimate as the normalization value). Alternatively, the normalization value may be used as a divisor in a division operation (e.g. in case of using an energy estimate as the normalizing value without inversion).

As discussed above, in case the standard NLMS algorithm is used in combination with pre-whitening, a normalization scaling factor is computed by directly determining the (short-time) signal power of the residual signal.

In contrast, the method makes use of one or more values as already computed by an algorithm for determining the filter coefficients of the whitening filter. Such algorithms for determining filter coefficients of a whitening filter often provide energy estimates or at least values which allow simple computing of the energy estimate for the output signal of the whitening filter as a side effect. One example of such algorithm is the autocorrelation method in combination with Durbin's algorithm for determining filter coefficients of an LPC whitening filter. The method according to the first aspect of the application determines a normalization value based on such one or more already computed values.

Preferred embodiments of the method provide reduced complexity for computing a normalizing scaling factor for an adaptive LMS filter coefficient update as will be explained later on. Typically, the first whitening filter is an LPC whitening filter outputting a residual signal as the filtered signal. An LPC whitening filter typically contains filter coefficients derived from an LPC analysis and outputs a prediction error signal, i.e. the residual signal. Preferably, the used LPC whitening filter is based on an FIR filter structure. In the further examples and embodiments an LPC whitening filter based on an FIR filter structure is preferred; however, other structures for LPC whitening filters may be used.

There are several possible embodiments of LPC whitening filters that do not employ conventional FIR filter structures, such as LPC whitening filters that use either IIR filter structures, as discussed previously, or lattice filter structures. For example, the lattice LPC analysis method leads directly to an LPC whitening filter based on a lattice filter structure. This is described in section 8.3.3 and Fig. 8.3 of the already cited textbook "Digital Processing of Speech Signals", L. R. Rabiner and R.W. Schafer, Prentice-Hall, Englewood Cliffs, NJ, 1978. This description is hereby incorporated by reference. Lattice filters can also be used in conjunction with the autocorrelation and covariance methods of LPC analysis. Lattice filters have the convenient property that one can increase the prediction order by adding additional lattice sections onto the lattice filter structure without modifying filter coefficients in the preceding stages. In comparison to conventional FIR filters, the lattice structure is a different way of representing an FIR filter. A conventional FIR filter and a filter based on the lattice structure can be designed to return the same output sample for a given input sample. It should be noted that the method relates also to whitening filters other than an LPC whitening filter. The method may use a Wiener filter as a whitening filter. Wiener filters include LPC as a special case when the form of the Wiener filter is an FIR structure.

According to a preferred embodiment, a normalizing value is derived from an energy estimate of the filtered signal obtained during the determination of the filter coefficients of the whitening filter. Thus, during calculation of LPC whitening filter coefficients, an estimate of the energy of the residual signal may be made, which is then inverted and used as a normalizing factor in an adaptive LMS coefficient update. Such energy estimate may be very similar (or even identical) to the denominator in Eq. 7.

If the estimate of the energy of the residual signal is similar to the actual energy of the residual signal (see denominator in Eq. 7), and the cost of computing the estimate is lower than the cost of computing the actual energy of the residual signal, a complexity reduction is achieved without a significant reduction in performance.

Accordingly, for determining the normalization value, the method preferably determines an energy estimate of the filtered signal based on such one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter. The energy estimate may directly correspond to such computed value or may be determined by one or more computing operations. Then, the energy estimate may be inverted. The inverted energy estimate may be then used as a multiplication factor in a multiplication operation. Instead, the energy estimate may be used as a divisor in a division operation. Such energy estimate may differ from the actual energy in particular due to the fact that the energy estimate is updated at an update rate lower than the sampling rate of an input signal.

Preferably, the filter coefficient update equation is as follows:

$h_i(n+1) = h_i(n) + \frac{\mu}{E_r} \cdot r_x(n-i) \cdot r_e(n)$  (Eq. 8)

Here, $i \in [0, 1, \ldots, N-1]$ corresponds to the number of the updated filter coefficient and N indicates the total number of filter coefficients of the adaptive filter. In case all N filter coefficients of the adaptive filter are updated, Eq. 8 is computed for i = 0, 1, ..., N-1. The term $h_i(n+1)$ denotes the updated filter coefficient; the term $h_i(n)$ denotes the corresponding actual filter coefficient, $r_x$ denotes the filtered signal, $r_e$ denotes an error signal and $E_r$ denotes the energy estimate.

Eq. 8 is identical to Eq. 7 of the standard NLMS filter coefficient update in combination with LPC pre-whitening, with the exception that the normalizing scaling factor has been changed from the power sum $\sum_{j=0}^{N-1} r_x^2(n-j)$ of the residual signal to an energy estimate $E_r$ of the residual signal.

As already discussed in connection with Eq. 2 and Eq. 7, the signal x(n) and thus the residual signal $r_x(n)$ may also be a complex-valued signal instead of a real-valued signal. In this case the complex conjugate $r_e^*(n)$ of the error signal has to be used in Eq. 8 instead of $r_e(n)$.

For an LPC whitening filter, many methods for calculating the LPC whitening filter coefficients are known, e.g. the autocorrelation method, the covariance method and the lattice method. Some of these methods include computationally efficient methods for determining an estimate of the residual signal energy.

Preferably, for determining the filter coefficients of the LPC whitening filter the autocorrelation method is used. The autocorrelation method for determining the filter coefficients of an FIR linear predictor in an LPC whitening filter is e.g. described in section 8.1.1 of the already cited textbook "Digital Processing of Speech Signals", L. R. Rabiner and R. W. Schafer, Prentice-Hall, Englewood Cliffs, NJ, 1978. The description of the basic principles of linear predictive analysis in section 8.1, pages 398-401 of this textbook and the description of the autocorrelation method in section 8.1.1, pages 401-403 of this textbook are hereby incorporated by reference. The autocorrelation method leads to a matrix equation for determining the filter coefficients of the Mth-order FIR linear predictor in the LPC whitening filter.

Preferably, for solving such matrix equation, Durbin's algorithm is used. Durbin's algorithm for solving the matrix equation is e.g. described in section 8.3.2, pages 411-413 of the already cited textbook "Digital Processing of Speech Signals". This description of Durbin's recursive solution for the autocorrelation equations is hereby incorporated by reference. The autocorrelation method and Durbin's algorithm for the autocorrelation equations is also described on pages 68-70 of the textbook "Digital Speech - Coding for Low Bit Rate Communication Systems", A. M. Kondoz, Second Edition, John Wiley & Sons, 2004. This description is also hereby incorporated by reference.

Durbin's algorithm for solving the autocorrelation equations uses the following recursive equations to determine the filter coefficients of the Mth-order FIR linear predictor in the LPC whitening filter:

$R_i = \sum_{m=0}^{L-1-i} x(m) \cdot x(m+i)$  (Eq. 9)

$E_0 = R_0$  (Eq. 10)

$k_i = \frac{R_i - \sum_{j=1}^{i-1} \alpha_j^{(i-1)} \cdot R_{i-j}}{E_{i-1}}$  (Eq. 11)

$\alpha_i^{(i)} = k_i$  (Eq. 12)

$\alpha_j^{(i)} = \alpha_j^{(i-1)} - k_i \cdot \alpha_{i-j}^{(i-1)}$, for $1 \le j \le i-1$  (Eq. 13)

$E_i = (1 - k_i^2) \cdot E_{i-1}$  (Eq. 14)

In Eq. 9 for determining the autocorrelation $R_i$, the term x(m) denotes selected samples of the input signal in a window of length L, the samples starting with x(m) and extending through x(m+L-1). Outside of the window, the input signal is considered as zero. Preferably, each time a new set of LPC coefficients is calculated, the input signal sample window shifts according to the LPC coefficient update rate (i.e. the window shifts by K input samples, K being the inverse of the update rate, see Eq. 17 below). In case of complex-valued input signals, the above equations 9, 13 and 14 need to be modified. In Eq. 9, the term $x(m+i)$ is replaced by the complex conjugate $x^*(m+i)$. In Eq. 13, the term $\alpha_{i-j}^{(i-1)}$ is replaced by the complex conjugate $\alpha_{i-j}^{*(i-1)}$. In Eq. 14, the term $k_i^2$ is replaced by the square of the modulus $|k_i|^2$, which is equivalent to $k_i \cdot k_i^*$.

Eq. 10 to Eq. 14 are solved for iterations i = 1, 2, ..., M. The filter coefficients $\alpha_j$ of the Mth-order predictor are determined according to the following equation:

$\alpha_j = \alpha_j^{(M)}$, for $1 \le j \le M$  (Eq. 15)

The term $E_i$ in iteration i corresponds to the squared prediction error for a linear predictor of order i. Preferably, the method according to the first aspect of the application determines the energy estimate $E_r$ as a squared prediction error $E_i$ of iteration i (i.e. the squared prediction error for a linear predictor of order i), with $i \in [0, 1, \ldots, M]$. More preferably, the determined energy estimate $E_r$ corresponds to the squared prediction error $E_M$ of the last iteration of Durbin's algorithm. The squared prediction error $E_M$ used as an energy estimate is in this case determined based on the squared prediction error $E_{M-1}$ of the penultimate iteration of Durbin's algorithm. In line with Eq. 14 and i = M, the squared prediction error $E_M$ may be determined according to the following equation:

$E_M = (1 - k_M^2) \cdot E_{M-1}$  (Eq. 16)

When using $E_M$ as an energy estimate, the normalization value or normalization scaling factor preferably corresponds to $1/E_M$.

Preferably, the energy estimate may be computed by 2 additional MAC operations. In case of using $E_M$ as an energy estimate, $E_M$ is typically computed in line with Eq. 16 by a first MAC operation, where $(1 - k_M^2)$ is determined, and by a second MAC operation, where the result of the first MAC operation is multiplied by $E_{M-1}$. It should be noted that the first MAC operation includes both a multiplication and an accumulation, whereas the second MAC operation includes only a multiplication. However, the processing cost for each is typically the same and both are therefore referred to as MACs. Please further note that for determining the filter coefficients $\alpha_j$, $E_M$ does not have to be computed, i.e. $E_i$ in Eq. 14 is typically not computed in the last iteration where i = M. However, in case of using $E_M$ as an energy estimate for normalization, said 2 additional MAC operations for determining $E_M$ are performed. Thus, once the LPC whitening filter coefficients have been determined using the recursive algorithm above or a modified version thereof, a simple additional calculation (i.e. 2 MACs) yields an estimate of the residual energy, which may be then inverted (1 division) to determine a normalizing factor. Please note that these computing costs (2 MACs and 1 division) are equivalent to the previously stated costs for determining the normalizing scaling factor for the NLMS algorithm.

Instead of inverting the energy estimate for determining a normalizing factor, one may also use the energy estimate as a divisor for dividing one or more factors of the product in Eq. 8. It should be noted that one may also use $E_{M-1}$ (or any other $E_i$) instead of $E_M$ as an estimate for the residual energy. In this case, the 2 additional MAC operations are typically not necessary, since $E_{M-1}$ was already computed for determining the filter coefficients $\alpha_j$ of the Mth-order predictor.

Instead of using the autocorrelation method for determining the filter coefficients of the whitening filter, one may use a different method for determining the filter coefficients and may determine an energy estimate based thereon. E.g., in case of using the covariance method in combination with the Cholesky decomposition solution for LPC analysis, one may determine an energy estimate of the filtered signal based on Eq. 8.65 on page 411 of the already cited textbook "Digital Processing of Speech Signals" (see mean-squared prediction error $E_n$ which can be used as an energy estimate). The description of the covariance method in section 8.1.2, pages 403-404 and the description of the Cholesky decomposition solution in section 8.3.1, pages 407-411 of the textbook "Digital Processing of Speech Signals" are hereby incorporated by reference.

According to a preferred embodiment of the method, the filter coefficients of the first whitening filter are adaptively updated. An updated normalization value may be determined each time the filter coefficients of the first whitening filter are updated. The filter coefficients of the first whitening filter (in particular first LPC filter) may be updated less frequently than once per sample. Thus, the filter coefficients of the first whitening filter and the normalization value may be updated at an update rate lower than the sampling rate of the input signal upstream of the adaptive filter and the first whitening filter. E.g. the normalization value may be updated at an update rate of 1/64 updates per time unit, whereas the sampling rate corresponds to 1 sample per time unit.

Because the normalizing value is preferably only updated when the whitening filter coefficients are updated, the cost to compute the normalizing value is no longer constant, but depends on how often the whitening filter coefficients are updated.

However, in case of updating the filter coefficients less frequently than once per sample (e.g. once per 128 samples), the accuracy of the squared prediction error $E_M$ (or any other squared prediction error $E_i$) for estimating the energy of the filtered signal may decrease, since the energy may change over the update period.

For a system in which the filter coefficients of the first whitening filter are updated once every K samples, the computational cost C per sample of computing the preferred normalizing value or factor $1/E_M$ is defined by the following relationship:

$C(K) = \frac{2}{K}\ \mathrm{MACs} + \frac{1}{K}\ \mathrm{Divisions}$  (Eq. 17)

It is clear from this general relationship in Eq. 17 that as K increases, the cost per sample to compute the normalizing factor decreases, reducing the complexity with respect to standard NLMS (where the cost per sample for computing the normalizing scaling factor is 2 MACs + 1 Division).

Preferably, the update rate (i.e. 1/K in Eq. 17 above) for updating the filter coefficients of the first whitening filter and the normalization value is selected such that the statistical characteristics of the input signal are essentially stationary over the update period. In this case, the energy estimate and therefore the normalizing factor will be very similar to the normalizing factor in the NLMS algorithm. This allows a complexity reduction without significant impact to the performance. If, however, the statistics of the input signal are highly non-stationary over the update period, a complexity reduction will still be achieved, but at the expense of convergence performance.

However, it is not necessary that the filter coefficients of the first whitening filter are updated over and over: the coefficients of the whitening filter and the normalizing value may be computed once and then remain fixed. Such a case corresponds to an update period of K = ∞. The cost (per sample) of computing the normalizing scaling factor is 0, and the overall cost of performing the algorithm equals the cost of the LPC whitening filter and the LMS algorithm. Such fixed whitening filter coefficients and fixed normalizing value work very well in some applications, specifically in applications where the characteristics of the input signal are known a priori and are time-invariant. In such applications, the fixed whitening filter coefficients and normalizing value are effectively time-invariant and so updating them periodically will not significantly improve performance. In case of K = 1, the normalizing value is computed once per sample. In this case, the cost C(K = 1) of computing the normalizing value as given by Eq. 17 is equivalent to the cost for computing the normalization in the traditional NLMS algorithm. As this represents typically the maximum update rate, it shows that for any value of K, the cost of computing the normalizing value according to the preferred embodiment of the method is always less than or equal to the cost of computing the normalizing scaling factor for the NLMS algorithm.
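To illustrate the cost structure of Eq. 17, the following sketch recomputes the LPC coefficients and the normalizing factor $1/E_M$ only once per block of K samples and reuses the cached factor for every per-sample coefficient update in between. It relies on the autocorrelation/durbin helpers sketched above; all names, the zero-history start-up and the guards against a zero energy estimate are illustrative assumptions, not taken from the application.

```python
import numpy as np

def run_block_adaptive(x, d, N, M, K, mu):
    """Adaptive filtering with pre-whitening in which the LPC coefficients
    and the factor 1/E_M are recomputed only once every K samples."""
    h = np.zeros(N)                  # adaptive filter coefficients
    alpha = np.zeros(M)              # LPC whitening filter coefficients
    inv_EM = 0.0                     # cached normalizing scaling factor 1/E_M
    rx_buf = np.zeros(N)             # recent whitened samples r_x(n)..r_x(n-N+1)
    for n in range(len(x)):
        if n % K == 0 and n >= K:
            window = np.asarray(x[n - K:n], dtype=float)
            if np.dot(window, window) > 1e-12:                    # skip silent blocks
                alpha, E = durbin(autocorrelation(window, M), M)  # Eq. 9-15
                inv_EM = 1.0 / max(E[M], 1e-12)                   # 1 division per K samples
        # whiten the current input sample (Eq. 6); history before n = 0 is zero
        past = np.asarray(x[max(0, n - M):n], dtype=float)[::-1]
        r_x = x[n] - np.dot(alpha[:len(past)], past)
        rx_buf = np.roll(rx_buf, 1)
        rx_buf[0] = r_x
        y = np.dot(h, rx_buf)        # adaptive filter output on the residual (as in Fig. 7)
        r_e = d[n] - y               # error against the desired signal d(n)
        h = h + mu * inv_EM * r_e * rx_buf    # Eq. 8 / Eq. 18 with the cached factor
    return h
```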

A second aspect of the application relates to an apparatus for determining at least one updated filter coefficient of an adaptive filter adapted by an LMS algorithm. The means in the apparatus correspond to the method steps of the method according to the first aspect of the application. Thus, the apparatus comprises means for determining filter coefficients of a first whitening filter (e.g. of an LPC whitening filter) outputting a filtered signal, and for determining a normalization value based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter. The apparatus further comprises an update stage for determining at least one updated filter coefficient of the adaptive filter in dependency on the filtered signal and the normalization value. Preferably, the means for determining filter coefficients of the first whitening filter and for determining a normalization value form a common signal processing unit. The common unit transmits the determined normalization value to the update stage.

The above remarks related to the first aspect of the application are also applicable to the second aspect of the application.

A third aspect of the application relates to a filter system comprising an adaptive filter adapted by an LMS algorithm and a first whitening filter (e.g. an LPC whitening filter). Further, the filter system contains means for determining filter coefficients of the first whitening filter outputting a filtered signal and for determining a normalization value based on one or more computed values obtained in the course of determining the filter coefficients of the first whitening filter. In addition, for performing the LMS algorithm, the filter system comprises an update stage for determining at least one updated filter coefficient of the adaptive filter in dependency on the filtered signal and the normalization value.

The complete filter system or parts thereof may be realized by a DSP (digital signal processor).

The above remarks related to the first and second aspects of the application are also applicable to the third aspect of the application. A fourth aspect of the application relates to a software program comprising instructions for performing the method according to the first aspect of the application, when the program is executed, e.g. on a computer or on a DSP.

The above remarks related to the first, second and third aspects of the application are also applicable to the fourth aspect of the application. A fifth aspect of the application relates to a method for filtering a signal by an adaptive filter, with the adaptive filter adapted by an LMS algorithm. At least one updated filter coefficient of the adaptive filter is determined according to the method as discussed above. The above remarks related to the other aspects of the application are also applicable to the fifth aspect of the application.

DESCRIPTION OF THE DRAWINGS

The invention is explained below in an exemplary manner with reference to the accompanying drawings, wherein

Fig. 1 illustrates modeling of an unknown system by a system model;
Fig. 2 illustrates an adaptive LMS filter system model;
Fig. 3 illustrates the use of an adaptive LMS filter for adaptive channel equalization;
Fig. 4 illustrates an LPC whitening filter based on a linear predictor of Mth order;
Fig. 5 illustrates an adaptive LMS filter with pre-whitening of the input signal by means of an LPC whitening filter;
Fig. 6 illustrates an adaptive LMS filter with pre-whitening outside of the main signal path;
Fig. 7 illustrates a first embodiment, with a normalizing factor being transmitted from an LPC whitening filter to an adaptive coefficient update stage;
Fig. 8 illustrates a second embodiment, with a normalizing factor being transmitted from an LPC whitening filter to an adaptive coefficient update stage and pre-whitening being outside of the main signal path;
Fig. 9 illustrates an adaptive equalizer as a first application;
Fig. 10 illustrates an adaptive echo canceller as a second application;
Fig. 11 illustrates a feedback adaptive noise control system as a third application;
Fig. 12 shows a first simplified version of the third application in Fig. 11; and
Fig. 13 shows a second simplified version of the third application in Fig. 11.

Figs. 1-6 were already discussed above.

Fig. 7 illustrates a first embodiment of the invention. The first embodiment is similar to the configuration in Fig. 5. Figurative elements in Fig. 5 and Fig. 7 denoted by the same reference signs are basically the same. Moreover, the remarks to Fig. 5 are basically also applicable to Fig. 7.

In Fig. 7, an input signal x(n) is fed to a unit 25' comprising an LPC whitening filter and LPC analysis means (i.e. means for determining filter coefficients of the LPC whitening filter). The LPC whitening filter is preferably an LPC whitening filter based on an FIR structure. The residual signal $r_x(n)$ as generated by LPC whitening filter 25' feeds an unknown system 21, an adaptive filter 22 used to model the unknown system 21 and an LMS filter coefficient update stage 23' for determining updated filter coefficients of adaptive filter 22. An error signal $r_e(n)$ is generated by subtracting the output signal of adaptive filter 22 from the output signal of unknown system 21. Moreover, unit 25' for determining the filter coefficients also determines an energy estimate $E_r$ of the residual signal $r_x(n)$ based on computed values obtained in the course of determining the filter coefficients of the LPC filter. The energy estimate is then inverted and transmitted as a normalizing scaling factor $1/E_r$ from unit 25' to the adaptive coefficient update stage 23'.

IMPLEMENTATION

Preferably, as already discussed above, the squared prediction error $E_M$ of the last iteration of Durbin's algorithm for solving the autocorrelation equations is determined and used as an energy estimate $E_r$. In case of using $E_M$ as an energy estimate $E_r$, $E_M$ is preferably computed by a first MAC operation, where $(1 - k_M^2)$ is determined, and by a second MAC operation, where the result of the first MAC operation is multiplied by $E_{M-1}$ (see Eq. 16).

Thus, once the LPC whitening filter coefficients have been determined using Durbin's algorithm, a simple additional calculation (i.e. 2 MACs) yields an estimate $E_M$ of the residual energy, which is then inverted (1 division) to determine a normalizing factor $1/E_M$.

In case of using $E_M$ as an energy estimate $E_r$, the filter coefficients of the adaptive filter 22 are updated according to the following equation:

$h_i(n+1) = h_i(n) + \frac{1}{E_M} \cdot \mu \cdot r_x(n-i) \cdot r_e(n)$, for i = 0, 1, ..., N-1.  (Eq. 18)

Here, the term $1/E_M$ corresponds to the normalizing scaling factor which is transmitted from unit 25' to update stage 23'. Alternatively, normalization may be performed by using $E_M$ as a divisor in a division operation (instead of a multiplication factor in a multiplication operation). Instead of transmitting $1/E_M$, in an alternative embodiment $E_M$ or in another alternative embodiment $E_{M-1}$ and $k_M$ are transmitted from unit 25' to update stage 23'. As already discussed in connection with Eq. 8, the signal x(n) and thus the residual signal $r_x(n)$ may also be a complex-valued signal instead of a real-valued signal. In this case the complex conjugate $r_e^*(n)$ instead of $r_e(n)$ has to be used in Eq. 18.
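A short sketch of the per-sample coefficient update of Eq. 18 in the first embodiment: the update stage applies the cached factor $1/E_M$ received from unit 25' to every tap, and the complex conjugate of the whitened error is used so that the same code covers real- and complex-valued signals. Names are illustrative, not taken from the application.

```python
import numpy as np

def update_coefficients(h, rx_buf, r_e, mu, inv_EM):
    """Eq. 18: h_i(n+1) = h_i(n) + (1/E_M) * mu * r_x(n-i) * r_e(n).

    h      : adaptive filter coefficients h_0..h_{N-1}
    rx_buf : whitened input samples [r_x(n), r_x(n-1), ..., r_x(n-N+1)]
    r_e    : current (whitened) error sample r_e(n)
    inv_EM : normalizing scaling factor 1/E_M received from unit 25'
    For complex-valued signals the conjugate of r_e(n) is used, as noted above;
    for real-valued signals the conjugate has no effect.
    """
    return h + inv_EM * mu * np.conj(r_e) * rx_buf
```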

Preferably, the filter coefficients of LPC whitening filter 25' and the normalization value are updated every K samples of the input signal x(n), with K > 1, e.g. K = 8, 16, 32 or 64. This reduces the computing cost for determining the normalizing scaling factor in comparison to the computing costs for determining the normalizing scaling factor in the standard NLMS algorithm as already discussed above. Alternatively, K = 1 could be used. Although pre-whitening of the input signal x(n) improves the convergence performance of the adaptive LMS filter, it has the drawback that it alters the input signal to unknown system 21. In some applications, altering the input to the unknown system 21 is undesirable. In particular for these applications, one may move LPC whitening filter 25' outside of the main signal path, so that it does not alter the input to unknown system 21 and to adaptive filter 22. This approach was already discussed in connection with Fig. 6.

Fig. 8 illustrates a second embodiment where, in contrast to the first embodiment in Fig. 7, pre-whitening is performed outside of the main signal path as already discussed in connection with Fig. 6. Figurative elements in Figs. 6-8 denoted by the same reference signs are basically the same. In contrast to the first embodiment in Fig. 7, in Fig. 8 LPC whitening filter 25' (preferably based on an FIR structure) is moved into the update path of the LMS filter, i.e. into the path of update stage 27' parallel to adaptive filter 22. In Fig. 8, both adaptive filter 22 and unknown system 21 receive the non-whitened signal x(n) from upstream of LPC whitening filter 25', whereas in Fig. 7 adaptive filter 22 and unknown system 21 receive the filtered signal r_x(n). In order to make such an alternative system work, a further LPC whitening filter 26 is applied to the error signal e(n), so that the residual error signal r_e(n) after whitening is used as the error signal for the LMS update algorithm. The LPC whitening filter in block 26 is identical to the LPC whitening filter in block 25', i.e. the filter coefficients of both filters are identical. Therefore, the determination of the filter coefficients of LPC whitening filters 25' and 26 is carried out only in signal processing block 25' (see "LPC analysis" in block 25' of Fig. 8, which is missing in block 26), and the determined filter coefficients of LPC whitening filter 25' are sent to LPC whitening filter 26 (sending not shown in Fig. 8).

In Fig. 8 the normalizing scaling factor is determined in the same way as in Fig. 7; preferably the normalizing scaling factor corresponds to 1/E_M as already discussed in connection with Fig. 7. In this case a simple additional calculation (i.e. 2 MACs) yields an estimate E_M of the residual energy, which is then inverted (1 division) to determine a normalizing factor 1/E_M. Preferably, the filter coefficients of the LPC whitening filter 25' and the normalization value are updated every K samples of the input signal x(n), with K > 1.

The filter update stage 27' receiving the normalizing scaling factor from unit 25' updates the filter coefficients of adaptive filter 22 in line with Eq. 18.
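The following sketch illustrates the update path of Fig. 8 for one block of K samples: copies of the unwhitened input x(n) and of the error e(n) are whitened with identical LPC filters, and the whitened signals are used together with the normalizing factor 1/E_M in the update of Eq. 18. Filter state across blocks and the sample-by-sample interleaving with the main signal path are omitted for brevity; the helper names and argument layout are assumptions made for this example.

```python
import numpy as np
from scipy.signal import lfilter

def update_block(h, x_block, e_block, a, E_M, mu):
    """One K-sample pass through the update path of Fig. 8 (sketch).

    h        : adaptive filter coefficients (numpy array of length N)
    x_block  : unwhitened input samples of the current block
    e_block  : error samples e(n) of the same block
    a        : whitening-filter coefficients [1, a_1, ..., a_M] from the
               most recent LPC analysis (block 25')
    E_M      : residual-energy estimate from the same analysis
    """
    N = len(h)
    norm = 1.0 / E_M                    # the single division per block
    r_x = lfilter(a, [1.0], x_block)    # whitening filter in block 25'
    r_e = lfilter(a, [1.0], e_block)    # identical whitening filter in block 26
    for n in range(N - 1, len(x_block)):
        r_x_hist = r_x[n - N + 1:n + 1][::-1]   # r_x(n), r_x(n-1), ..., r_x(n-N+1)
        h = h + mu * norm * r_x_hist * r_e[n]   # Eq. 18
    return h
```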

As already discussed above, for the systems in Figs. 7 and 8 where the filter coefficients of the first whitening filter and the normalizing scaling factor are updated once every K samples, the computational cost C per sample for computing the preferred normalizing scaling factor 1/E_M is defined by Eq. 17 as repeated below:

C(K) = (2/K) MACs + (1/K) Divisions   (Eq. 17)

As K increases starting from 1, the cost per sample to compute the normalizing factor decreases, reducing the complexity with respect to standard NLMS (where the cost per sample for computing the normalizing scaling factor is 2 MACs + 1 division). Preferably, the update rate for updating the filter coefficients of the LPC whitening filter 25' and for updating the normalizing scaling factor is chosen such that the statistical characteristics of the input signal are essentially stationary over the update period. In this case, the energy estimate and therefore the normalizing factor will be very similar to the normalizing factor in the NLMS algorithm. This allows a complexity reduction without significant impact on performance.
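As a quick illustration of Eq. 17, the per-sample cost of the normalizing factor can be expressed in MAC-equivalents, using the later assumption that one division costs 24 MACs; the function name is introduced only for this example.

```python
def normalizing_cost(K, macs_per_division=24):
    """Per-sample cost of Eq. 17, C(K) = 2/K MACs + 1/K divisions,
    expressed in MAC-equivalents."""
    return (2 + macs_per_division) / K

# K = 1  (per-sample update, as in standard NLMS): 26.0 MAC-equivalents
# K = 32 (block update):                            0.8125 MAC-equivalents
```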

While it is useful to analyze the reduction in the cost of calculating the normalizing term, it is also interesting to examine what percentage of the computational cost of the entire adaptive filtering algorithm this reduction represents.

For calculating the complexity reduction of the entire adaptive filtering algorithm, the following assumptions are made:

1. Computing the filter output of an adaptive length-N FIR filter (i.e. adaptive filter 22 or linear predictors in LPC whitening filters) requires N MACs per sample.

2. Performing a length-N LMS filter coefficient update requires N MACs.

3. The length of the LPC analysis window (i.e. the length of the window used for calculating the filter coefficients of the linear predictor) is chosen to be equal to the LMS filter order N.

4. LPC analysis for a length-M LPC filter with a length-N analysis window requires N·(M+1) MACs.

5. LPC analysis update period (inverse of update rate) is chosen to be equal to 1/2 of the length of the LMS filter.

6. One division requires 24 MACs.

7. Two LPC whitening filters are used as shown in Figs. 6 and 8.

The cost X1 for using the NLMS algorithm updated every sample is:

X1 = A + B + C + D + E1 + F1 + G   (Eq. 19)

Here, X1 indicates the average number of MACs per sample required for the complete NLMS including pre-whitening. The term A (= N) corresponds to the number of MACs for the adaptive FIR filter (see filter 22 in Fig. 6). The term B (= N) indicates the number of MACs for the LMS coefficient update. The term C (= M) corresponds to the number of MACs for the first LPC whitening filter (see filter 25 in Fig. 6). The term D (= M) indicates the number of MACs for the second LPC whitening filter (see filter 26 in Fig. 6). The term E1 (= 2) refers to the number of MACs required to compute a normalizing term, i.e. the power sum (the power sum is computed once per sample). The term F1 (= 24) corresponds to the number of MACs required to compute a normalizing scale factor based on the normalizing term (division), which is performed once per sample. The term G indicates the average number of MACs per sample required to perform LPC analysis, when assuming an update period of N/2. In view of assumption 4 above, the term G is equal to (N·(M+1))/(N/2) = 2·(M+1).

On the other hand, the cost X2 (i.e. the average number X2 of MACs per sample) for performing the preferred embodiment as shown in Fig. 8 is:

X2 = A + B + C + D + E2 + F2 + G (Eq. 20)

In Eq. 20, the term A (= N) refers to the number of MACs for the adaptive FIR filter (see filter 22 in Fig. 8). The term B (= N) indicates the number of MACs for the LMS coefficient update. The term C (= M) corresponds to the number of MACs for the first LPC whitening filter (see filter 25' in Fig. 8). The term D (= M) denotes the number of MACs for the second LPC whitening filter (see filter 26 in Fig. 8). The term E2 refers to the average number of MACs per sample required to compute a normalizing term, when assuming an update period of N/2. The term E2 is equal to 2 / (N/2) = 4/N. The term F2 indicates the average number of MACs per sample required to compute the normalizing scale factor based on the normalizing term (division), assuming an update period of N/2. The term F2 is equal to 24 / (N/2) = 48/N. The term G corresponds to the average number of MACs per sample required to perform an LPC analysis, assuming an update period of N/2. In view of assumption 4 above, the term G is equal to (N·(M+1))/(N/2) = 2·(M+1).

The MIPS savings S (in percent) of the disclosed method with respect to NLMS can be estimated as:

S = 100 · (1 - (X2 / X1))   (Eq. 21)
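Under the assumptions listed above, Eqs. 19-21 can be evaluated directly. The following sketch does so for a few illustrative filter lengths and predictor orders; the function names are introduced only for this example, and the quoted values are computed from the equations, not taken from the patent's table.

```python
def nlms_cost(N, M):
    """X1 of Eq. 19: average MACs per sample for NLMS with pre-whitening."""
    G = (N * (M + 1)) / (N / 2)        # LPC analysis amortised over N/2 samples
    return N + N + M + M + 2 + 24 + G  # A + B + C + D + E1 + F1 + G

def proposed_cost(N, M):
    """X2 of Eq. 20: average MACs per sample for the embodiment of Fig. 8."""
    G = (N * (M + 1)) / (N / 2)
    E2 = 2 / (N / 2)                   # normalizing term, every N/2 samples
    F2 = 24 / (N / 2)                  # division, every N/2 samples
    return N + N + M + M + E2 + F2 + G

def savings(N, M):
    """S of Eq. 21, in percent."""
    return 100.0 * (1.0 - proposed_cost(N, M) / nlms_cost(N, M))

# Illustrative results derived from the equations:
# savings(64, 10)  -> about 12.9 %
# savings(256, 10) -> about  4.4 %
```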

The table hereafter presents the estimated complexity reduction S in line with Eq. 21 for the preferred embodiment over standard NLMS for a variety of adaptive filter lengths N and predictor orders M. For all entries, both an LPC analysis window length of N (see assumption 3 above) and an update period (inverse of update rate) of K = N/2 (see assumption 5 above) are assumed.

As can be seen in the table, the effective complexity reduction S is roughly inversely proportional to the adaptive filter order N and, to a smaller extent, to the predictor order M. This reduction is particularly important for portable devices that rely upon battery power, in that it provides longer battery life between recharges and/or allows processor resources to be allocated to other tasks.

The methods, apparatuses and filter systems disclosed in the application may be useful in any product that contains an adaptive LMS filter which converges based on an unknown input signal. In particular, the methods, apparatuses and filter systems disclosed in the application may be used in an adaptive equalizer, in an adaptive echo canceller or in a feedback adaptive noise control system. In Figs. 9-13 such applications are illustrated.

The first application in Fig. 9 is an adaptive equalizer, in which a signal passes through a channel 30 with an unknown frequency response which alters the input signal x(n) in an undesirable manner. A filter system comprising an adaptive filter 31, LPC pre-whitening filter blocks 32 and 33 and computation of a normalizing scaling factor in the filter block 32 based on the LPC analysis in the update path is used to make the overall frequency response of the signal path converge to the desired frequency response. The coefficients of adaptive filter 31 are updated in a filter coefficient update stage 34. This filter system is similar to the filter system in Fig. 8 (instead one could also use the structure of Fig. 7). In many cases, the desired overall frequency response is flat, and box 35 representing the filter with the desired frequency response is removed entirely (in this case box 35 is replaced by a through connection).

The second application in Fig. 10 is an adaptive echo canceller, in which a signal picked up by a near-end microphone (see microphone input in Fig. 10) is transmitted via a speech encoder 45 and a transmitter 46 to a far-end speaker (not shown in Fig. 10, the far-end speaker is part of channel 40), picked up by a far-end microphone (not shown in Fig. 10, the far-end microphone is part of channel 40) and returned via a receiver 47 and a speech decoder 48 to a near-end speaker (see speaker output in Fig. 10) as an undesirable echo. In Fig. 10 an adaptive filter (see echo path model FIR filter 41) with pre-whitening (see whitening filters 42 and 43) and computation of a normalizing scaling factor based on the LPC analysis in the update path is used. The coefficients of FIR filter 41 are updated in a filter coefficient update stage 44. The structure is similar to the structure of the filter system in Fig. 8 (instead one could also use the configuration of Fig. 7). The adaptive filter models the far-end echo path and produces an estimate y'(n) of the echo signal returned to the near-end speaker. The echo estimate output y'(n) from the adaptive filter is subtracted from the signal y(n) returned to the near-end speaker in order to cancel the undesirable echo.

The third application in Figs. 11-13 is a feedback adaptive noise control system, in which a signal y(n) output to a loudspeaker acoustically coupled to a human ear is distorted by the presence of an undesired environmental noise source n(n). The loudspeaker is part of an acoustic plant 50 corresponding to the acoustic channel. A signal y_1(n) picked up by a microphone (the microphone is part of acoustic plant 50) located in close proximity to the speaker contains the sum of the signal output from the loudspeaker and the interfering noise n(n). A plant model filter 51 generates an estimate y_2'(n) of the signal output from the loudspeaker, which is subtracted from the signal picked up by the microphone to generate an error signal e_2'(n). The error signal e_2'(n) is passed through a control filter 58 to generate a feedback anti-noise signal, which is added to a desired audio input signal x(n) to generate a modified input signal y(n). The anti-noise signal, when reproduced through the loudspeaker, will be equal in amplitude but opposite in phase compared to the undesired noise source n(n), resulting in a reduction in perceived noise level.

Figure 11 is a version of the third application in which both the plant model filter and the control filter are filter systems comprising an adaptive filter (see plant model FIR filter 51 and control FIR filter 58) with pre-whitening (see LPC whitening filters 52 and 53 for the plant model filter, and LPC whitening filters 54 and 55 for the control filter) and computation of a normalizing scaling factor based on LPC analysis in the update path. The filter coefficients are updated in filter coefficient update stages 56 and 57. These filter systems are similar to the filter system in Fig. 8 (instead one could also use the structure of Fig. 7).

Figure 12 is a second version of the third application. Figurative elements in Figs. 11 and 12 denoted by the same reference signs are similar or the same. In Fig. 12 the control filter is a filter system comprising an adaptive filter (see control FIR filter 58) with pre-whitening (see LPC whitening filters 54 and 55) and computation of a normalizing scaling factor based on LPC analysis in the update path. The filter coefficients are updated in a filter coefficient update stage 57. The filter system is similar to the filter system in Fig. 8 (instead one could also use the structure of Fig. 7). The plant model filter 51' is not adapted in this version of the third application.

Figure 13 is a third version of the third application. Figurative elements in Figs. 11 and 13 denoted by the same reference signs are similar or the same. In Fig. 13 the plant model filter is a filter system comprising an adaptive filter (see FIR filter 51) with pre-whitening (see LPC whitening filters 52 and 53) and computation of a normalizing scaling factor based on LPC analysis in the update path. The filter coefficients are updated in a filter coefficient update stage 56. The filter system is similar to the filter system in Fig. 8 (instead one could also use the structure of Fig. 7). The control filter 58' is not adapted in this version of the third application.

The present patent application provides new methods for adaptive filtering with input signal pre-whitening. In particular, the application provides new methods for reducing complexity in an adaptive LMS filtering algorithm utilizing LPC pre-whitening filters. When the normalizing factor is computed by 2 MAC operations and 1 division, the per-sample cost of this computation is inversely proportional to the update period K of the LPC pre-whitening filter coefficients (see Eq. 17), so the complexity reduction with respect to the NLMS algorithm grows as the update rate is lowered. In this case, for all LPC pre-whitening filter coefficient update rates, the computational complexity is less than or equal to that of the standard NLMS algorithm.