

Title:
FAST CONVERGING ADAPTIVE FILTER
Document Type and Number:
WIPO Patent Application WO/1995/006986
Kind Code:
A1
Abstract:
The present invention provides a fast affine projection adaptive filter (100) for frequent parameter updating and fast convergence with low complexity. The adaptive filter (100) generates an echo estimate signal based on an excitation signal wherein the adaptive filter comprises a relaxed E(n-1) and E(N-1, n) generator (130) coupled to a sliding windowed fast recursive least squares (FRLS) filter (125) for generating a fast affine projection coefficient and a fast affine projection correction vector. The adaptive filter (100) further comprises a provisional filter coefficient generator (145) and a provisional echo estimate correction signal generator (150, 155) for respectively generating a provisional echo estimate signal and a provisional echo estimate correction signal that are summed up to produce the echo estimate signal.

Inventors:
GAY STEVEN LESLIE
Application Number:
PCT/US1994/009753
Publication Date:
March 09, 1995
Filing Date:
August 30, 1994
Assignee:
AT & T CORP (US)
International Classes:
H03H21/00; H04B3/23; H04M9/08; (IPC1-7): H04J1/00; H04J3/00; H04M1/00; H04M9/00; H04M9/08; G06F17/00
Foreign References:
US5001701A1991-03-19
US5177734A1993-01-05
Description:
FAST CONVERGING ADAPTIVE FILTER

Field of the Invention

This invention relates to adaptive filters and, more particularly, to adaptive filters requiring fast convergence to a desired impulse response.

Background of the Invention

Adaptive filters are filters which adjust their filter parameters to obtain a desired impulse response of an unknown system. Parameter adaptation is based on signals exciting the system and the signals which are the system's response. The adaptive filter generates an error signal reflecting the difference between the adaptive filter's actual impulse response and the desired impulse response. Adaptive filters have found application in diverse areas such as data communications, where they are used in data echo cancellers and equalizers; target tracking, where they are used in adaptive beam formers; and telephony, where they are used in speech coders and electrical and acoustic echo cancellers. In adaptive filter design, a trade-off may exist between the speed of filter convergence (to a desired impulse response) and the computational complexity imposed on the processor implementing the filter. So, for example, conventional adaptive filters such as the affine projection adaptive filter (APAF) have been shown to achieve fast convergence at the expense of great computational complexity. Because of the complexity of these techniques the filter coefficients may not be updated as often as possible, that is, every sample period. Thus, the convergence of the adaptive filter coefficients is undesirably slowed.
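As a concrete (if simplified) illustration of the adaptation loop described above — excitation and response signals driving a coefficient update through an error signal — a conventional normalized LMS echo canceller can be sketched in a few lines of NumPy. This is a baseline technique, not the invention's algorithm, and all dimensions, step sizes, and signals below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
L, mu, T = 32, 0.5, 5000
h_ep = rng.standard_normal(L) / np.sqrt(L)   # unknown system (echo path)
x = rng.standard_normal(T)                   # excitation signal
d = np.convolve(x, h_ep)[:T]                 # system response to the excitation

xp = np.concatenate([np.zeros(L), x])        # zero history before n = 0
h = np.zeros(L)                              # adaptive filter coefficients
err = np.zeros(T)
for n in range(T):
    xv = xp[n + 1:n + L + 1][::-1]           # [x_n, ..., x_{n-L+1}]
    err[n] = d[n] - xv @ h                   # error signal drives adaptation
    h += mu * err[n] * xv / (xv @ xv + 1e-8) # normalized LMS coefficient update
```

The error power decays toward zero as h converges to h_ep; the APAF discussed next trades extra computation per sample for much faster convergence on correlated excitation.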

Summary of the Invention

The present invention provides an adaptive filter capable of frequent parameter updating (e.g., every sample period) and thus fast convergence with low complexity. Embodiments of the present invention mitigate APAF complexity due to matrix inversion and the fact that a given excitation vector is weighted and summed into the adaptive coefficient vector many times. An illustrative embodiment of the present invention achieves fast convergence through sample-by-sample updating with low complexity by exploiting the so-called "shift invariant" property of the filter excitation signal to simplify the matrix inverse and by a vector update procedure in which a given excitation vector is weighted and added into the coefficient vector estimate only once.

Brief Description of the Drawings

FIG. 1 presents an adaptive filter used in an acoustic echo canceller application;

FIG. 2 presents a first illustrative embodiment of the present invention in an acoustic echo canceller application;

FIG. 3 presents a detailed signal flow diagram of the Relaxed E(n-1) and E(N-1,n) Generator of FIG. 2;

FIG. 4 presents a detailed signal flow diagram of the \tilde{h}_n Generator and Filter of FIG. 2;

FIG. 5 presents a detailed signal flow diagram of the e_n Generator of FIG. 2;

FIG. 6 presents a detailed signal flow diagram of the Signal Correlator of FIG. 2;

FIG. 7 presents a second illustrative embodiment of the invention in the context of an acoustic echo canceller application;

FIG. 8 presents a detailed signal flow diagram of the Fast E(n-1) and E(N-1,n) Generator of FIG. 7; and

FIG. 9 presents a third illustrative embodiment of the present invention in the context of an acoustic echo canceller application.

Detailed Description

A. Introduction

FIG. 1 presents a block diagram of an adaptive filter 50 embedded in, for example, an echo canceller application. An incoming digitally sampled far-end excitation signal x_n is supplied to the adaptive filter 50 and to digital-to-analog (D/A) converter 105. D/A converter 105 converts the digitally sampled far-end signal x_n to an analog form x(t) in a conventional manner well known to those skilled in the art. Far-end signal x(t) is then applied to echo path 110 with analog impulse response h_ep(t), producing the analog echo signal d(t). In this example, echo path 110 can be either a long electrical path (such as that in a telecommunications network) or an acoustical path (such as that of a room). As such, echo cancellers may be used, e.g., in conjunction with a telecommunications network switch or a speaker phone. Analog summer 115 schematically represents the addition of echo d(t) to the near-end signal y(t). The near-end signal y(t) consists of a high-level near-end talker signal, which is sometimes present, and/or a (usually) low-level near-end background noise, which is always present. The output of schematic analog summer 115 is the overall analog system response signal s(t), which is applied to the adaptive filter 50 via analog-to-digital converter 120. Signal s_n is the digitally sampled version of the overall analog system response signal s(t). For purposes of presentation clarity, the analog echo path impulse response h_ep(t), analog echo signal d(t), and analog near-end signal y(t) will be replaced in the remainder of the detailed description with their digitally sampled counterparts, vector h_ep, digital signal d_n, and digital signal y_n, respectively, in a manner familiar to those skilled in the art. Adaptive filter 50 generates return signal e_n by subtracting the echo estimate, \hat{d}_n, from the system response, s_n. The return signal e_n is then sent to the far-end and supplied to the adaptive filter 50, which uses it to adjust its coefficients. Return signal e_n comprises the sampled near-end signal y_n and, when the adaptive filter 50 is not yet converged, a residual of uncancelled echo.

1. The Affine Projection Adaptive Filter

The affine projection adaptive filter (APAF) (K. Ozeki, T. Umeda, "An Adaptive Filtering Algorithm Using an Orthogonal Projection to an Affine Subspace and Its Properties," Electronics and Communications in Japan, Vol. 67-A, No. 5, 1984), in a relaxed and regularized form, is defined by the following two equations:

e_n = s_n - X_n^t h_{n-M},    (1)

h_n = h_{n-M} + μ X_n [X_n^t X_n + δI]^{-1} e_n.    (2)

This is a block adaptive filter where the block size is N. The excitation signal matrix, X n , is of dimension L by N and has the structure,

X_n = [x_n, x_{n-1}, ..., x_{n-(N-1)}],    (3)

where x_n is the L-length excitation vector of the filter at sample period n, defined as x_n = [x_n, ..., x_{n-L+1}]^t (where superscript "t" indicates a transpose). The adaptive tap weight vector is h_n = [h_{0,n}, ..., h_{L-1,n}]^t, where h_{i,n} is the ith tap weight at sample n. The N-length return signal vector is defined as e_n = [e_n, ..., e_{n-N+1}]^t, the N-length system output vector is defined as s_n = [s_n, ..., s_{n-N+1}]^t and is the sum of the N-length echo estimate vector d_n = [d_n, ..., d_{n-N+1}]^t and the N-length near-end signal vector, y_n = [y_n, ..., y_{n-N+1}]^t,

s_n = d_n + y_n = X_n^t h_ep + y_n.    (4)

The step-size parameter, μ, is a relaxation factor. The adaptive filter is stable for 0 < μ < 2.
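Equations (1) and (2) can be exercised directly in a small NumPy sketch (dimensions, step size, regularization, and signals are illustrative; the explicit matrix build and solve here is the very cost the fast version described later avoids):

```python
import numpy as np

rng = np.random.default_rng(3)
L, N, mu, delta, T = 16, 4, 0.5, 1e-3, 2000
h_ep = rng.standard_normal(L) / np.sqrt(L)     # unknown echo path
x = rng.standard_normal(T)                     # far-end excitation
s = np.convolve(x, h_ep)[:T]                   # system response (no near-end signal)

xp = np.concatenate([np.zeros(L + N), x])      # x_n = 0 for n < 0
sp = np.concatenate([np.zeros(N), s])
h = np.zeros(L)
err = np.zeros(T)
for n in range(T):
    k = n + L + N
    # excitation matrix X_n: columns x_n, ..., x_{n-N+1}, each of length L
    X = np.column_stack([xp[k - j - L + 1:k - j + 1][::-1] for j in range(N)])
    s_vec = sp[n + 1:n + N + 1][::-1]          # [s_n, ..., s_{n-N+1}]
    e_vec = s_vec - X.T @ h                    # equation (1), with M = 1
    eps = np.linalg.solve(X.T @ X + delta * np.eye(N), e_vec)
    h += mu * X @ eps                          # equation (2)
    err[n] = e_vec[0]
```

With M = 1 the coefficients are updated every sample, at the cost of forming and solving the regularized N-by-N system each sample period.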

The scalar δ is the regularization parameter for the autocorrelation matrix inverse. Where X_n^t X_n may have eigenvalues close to zero, creating problems for the inverse, X_n^t X_n + δI has a smallest eigenvalue of at least δ, which, if large enough, yields a well-behaved inverse. At each iteration, the signal in the adaptive filter moves forward by M samples. In practical implementations, M is set to the block size, N, to mitigate the O(N^2) computational complexity of the calculation of the vector

ε_n = [X_n^t X_n + δI]^{-1} e_n,    (5)

ε_n = [ε_{0,n}, ..., ε_{N-1,n}]^t    (6)

(S. G. Kratzer, D. R. Morgan, "The Partial-Rank Algorithm for Adaptive Beamforming," SPIE Vol. 564, Real Time Signal Processing VIII, 1985; and P. C. W. Sommen, "Adaptive Filtering Methods," Ph.D. Dissertation, Technische Universiteit Eindhoven, June 1992). However, faster convergence and/or a lower converged system misadjustment can be achieved if M is set to a lower value.

2. Projections Onto An Affine Subspace and Convergence

By manipulating equations (1), (2), and (4), and assuming that δ is small enough to be ignored and the additive system noise, y_n, is zero, the APAF tap update can be expressed as,

h_n = Q_n h_{n-M} + P_n h_ep,    (7)

where

Q_n = U_n diag[1-μ, ..., 1-μ, 1, ..., 1] U_n^t,    (8)

P_n = I - Q_n,    (9)

and the diagonal matrix in (8) has N (1-μ)'s and L-N 1's along the diagonal. The matrices Q_n and P_n represent projection matrices onto orthogonal subspaces when μ = 1 and relaxed projection matrices when 0 < μ < 1. Thus (7), and therefore (1) and (2), represent the (relaxed) projection of h_{n-M} onto the affine subspace defined by the (relaxed) linear projection matrix, Q_n, and the offset vector, P_n h_ep.

Equation (7) provides insight into the convergence of h_n to h_ep. Assume that μ = 1. As N increases from 1 toward L, the contribution to h_n from h_{n-M} decreases because the nullity of Q_n is increasing, while the contribution from h_ep increases because the rank of P_n is increasing. In principle, when N = L, h_n should converge to h_ep in one step, since Q_n has a rank of zero and P_n a rank of L. In practice, however, as N approaches L the condition number of the matrix X_n^t X_n begins to grow. As a result, the inverse of X_n^t X_n becomes more and more dubious and must be replaced with either a regularized or pseudo-inverse.
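Both claims in the preceding paragraph — the block a posteriori error vanishing for μ = 1, and one-step convergence when N = L — are easy to verify numerically. The sketch below uses random excitation matrices and a tiny δ purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
L, N, delta = 8, 4, 1e-10
h_ep = rng.standard_normal(L)             # "true" impulse response
h = np.zeros(L)                           # current estimate

# one update of (2) with mu = 1 on a random L-by-N excitation matrix
X = rng.standard_normal((L, N))
s = X.T @ h_ep                            # noiseless block response
e = s - X.T @ h
h1 = h + X @ np.linalg.solve(X.T @ X + delta * np.eye(N), e)
post = s - X.T @ h1                       # block a posteriori error: ~0

# with N = L the same update converges to h_ep in one step
Xf = rng.standard_normal((L, L))
sf = Xf.T @ h_ep
hf = h + Xf @ np.linalg.solve(Xf.T @ Xf + delta * np.eye(L), sf - Xf.T @ h)
```

The second case also illustrates the conditioning caveat: as N approaches L, the quality of the result depends on the condition number of Xf.T @ Xf, which is why the regularized inverse is needed in practice.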

3. The Connection Between APAF, Least Squares Adaptive Filter, and RLS

Using the matrix inversion lemma the APAF tap update described in (2) can be written as

h_n = h_{n-M} + μ [X_n X_n^t + δI]^{-1} X_n e_n    (10)

(S. G. Kratzer, D. R. Morgan, "The Partial-Rank Algorithm for Adaptive Beamforming," SPIE, Vol. 564, Real Time Signal Processing VIII, 1985). Note that X_n X_n^t is an L by L rank-deficient estimate of the autocorrelation matrix and that its inverse is regularized by the matrix δI. Consider the case where the sample advance, M, is one, and consider the vector e_n. By definition,

e_n = s_n - X_n^t h_{n-1} = [s_n - x_n^t h_{n-1} ; \bar{s}_{n-1} - \bar{X}_{n-1}^t h_{n-1}],    (11)

where the semicolon denotes vertical stacking, the matrix \bar{X}_{n-1} has dimension L by (N-1) and consists of the N-1 left-most (newest) columns of X_{n-1}, and the N-1 length vector \bar{s}_{n-1} consists of the N-1 upper (newest) elements of the vector s_{n-1}.

First, consider the lower N-1 elements of (11). Define the a posteriori return signal vector for sample period n-1, \bar{e}_{1,n-1}, as

\bar{e}_{1,n-1} = \bar{s}_{n-1} - \bar{X}_{n-1}^t h_{n-1} = \bar{e}_{n-1} - μ \bar{X}_{n-1}^t X_{n-1} [X_{n-1}^t X_{n-1} + δI]^{-1} e_{n-1}.    (12)

Now make the approximation,

X_{n-1}^t X_{n-1} + δI ≈ X_{n-1}^t X_{n-1},    (13)

which is valid as long as δ is significantly smaller than the eigenvalues of X_{n-1}^t X_{n-1}. Of course, this means that δI no longer regularizes the inverse of the N by N sample autocorrelation matrix. However, it still regularizes the rank-deficient L by L sample autocorrelation matrix inverse of (10). Using this approximation,

\bar{e}_{1,n-1} = (1-μ) \bar{e}_{n-1}.    (14)

Recognizing that the lower N-1 elements of (11) are the same as the upper N-1 elements of (12), (14) can be used to express e_n as

e_n = [e_n ; (1-μ) \bar{e}_{n-1}],    (15)

where \bar{e}_{n-1} is an N-1 length vector containing the uppermost N-1 elements of e_{n-1}. For μ = 1,

e_n = [e_n ; 0].    (16)

Using (16) in (10),

h_n = h_{n-1} + [X_n X_n^t + δI]^{-1} X_n e_n.    (17)

Equation (17) is a regularized, rank deficient least squares tap update. If N=n, δ=0, and the matrix inversion lemma is applied to a rank-one update of the inverted matrix in (17), (17) becomes the growing windowed RLS adaptive filter.

Indeed, equation (17) can be used as an alternative starting point for the adaptive filter of the present invention, and FAP adaptive filtering (with μ = 1) can be thought of as a fast, regularized, rank-deficient least squares adaptive filter. One advantage of this interpretation is that (10) may be obtained from (17) where an alternative definition for e_n, namely, e_n = [e_n ; 0], is used. As a side benefit, this obviates the need for the approximation in (13), and δ can once again be chosen large enough to regularize small eigenvalues in X_n^t X_n. While the two adaptive filters represented by relations (2) and (17) are slightly different, they yield the same convergence curves with only a modification of the parameter δ.

4. The Fast Affine Projection (FAP) Adaptive Filter

In echo cancellation the return signal vector, e_n, is the directly observed part of the adaptive filter, while the adaptive filter taps, h_n, are not directly observed. Therefore, it is permissible to maintain any form of h_n that is convenient in the adaptive filter, as long as the first sample of e_n is not modified in any way from that of equation (1). In the illustrative filters, the fidelity of e_n is maintained at each sample period, but h_n is not. Another vector, called the provisional echo path estimate \tilde{h}_n, is maintained instead. The provisional echo path estimate \tilde{h}_n is formed using only the last column of X_n, x_{n-N+1}, weighted and accumulated into \tilde{h}_{n-1} each sample period. The proper weighting factor, μE_{N-1,n}, is found below. This is N times less complex than weighting and accumulating all N columns of X_n as is done in equation (2) for the h_n update.

Using (5) in (2) the APAF tap update can be expressed as,

h_n = h_{n-1} + μ X_n ε_n.    (18)

The current echo path estimate, h_n, may be expressed in terms of the original echo path estimate, h_0, and the subsequent X_i's and ε_i's,

h_n = h_0 + μ Σ_{i=1}^{n} X_i ε_i.    (19)

The vector/matrix multiplication may be expanded,

h_n = h_0 + μ Σ_{i=1}^{n} Σ_{j=0}^{N-1} x_{i-j} ε_{j,i}.    (20)

Assuming that x_n = 0 for n < 0, (20) can be rewritten as,

h_n = h_0 + μ Σ_{k=0}^{N-1} x_{n-k} Σ_{j=0}^{k} ε_{j,n-k+j} + μ Σ_{k=N}^{n-1} x_{n-k} Σ_{j=0}^{N-1} ε_{j,n-k+j}.    (21)

If the first term and the second pair of summations on the right side of (21) are defined as

\tilde{h}_{n-1} = h_0 + μ Σ_{k=N}^{n-1} x_{n-k} Σ_{j=0}^{N-1} ε_{j,n-k+j},    (22)

and the first pair of summations in (21) is recognized as a vector-matrix multiplication,

X_n E_n = Σ_{k=0}^{N-1} x_{n-k} Σ_{j=0}^{k} ε_{j,n-k+j},    (23)

where,

E_n = [E_{0,n}, ..., E_{N-1,n}]^t,  E_{k,n} = Σ_{j=0}^{k} ε_{j,n-k+j},    (24)

then, (21) can be expressed as

h_n = \tilde{h}_{n-1} + μ X_n E_n.    (25)

It is seen from (22) that

\tilde{h}_n = \tilde{h}_{n-1} + μ x_{n-(N-1)} E_{N-1,n}.    (27)

Using (27) in (25), the current echo path estimate can alternately be expressed as

h_n = \tilde{h}_n + μ \bar{X}_n \bar{E}_n,    (28)

where \bar{E}_n is an N-1 length vector consisting of the uppermost N-1 elements of E_n, and \bar{X}_n consists of the N-1 left-most columns of X_n.

The vector \tilde{h}_n is the provisional echo path estimate. Observing (24), it is seen that E_n can also be calculated recursively. By inspection,

E_n = [0 ; \bar{E}_{n-1}] + ε_n.    (29)

Now, consider the relationship between e_n and e_{n-1}. From equation (15) it is apparent that the only real difficulty in calculating e_n from e_{n-1} is in calculating the scalar e_n, since h_{n-1} is not readily available. By definition,

e_n = s_n - \hat{d}_n.    (30)

Using (28) for h_{n-1} in the definition of \hat{d}_n yields

\hat{d}_n = x_n^t h_{n-1} = \tilde{d}_n + μ \tilde{r}_{xx,n}^t \bar{E}_{n-1},    (31)

where \tilde{d}_n is the provisional echo estimate produced by the response of the provisional echo path estimate, \tilde{h}_{n-1}, to the excitation vector, x_n,

\tilde{d}_n = x_n^t \tilde{h}_{n-1},    (32)

and the correlation vector, \tilde{r}_{xx,n} = \bar{X}_{n-1}^t x_n, may be updated recursively as,

\tilde{r}_{xx,n} = \tilde{r}_{xx,n-1} + x_n \bar{x}_n - x_{n-L} \bar{x}_{n-L},    (33)

where \bar{x}_n = [x_{n-1}, ..., x_{n-N+1}]^t.

To efficiently compute (29), a recursion for the vector ε_n is needed. Define R_n = X_n^t X_n + δI, let a_n and b_n denote the optimum forward and backward linear predictors for R_n, and let E_{a,n} and E_{b,n} denote their respective expected prediction error energies. Also, define \bar{R}_n and \underline{R}_n as N-1 by N-1 matrices consisting of the upper left and lower right corners of R_n, respectively. Then, given the following identities:

R_n^{-1} = [0, 0^t ; 0, \underline{R}_n^{-1}] + (1/E_{a,n}) a_n a_n^t,    (34)

R_n^{-1} = [\bar{R}_n^{-1}, 0 ; 0^t, 0] + (1/E_{b,n}) b_n b_n^t,    (35)

and the definitions,

\underline{ε}_n = \underline{R}_n^{-1} \underline{e}_n    (36)

(where \underline{e}_n is an N-1 length vector containing the N-1 lower elements of e_n) and

\bar{ε}_n = \bar{R}_n^{-1} \bar{e}_n,    (37)

one can multiply (34) from the right by e_n and use (5) and (36) to obtain,

ε_n = [0 ; \underline{ε}_n] + (1/E_{a,n}) (a_n^t e_n) a_n.    (38)

Similarly, multiplying (35) from the right by e_n, using (5) and (37), and solving for [\bar{ε}_n ; 0] yields,

[\bar{ε}_n ; 0] = ε_n - (1/E_{b,n}) (b_n^t e_n) b_n.    (39)

The quantities E_{a,n}, E_{b,n}, a_n, and b_n can be calculated efficiently (complexity 10N) using a conventional sliding windowed fast recursive least squares (FRLS) adaptive filter (J. M. Cioffi, T. Kailath, "Windowed Fast Transversal Filters Adaptive Algorithms with Normalization," IEEE Transactions on Acoustics, Speech and Signal Processing, Vol. ASSP-33, No. 3, June 1985; and A. Houacine, "Regularized Fast Recursive Least Squares Algorithms for Adaptive Filtering," IEEE Trans. Signal Processing, Vol. 39, No. 4, April 1991).
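The identities (34)-(39) can be checked numerically. The sketch below computes the forward and backward predictors of a sample matrix R_n by a direct solve (standing in for the FRLS recursions, which produce the same quantities in O(N)); all dimensions and signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
N, delta = 5, 1e-3
Xt = rng.standard_normal((40, N))        # stand-in for X_n^t (rows = excitation samples)
R = Xt.T @ Xt + delta * np.eye(N)        # R_n = X_n^t X_n + delta*I
e = rng.standard_normal(N)               # stand-in for the return signal vector e_n

# forward predictor a_n: R a = [E_a, 0, ..., 0]^t with a[0] = 1
a = np.concatenate([[1.0], -np.linalg.solve(R[1:, 1:], R[1:, 0])])
Ea = R[0, 0] + R[0, 1:] @ a[1:]
# backward predictor b_n: R b = [0, ..., 0, E_b]^t with b[-1] = 1
b = np.concatenate([-np.linalg.solve(R[:-1, :-1], R[:-1, -1]), [1.0]])
Eb = R[-1, -1] + R[-1, :-1] @ b[:-1]

eps = np.linalg.solve(R, e)                    # eps_n of (5)
eps_low = np.linalg.solve(R[1:, 1:], e[1:])    # lower-corner solve, as in (36)
eps_up = np.linalg.solve(R[:-1, :-1], e[:-1])  # upper-corner solve, as in (37)

rhs38 = np.concatenate([[0.0], eps_low]) + (a @ e) / Ea * a  # right side of (38)
rhs39 = eps - (b @ e) / Eb * b                               # right side of (39)
```

Here rhs38 reproduces eps exactly, and rhs39 reproduces [eps_up ; 0] exactly, which is the whole point: ε_n can be order-updated from corner solves and the predictors, without ever inverting R_n.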

The relationship between ε_n and ε_{n-1} is now investigated. It can be shown that

\underline{R}_n = \bar{R}_{n-1}.    (40)

Using (40), the definitions of \bar{e}_n and \underline{e}_n, (36), (37), and (15), we have,

\underline{ε}_n = \underline{R}_n^{-1} \underline{e}_n = \bar{R}_{n-1}^{-1} (1-μ) \bar{e}_{n-1} = (1-μ) \bar{ε}_{n-1}.
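The bookkeeping of (18)-(29) — each excitation vector weighted into the coefficient estimate only once, via the provisional estimate and the E recursion — can be verified against direct accumulation of (18) for an arbitrary sequence of ε vectors (random here, purely for the check; dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, mu, T = 8, 3, 0.7, 400
x = rng.standard_normal(T)
xp = np.concatenate([np.zeros(L + N), x])      # x_n = 0 for n < 0
eps = rng.standard_normal((T, N))              # arbitrary eps_n sequence
h0 = rng.standard_normal(L)

def xvec(n):                                   # [x_n, ..., x_{n-L+1}]
    k = n + L + N
    return xp[k - L + 1:k + 1][::-1]

def Xmat(n):                                   # columns x_n, ..., x_{n-(N-1)}
    return np.column_stack([xvec(n - j) for j in range(N)])

# direct accumulation of (18): every column weighted at every step
h_direct = h0 + mu * sum(Xmat(n) @ eps[n] for n in range(T))

# provisional-estimate scheme: one weighted accumulation per vector
h_prov = h0.copy()
E_bar = np.zeros(N - 1)
for n in range(T):
    E = np.concatenate([[0.0], E_bar]) + eps[n]        # recursion (29)
    h_prov += mu * E[-1] * xvec(n - (N - 1))           # update (27)
    E_bar = E[:-1]
Xbar = np.column_stack([xvec(T - 1 - j) for j in range(N - 1)])
h_rec = h_prov + mu * Xbar @ E_bar                     # correction (28)
```

The two results agree to machine precision, confirming that the reorganization is exact algebra rather than an approximation.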

B. Illustrative Embodiments for Use in Echo Cancellation

In this section, three illustrative FAP adaptive filters in accordance with the present invention are presented, one with relaxation, 0 < μ < 1 , one without relaxation, μ= 1, and one where the adaptation has been suspended. In all three cases, the sample advance, M, is set to one.

For clarity of explanation, the illustrative embodiments of the present invention are presented as comprising individual functional blocks. The functions these blocks represent may be provided through the use of either shared or dedicated hardware, including, but not limited to, hardware capable of executing software. For example, the functions of processors presented in FIGs. 2 and 7 may be provided by a single shared processor. (Use of the term "processor" should not be construed to refer exclusively to hardware capable of executing software.)

Illustrative embodiments may comprise digital signal processor (DSP) hardware, such as the AT&T DSP 16 or DSP32C, read-only memory (ROM) for storing software performing the operations discussed below, and random access memory (RAM) for storing DSP results. Very large scale integration (VLSI) hardware embodiments, as well as custom VLSI circuitry in combination with a general purpose DSP circuit, may also be provided.

1. Relaxed FAP Adaptive Filter

A first illustrative embodiment of the present invention is directed to a relaxed FAP adaptive filter. The filter process in accordance with this embodiment is summarized as follows:

1. Initialization: a_0 = [1, 0^t]^t, b_0 = [0^t, 1]^t, and E_{a,n} = E_{b,n} = δ.

2. Use sliding windowed Fast Kalman or Fast Transversal Filters to update E_{a,n}, E_{b,n}, a_n, and b_n.

3. \tilde{r}_{xx,n} = \tilde{r}_{xx,n-1} + x_n \bar{x}_n - x_{n-L} \bar{x}_{n-L}

4. \tilde{d}_n = x_n^t \tilde{h}_{n-1}

5. \hat{d}_n = \tilde{d}_n + μ \tilde{r}_{xx,n}^t \bar{E}_{n-1},  e_n = s_n - \hat{d}_n

6. e_n = [e_n ; (1-μ) \bar{e}_{n-1}]

7. ε_n = [0 ; \underline{ε}_n] + (1/E_{a,n}) (a_n^t e_n) a_n

8. [\bar{ε}_n ; 0] = ε_n - (1/E_{b,n}) (b_n^t e_n) b_n

9. E_n = [0 ; \bar{E}_{n-1}] + ε_n

10. \tilde{h}_n = \tilde{h}_{n-1} + μ x_{n-(N-1)} E_{N-1,n}

11. \underline{ε}_{n+1} = (1-μ) \bar{ε}_n

Step 2 is of complexity 10N when FTF is used. Steps 4 and 10 are both of complexity L, steps 3, 7, and 8 are each of complexity 2N, and steps 5, 6, 9 and 11 are of complexity N. This gives an overall complexity of 2L+20N.
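The eleven steps of the relaxed FAP process can be exercised end-to-end in NumPy. As a sketch only: the sliding windowed FRLS of step 2 is replaced by a direct per-sample predictor solve (O(N^3) here instead of O(N), but numerically the same quantities), and the dimensions, step size, and regularization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
L, N, mu, delta, T = 16, 4, 0.5, 1e-3, 4000
h_ep = rng.standard_normal(L) / np.sqrt(L)      # unknown echo path
x = rng.standard_normal(T)                      # far-end excitation
s = np.convolve(x, h_ep)[:T]                    # system response (no near-end signal)

xp = np.concatenate([np.zeros(L + N), x])       # x_n = 0 for n < 0

def xvec(n):                                    # [x_n, ..., x_{n-L+1}]
    k = n + L + N
    return xp[k - L + 1:k + 1][::-1]

h_prov = np.zeros(L)        # provisional echo path estimate
E_bar = np.zeros(N - 1)     # uppermost N-1 elements of E_{n-1}
e_bar = np.zeros(N - 1)     # uppermost N-1 elements of e_{n-1}
eps_low = np.zeros(N - 1)   # lowermost N-1 elements of eps_n (from step 11)
r = np.zeros(N - 1)         # correlation vector
err = np.zeros(T)

for n in range(T):
    k = n + L + N
    # step 3: sliding correlation update
    r += xp[k] * xp[k - 1:k - N:-1] - xp[k - L] * xp[k - L - 1:k - L - N:-1]
    # step 2 (direct stand-in for sliding windowed FRLS): forward and
    # backward predictors of R_n = X_n^t X_n + delta*I
    X = np.column_stack([xvec(n - j) for j in range(N)])
    R = X.T @ X + delta * np.eye(N)
    a = np.concatenate([[1.0], -np.linalg.solve(R[1:, 1:], R[1:, 0])])
    Ea = R[0, 0] + R[0, 1:] @ a[1:]
    b = np.concatenate([-np.linalg.solve(R[:-1, :-1], R[:-1, -1]), [1.0]])
    Eb = R[-1, -1] + R[-1, :-1] @ b[:-1]
    # steps 4-5: provisional echo estimate, echo estimate, return signal
    d_prov = xvec(n) @ h_prov
    err[n] = s[n] - (d_prov + mu * (r @ E_bar))
    # step 6: return signal vector e_n
    e_vec = np.concatenate([[err[n]], (1 - mu) * e_bar])
    # steps 7-8: eps_n via the predictor identities
    eps = np.concatenate([[0.0], eps_low]) + (a @ e_vec) / Ea * a
    eps_bar = (eps - (b @ e_vec) / Eb * b)[:-1]
    # steps 9-10: E_n recursion and one weighted accumulation per vector
    E = np.concatenate([[0.0], E_bar]) + eps
    h_prov += mu * E[-1] * xvec(n - (N - 1))
    # step 11 and bookkeeping for the next sample
    eps_low = (1 - mu) * eps_bar
    e_bar, E_bar = e_vec[:-1], E[:-1]
```

Setting mu = 1.0 in this sketch makes steps 6, 8, and 11 degenerate, which is exactly the simplification exploited by the second illustrative embodiment below.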

FIG. 2 presents the first illustrative embodiment of the present invention with relaxation in the context of an echo canceller. The far-end or "excitation" signal, x_n, is applied to a conventional sliding windowed fast recursive least squares (SW-FRLS) filter 125. SW-FRLS filter 125 produces in response the forward prediction coefficient vector a_n, the forward prediction error energy E_{a,n}, the backward prediction coefficient vector b_n, and the backward prediction error energy E_{b,n}, as is conventional. These output signals are in turn applied to relaxed E_{n-1} and E_{N-1,n} generator 130 (discussed below in connection with FIG. 3). Also applied to relaxed E_{n-1} and E_{N-1,n} generator 130 are the parameter 1-μ and the return signal vector e_n, which is produced by e_n generator 135. Relaxed E_{n-1} and E_{N-1,n} generator 130 produces the scalar output E_{N-1,n} and the vector output E_{n-1}. The scalar E_{N-1,n} is multiplied by the relaxation factor μ by multiplier 140, creating signal β_{3,n}, which is then applied to \tilde{h}_n generator and filter 145 (discussed below in connection with FIG. 4). The excitation signal x_n is also applied to \tilde{h}_n generator and filter 145, which produces the provisional echo estimate signal \tilde{d}_n. Signal correlator 150 receives signal x_n as input and produces in response correlation vector \tilde{r}_{xx,n}, which is in turn applied to dot product generator 155. Vector E_{n-1} from relaxed E_{n-1} and E_{N-1,n} generator 130 is also supplied to dot product generator 155, whose scalar output is multiplied by relaxation factor μ by multiplier 160. The output of multiplier 160 is supplied to summer 165, where it is added to provisional echo estimate \tilde{d}_n, producing echo estimate \hat{d}_n. Echo estimate \hat{d}_n is then subtracted from system response signal s_n by summer 122, producing return signal e_n. Return signal e_n is returned to the far-end as output and is also supplied to e_n generator 135. The scalar quantity 1-μ is also supplied to e_n generator 135, which produces the return signal vector e_n for input into relaxed E_{n-1} and E_{N-1,n} generator 130.

FIG. 3 shows the signal flow diagram of relaxed E_{n-1} and E_{N-1,n} generator 130 of FIG. 2. Input forward linear prediction vector a_n and return signal vector e_n from filter 125 are supplied to dot product generator 202. Also, forward prediction error energy E_{a,n}, also from filter 125, is supplied to inverter 204. The outputs of dot product generator 202 and inverter 204 are supplied to multiplier 206, which produces signal β_{a,n}. Input backward linear prediction vector b_n and return signal vector e_n are supplied to dot product generator 210 from filter 125. Also, backward prediction error energy E_{b,n} is supplied to inverter 212, also from filter 125. The outputs of dot product generator 210 and inverter 212 are supplied to multiplier 214, which produces signal β_{b,n}. The N elements of forward prediction vector a_n are denoted as elements a_{0,n} through a_{N-1,n}. Similarly, the N elements of backward prediction vector b_n are denoted as elements b_{0,n} through b_{N-1,n}. Forward linear prediction vector element a_{0,n} is multiplied by signal β_{a,n} at multiplier 215-0. The output of multiplier 215-0 is signal ε_{0,n}. Signal ε_{0,n} is the first element of the N-length signal vector ε_n, which has elements ε_{0,n} to ε_{N-1,n}. Elements ε_{k,n} of signal vector ε_n, where 0 < k ≤ N-1, are produced in the following manner. Backward linear prediction vector element b_{k-1,n} and signal β_{b,n} are supplied to multiplier 220-k-1, whose output is summed with ε_{k-1,n} at summer 225-k-1. The output of summer 225-k-1 is then multiplied by the factor 1-μ at multiplier 230-k-1 and the result is sent to sample delay memory element 235-k-1. Forward linear prediction vector element a_{k,n} is multiplied by signal β_{a,n} at multiplier 215-k and the result is summed with the output of sample delay memory element 235-k-1 at summer 240-k, whose output is ε_{k,n}.

The N-1 length vector E_{n-1} is formed from the first N-1 elements of ε_n, ε_{0,n} through ε_{N-2,n}, in the following manner. The first element of vector E_{n-1}, E_{0,n-1}, is formed by delaying ε_{0,n} one sample period in sample delay memory element 250-0. Then elements E_{k,n-1}, where 0 < k ≤ N-2, are formed by adding E_{k-1,n-1} to ε_{k,n} at summer 255-k-1, applying the result to sample delay memory element 250-k, the output of which is E_{k,n-1}. Finally, output E_{N-1,n} is formed by adding ε_{N-1,n} to E_{N-2,n-1} at summer 255-N-1.

FIG. 4 shows \tilde{h}_n generator and filter 145 of FIG. 2. \tilde{h}_n generator and filter 145 consists of \tilde{h}_n generator 301 and filter 326. The inputs to \tilde{h}_n generator 301 are excitation signal x_n and signal β_{3,n}. Excitation signal x_n is supplied to N-2 sample delay memory element 305, whose output is x_{n-N+2}. Signal x_{n-N+2} is supplied to signal delay memory elements 310, whose outputs are x_{n-N+1} through x_{n-L-N+2}, which are the elements of the signal vector x_{n-N+1}. Signal vector x_{n-N+1} is multiplied by input β_3 at multipliers 315, the outputs of which are summed with the outputs of sample delay elements 320 at summers 325. The outputs of summers 325 are the provisional adaptive filter coefficients, \tilde{h}_n, whose L elements are \tilde{h}_{0,n} through \tilde{h}_{L-1,n}, and which are in turn the output of \tilde{h}_n generator 301. The inputs to filter 326 are excitation signal x_n and provisional echo path estimate \tilde{h}_n. Signal x_n is supplied to signal delay memory elements 330, whose outputs are x_{n-1} through x_{n-L+1}, which together with signal x_n are the elements of the signal vector x_n. Elements of vector x_n are multiplied with elements of vector \tilde{h}_n by multipliers 335, the outputs of which are all summed together at summer 340. The output of summer 340 is the provisional echo estimate \tilde{d}_n, the output of filter 326.

FIG. 5 shows e_n generator 135 of FIG. 2. Return signal e_n and parameter 1-μ are inputs to e_n generator 135. The output of e_n generator 135 is the N-length vector e_n, which consists of elements e_{0,n} through e_{N-1,n}. Output element e_{0,n} is simply signal e_n. Output elements e_{k,n} for 0 < k ≤ N-1 are formed in the following manner. Output element e_{k-1,n} is multiplied by parameter 1-μ at multiplier 400-k-1 and the output of multiplier 400-k-1 is supplied to sample delay element 405-k-1, whose output is e_{k,n}.

FIG. 6 shows signal correlator 150 of FIG. 2. The input to signal correlator 150 is the excitation signal x_n. Input signal x_n is supplied to the N-1 sample delay elements 505, creating N-1 delayed outputs x_{n-1} through x_{n-N+1}, which form the vector \bar{x}_n. The vector \bar{x}_n is multiplied by signal x_n in scalar/vector multiplier 510. Signal x_{n-N+1}, which is output from the last sample delay element 505, is supplied to L-N+1 sample delay element 515, whose output is signal x_{n-L}. Signal x_{n-L} is supplied to the N-1 sample delay elements 520, creating N-1 delayed outputs x_{n-L-1} through x_{n-L-N+1}, which form the vector \bar{x}_{n-L}. The vector \bar{x}_{n-L} is multiplied by signal x_{n-L} in scalar/vector multiplier 525. The N-1 length output vector of scalar/vector multiplier 510 is supplied to vector summer 530, which performs an element-by-element summation of the output of scalar/vector multiplier 510 with the N-1 length vector \tilde{r}_{xx,n-1}. The N-1 length vector output of vector summer 530 is then supplied to vector summer 535, where the N-1 length vector output of scalar/vector multiplier 525 is subtracted. The N-1 length output vector of vector summer 535 is signal correlator 150 output vector \tilde{r}_{xx,n}. Vector \tilde{r}_{xx,n} is supplied to vector sample delay element 540. The output of vector sample delay element 540 is vector \tilde{r}_{xx,n-1}.

2. FAP Adaptive Filter Without Relaxation

If relaxation of the first illustrative embodiment is eliminated (that is, if μ is set to one), considerable savings in complexity can be realized. A second illustrative embodiment of the present invention is directed to an FAP adaptive filter without relaxation. The filter process in accordance with the second embodiment is summarized as follows:

1. Initialization: a_0 = [1, 0^t]^t and E_{a,n} = δ.

2. Use sliding windowed Fast Kalman or Fast Transversal Filters to update E_{a,n} and a_n.

3. \tilde{r}_{xx,n} = \tilde{r}_{xx,n-1} + x_n \bar{x}_n - x_{n-L} \bar{x}_{n-L}

4. \tilde{d}_n = x_n^t \tilde{h}_{n-1}

5. \hat{d}_n = \tilde{d}_n + \tilde{r}_{xx,n}^t \bar{E}_{n-1},  e_n = s_n - \hat{d}_n

6. E_n = [0 ; \bar{E}_{n-1}] + (e_n / E_{a,n}) a_n

7. \tilde{h}_n = \tilde{h}_{n-1} + x_{n-(N-1)} E_{N-1,n}

Here, steps 4 and 7 are still of complexity L, step 3 is of complexity 2N, and steps 5 and 6 are of complexity N. Taking into account the sliding windowed FTF, the total complexity is 2L+14N.
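The μ = 1 listing can likewise be sketched directly, again with a direct predictor solve standing in for the sliding windowed FRLS of step 2 (only the forward predictor is needed); parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
L, N, delta, T = 16, 4, 1e-3, 3000
h_ep = rng.standard_normal(L) / np.sqrt(L)    # unknown echo path
x = rng.standard_normal(T)
s = np.convolve(x, h_ep)[:T]                  # system response (no near-end signal)

xp = np.concatenate([np.zeros(L + N), x])     # x_n = 0 for n < 0

def xvec(n):                                  # [x_n, ..., x_{n-L+1}]
    k = n + L + N
    return xp[k - L + 1:k + 1][::-1]

h_prov = np.zeros(L)                          # provisional estimate
E_bar = np.zeros(N - 1)                       # uppermost N-1 elements of E_{n-1}
r = np.zeros(N - 1)                           # correlation vector, step 3
err = np.zeros(T)

for n in range(T):
    k = n + L + N
    # step 3: sliding correlation update
    r += xp[k] * xp[k - 1:k - N:-1] - xp[k - L] * xp[k - L - 1:k - L - N:-1]
    # step 2 stand-in: forward predictor of R_n = X_n^t X_n + delta*I
    X = np.column_stack([xvec(n - j) for j in range(N)])
    R = X.T @ X + delta * np.eye(N)
    a = np.concatenate([[1.0], -np.linalg.solve(R[1:, 1:], R[1:, 0])])
    Ea = R[0, 0] + R[0, 1:] @ a[1:]
    # steps 4-5: provisional and full echo estimates, return signal
    err[n] = s[n] - (xvec(n) @ h_prov + r @ E_bar)
    # step 6: E_n update using only the forward predictor
    E = np.concatenate([[0.0], E_bar]) + (err[n] / Ea) * a
    # step 7: one weighted accumulation per excitation vector
    h_prov += E[-1] * xvec(n - (N - 1))
    E_bar = E[:-1]
```

Compared with the relaxed sketch, the backward predictor, the e_n vector, and the eps splitting all disappear, which is the source of the 2L+14N complexity figure.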

FIG. 7 presents the second illustrative embodiment of the invention, namely, FAP adaptive filter without relaxation 600, in the context of the echo canceller application. The inputs and outputs of FAP adaptive filter without relaxation 600, and its arrangement within the echo canceller example, are the same as described in FIG. 1. Elements of the embodiment of FIG. 7 which are identical to those of FIG. 1 have been labeled with the same reference numerals. The far-end or excitation signal, x_n, is applied to the well-known sliding windowed fast recursive least squares (SW-FRLS) filter 625. SW-FRLS filter 625 produces in response the forward prediction coefficient vector a_n and the forward prediction error energy E_{a,n}, which in turn are applied to fast E_{n-1} and E_{N-1,n} generator 630. Also applied to fast E_{n-1} and E_{N-1,n} generator 630 is the return signal e_n. Fast E_{n-1} and E_{N-1,n} generator 630 produces the scalar output E_{N-1,n} and the vector output E_{n-1}. The scalar E_{N-1,n} is supplied to input β_3 of \tilde{h}_n generator and filter 145. The excitation signal x_n is also applied to \tilde{h}_n generator and filter 145, which produces the provisional echo estimate signal \tilde{d}_n. Signal correlator 150 receives signal x_n as input and produces in response correlation vector \tilde{r}_{xx,n}. Vector \tilde{r}_{xx,n} is in turn applied to dot product generator 655. Vector E_{n-1} from fast E_{n-1} and E_{N-1,n} generator 630 is also supplied to dot product generator 655, whose scalar output is supplied to summer 665, where it is added to provisional echo estimate \tilde{d}_n, producing echo estimate \hat{d}_n. Echo estimate \hat{d}_n is then subtracted from system response signal s_n by summer 622, producing return signal e_n. Return signal e_n is returned to the far-end as output.

FIG. 8 shows the signal flow graph of fast E_{n-1} and E_{N-1,n} generator 630 of FIG. 7. The inputs to fast E_{n-1} and E_{N-1,n} generator 630 are the expected forward prediction error energy E_{a,n}, the return signal e_n, and the forward prediction vector a_n. The expected forward prediction error energy E_{a,n} is supplied to inverter 705, which inverts E_{a,n} and supplies the output to multiplier 710. Multiplier 710 multiplies inverted E_{a,n} with return signal e_n. The output of multiplier 710 is then supplied to multipliers 715, which multiply the output of multiplier 710 with the N elements of forward prediction vector a_n, resulting in the N-length output vector A_n. Vector A_n has vector elements A_{0,n} through A_{N-1,n}. The N-1 length vector E_{n-1} is formed from the first N-1 elements of A_n, A_{0,n} through A_{N-2,n}, in the following manner. The first element of vector E_{n-1}, E_{0,n-1}, is formed by delaying A_{0,n} one sample period in sample delay memory element 720-0. Then elements E_{k,n-1}, where 0 < k ≤ N-2, are formed by adding E_{k-1,n-1} to A_{k,n} at summer 725-k-1 and applying the result to sample delay element 720-k (the output of which is E_{k,n-1}). Finally, output E_{N-1,n} is formed by adding A_{N-1,n} to E_{N-2,n-1} at summer 725-N-1.

30 3. FAP Adaptive Filter With Suspended Adaptation

Consider the case where the relaxation parameter, μ, is set to zero. From equation (2) it is seen that this is equivalent to suspending the coefficient adaptation. This is desirable in echo cancellation applications when a signal source, such as a near-end talker, is active in the echo path. Setting μ = 0 in the first illustrative embodiment obtains a third illustrative embodiment directed to suspended coefficient adaptation. The filter process associated with this third illustrative embodiment is summarized as follows:

Initialization: a 0 = h, O -b 0 = O 1 , lj . andE a>n =E b , n =δ.

2. Use sliding windowed Fast Kalman or Fast Transversal Filters to update E an , E b(n ,a n ,andb n .

4. d n =x_.h n

5. d n =d n , e n =s n -d n

HnHnin

10. h_n = ĥ_n

Step 2 still has complexity 10N when FTF is used. Step 4 has complexity L; steps 3, 7, and 8 each have complexity 2N; and steps 5 and 9 each have complexity N. The overall complexity is L + 18N. Note that steps 5, 10, and 11 are simply "renaming" operations, and step 3 could be eliminated. However, since suspended adaptation is usually only temporary, the correlation vector r_{xx,n} must remain current to properly resume operation of, e.g., the first or second illustrative embodiment.
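The per-sample work that remains when adaptation is suspended can be sketched as follows. This is a minimal illustration of steps 4 and 5 only (the prediction and correlation updates of steps 2 and 3 are omitted); the function and argument names are hypothetical.

```python
import numpy as np

def suspended_step(h_hat, x_vec, s_n):
    """One sample period with mu = 0: the provisional coefficients
    h_hat (length L) are frozen, yet the echo estimate and return
    signal are still produced and sent to the far-end."""
    d_hat = float(np.dot(x_vec, h_hat))  # step 4: d_hat_n = x_n^T h_hat_n (complexity L)
    e_n = s_n - d_hat                    # step 5: e_n = s_n - d_n, with d_n = d_hat_n
    return e_n
```

Note that the coefficient vector is read but never written, which is exactly what suspending adaptation means at the sample level.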

For the echo cancellation application, the first illustrative embodiment may be used when a relaxation parameter other than one (1) is chosen. When adaptation is to be inhibited due to the detection of a near-end signal, the relaxation parameter is simply set to zero for the duration of the near-end signal's presence. While coefficient adaptation is inhibited, production of the return signal, e_n, continues.

Alternatively, if the relaxation parameter is chosen to be one during adaptation, the second illustrative embodiment may be used. Here, when a near-end signal is detected, the echo canceller may switch to the third illustrative embodiment where, as before, coefficient adaptation is inhibited but e_n continues to be produced. The advantage of using the second and third illustrative embodiments is lower computational complexity.
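The gating of adaptation by a near-end speech detector can be sketched in one line. The function name, its arguments, and the detector itself are hypothetical; the source only specifies that μ is forced to zero while near-end speech is present.

```python
def relaxation(near_end_active, mu_nominal=1.0):
    """Hypothetical near-end gating of the relaxation parameter:
    mu is forced to zero while near-end speech is detected (which
    suspends coefficient adaptation) and restored afterwards."""
    return 0.0 if near_end_active else mu_nominal
```

With mu_nominal = 1 this corresponds to switching between the second and third illustrative embodiments; with any other nominal value it corresponds to setting μ = 0 in the first embodiment.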

FIG. 9 presents the third illustrative embodiment of the invention, namely, FAP adaptive filter with suspended adaptation 800, in the context of an echo canceller application. The inputs and outputs of FAP adaptive filter with suspended adaptation 800, and its arrangement within the echo canceller example, are the same as described for FIG. 1. As with the embodiment of FIG. 7, elements of the embodiment of FIG. 9 which are identical to those of FIG. 1 have been labeled with the same reference numerals. When near-end signal y_n contains a near-end talker's speech, it is well known to those skilled in the art that, in order to prevent divergence of the echo path estimate h_n, the echo canceller must suspend adaptation of the adaptive filter coefficients. During such periods, return signal e_n is still produced and sent to the far-end. Conventional devices known as near-end speech detectors, not shown in FIG. 1, provide echo cancellers with information that the near-end talker is active. Thus, adaptive filters must provide the means for the echo canceller to temporarily suspend adaptation of the adaptive filter coefficients while simultaneously maintaining production of return signal e_n during the presence of near-end speech activity. Means must also be provided for the adaptive filter to resume adaptation of the adaptive filter coefficients at the conclusion of near-end speech activity. For the FAP adaptive filter, this means that while adaptation of the provisional echo path estimate ĥ_n is suspended, production of return signal e_n continues, along with the signals and vectors required to resume adaptation. One method to achieve suspended adaptation is simply to set the relaxation parameter μ to zero in FAP adaptive filter with relaxation 100. FIG. 9, however, shows a preferred embodiment, since it has lower computational complexity.

In FAP adaptive filter with suspended adaptation 800, the far-end or excitation signal x_n is applied to the well known sliding windowed fast recursive least squares (SW-FRLS) filter 125. SW-FRLS filter 125 produces in response the forward prediction coefficient vector a_n, the forward prediction error energy E_{a,n}, the backward prediction coefficient vector b_n, and the backward prediction error energy E_{b,n}. These in turn are applied to relaxed Ē_{n−1} and E_{N−1,n} generator 130. Also applied to relaxed Ē_{n−1} and E_{N−1,n} generator 130 are the parameter 1 − μ (where μ is set to zero) and the return signal vector e_n, which is produced by e_n generator 135. Relaxed Ē_{n−1} and E_{N−1,n} generator 130 produces the scalar output E_{N−1,n} and the vector output Ē_{n−1}, which must be maintained current while adaptation is suspended.

The excitation signal x_n is supplied to ĥ_n filter 326, which produces the provisional echo estimate signal d̂_n; this signal is aliased to signal d_n. Signal correlator 150 receives signal x_n as input and produces in response the correlation vector r_{xx,n}, which must be maintained current while adaptation is suspended. Echo estimate d_n is subtracted from system response signal s_n by summer 822, producing return signal e_n. Return signal e_n is returned to the far-end as output and is also supplied to e_n generator 135. The scalar quantity 1 − μ (where μ is set to zero) is also supplied to e_n generator 135, which produces the return signal vector e_n for input into relaxed Ē_{n−1} and E_{N−1,n} generator 130.

C. Discussion

In each of the three illustrative embodiments presented above, long delay lines of the excitation signal x n are shown in multiple places. The person of ordinary skill in the art will recognize that certain efficiencies of implementation may be obtained by use of one or more shared delay lines for x n .
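One possible realization of such sharing is a single delay line for x_n from which every consumer reads the most recent samples it needs, instead of each block keeping its own copy. The sketch below is an assumption, not the patent's implementation; the class and method names are hypothetical, and a ring buffer is just one reasonable design choice.

```python
import numpy as np

class SharedDelayLine:
    """Sketch of a single shared delay line for the excitation
    signal x_n, sized to the longest history any consumer needs."""

    def __init__(self, length):
        self.buf = np.zeros(length)

    def push(self, x_n):
        # Shift the history by one sample period and store the newest sample.
        self.buf = np.roll(self.buf, 1)
        self.buf[0] = x_n

    def recent(self, n):
        # Newest-first view of the last n samples.
        return self.buf[:n]
```

A consumer needing an N-sample window (e.g., the SW-FRLS filter) calls recent(N), while one needing an L-sample window (e.g., the ĥ_n filter) calls recent(L), with both reading from the same storage.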