

Title:
APPARATUS AND METHOD FOR MULTICHANNEL INTERFERENCE CANCELLATION
Document Type and Number:
WIPO Patent Application WO/2018/193028
Kind Code:
A1
Abstract:
An apparatus for multichannel interference cancellation in a received audio signal comprising two or more received audio channels to obtain a modified audio signal comprising two or more modified audio channels is provided. The apparatus comprises a first filter unit (112) being configured to generate a first estimation of a first interference signal depending on a reference signal. Moreover, the apparatus comprises a first interference canceller (114) being configured to generate a first modified audio channel of the two or more modified audio channels from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal. Furthermore, the apparatus comprises a second filter unit (122) being configured to generate a second estimation of a second interference signal depending on the first estimation of the first interference signal. Moreover, the apparatus comprises a second interference canceller (124) being configured to generate a second modified audio channel of the two or more modified audio channels from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

Inventors:
LUIS VALERO, Maria (Wurzelbauerstraße 2, Nürnberg, 90409, DE)
HABETS, Emanuel (Schwedenstraße 13, Spardorf, 91080, DE)
ANNIBALE, Paolo (Friedrich-Bauer-Str. 36, Erlangen, 91058, DE)
LOMBARD, Anthony (Vogelherd 92, Erlangen, 91058, DE)
WILD, Moritz (Kleinreuther Weg 45, Nürnberg, 90408, DE)
RUTHA, Marcel (Am Sunder 31a, Ingolstadt, 85051, DE)
Application Number:
EP2018/060006
Publication Date:
October 25, 2018
Filing Date:
April 19, 2018
Assignee:
FRAUNHOFER-GESELLSCHAFT ZUR FÖRDERUNG DER ANGEWANDTEN FORSCHUNG E.V. (Hansastraße 27c, München, 80686, DE)
FRIEDRICH-ALEXANDER-UNIVERSITAET ERLANGEN-NUERNBERG (Schlossplatz 4, Erlangen, 91054, DE)
International Classes:
H04R3/02; H04M9/08
Foreign References:
EP2890154A12015-07-01
US20130301840A12013-11-14
EP2574082A12013-03-27
US20140334620A12014-11-13
Other References:
SEN M KUO ET AL: "MULTIPLE-MICROPHONE ACOUSTIC ECHO CANCELLATION SYSTEM WITH THE PARTIAL ADAPTIVE PROCESS", DIGITAL SIGNAL PROCESSING, ACADEMIC PRESS, ORLANDO,FL, US, vol. 3, no. 1, 1 January 1993 (1993-01-01), pages 54 - 63, XP000361272, ISSN: 1051-2004, DOI: 10.1006/DSPR.1993.1007
E. HANSLER; G. SCHMIDT: "Acoustic Echo and Noise Control: A practical Approach", 2004, WILEY
S. HAYKIN: "Adaptive Filter Theory", 2001, PRENTICE-HALL
W. KELLERMANN: "Strategies for combining acoustic echo cancellation and adaptive beamforming microphone arrays", PROC. IEEE ICASSP, April 1997 (1997-04-01), pages 219 - 222, XP010226174, DOI: doi:10.1109/ICASSP.1997.599608
O. SHALVI; E. WEINSTEIN: "System identification using nonstationary signals", IEEE TRANS. SIGNAL PROCESS., vol. 44, no. 8, 1996, pages 2055 - 2063, XP011057527
S. GANNOT; D. BURSHTEIN; E. WEINSTEIN: "Signal enhancement using beamforming and nonstationarity with applications to speech", IEEE TRANS. SIGNAL PROCESS., vol. 49, no. 8, August 2001 (2001-08-01), pages 1614 - 1626, XP011059366
I. COHEN: "Relative transfer function identification using speech signals", IEEE TRANS. SPEECH AUDIO PROCESS., vol. 12, no. 5, September 2004 (2004-09-01), pages 451 - 459, XP011116323, DOI: doi:10.1109/TSA.2004.832975
R. TALMON; I. COHEN; S. GANNOT: "Relative transfer function identification using convolutive transfer function approximation", IEEE TRANS. AUDIO, SPEECH, LANG. PROCESS., vol. 17, no. 4, May 2009 (2009-05-01), pages 546 - 555, XP011253712, DOI: doi:10.1109/TASL.2008.2009576
G. REUVEN; S. GANNOT; I. COHEN: "Joint noise reduction and acoustic echo cancellation using the transfer-function generalized sidelobe canceller", SPEECH COMMUNICATION, vol. 49, no. 7-8, August 2007 (2007-08-01), pages 623 - 635, XP022146164, DOI: doi:10.1016/j.specom.2006.12.008
R. TALMON; I. COHEN; S. GANNOT: "Convolutive transfer function generalized sidelobe canceler", IEEE TRANS. AUDIO, SPEECH, LANG. PROCESS., vol. 17, no. 7, September 2009 (2009-09-01), pages 1420 - 1434, XP011271223, DOI: doi:10.1109/TASL.2009.2020891
T. DVORKIND; S. GANNOT: "Speaker localization in a reverberant environment", PROC. THE 22ND CONVENTION OF ELECTRICAL AND ELECTRONICS ENGINEERS IN ISRAEL (IEEEI, December 2002 (2002-12-01), pages 7 - 7
T. G. DVORKIND; S. GANNOT: "Time difference of arrival estimation of speech source in a noisy and reverberant environment", SIGNAL PROCESSING, vol. 85, no. 1, January 2005 (2005-01-01), pages 177 - 204, XP004656866, DOI: doi:10.1016/j.sigpro.2004.09.014
X. LI; L. GIRIN; R. HORAUD; S. GANNOT: "Estimation of the direct-path relative transfer function for supervised sound-source localization", IEEE TRANS. AUDIO, SPEECH, LANG. PROCESS., vol. 4, no. 11, November 2016 (2016-11-01), pages 2171 - 2186, XP011621706, DOI: doi:10.1109/TASLP.2016.2598319
C. YEMDJI; M. MOSSI IDRISSA; N. EVANS; C. BEAUGEANT; P. VARY: "Dual channel echo postfiltering for hands-free mobile terminals", PROC. IWAENC, September 2012 (2012-09-01), pages 1 - 4
W. KELLERMANN: "Joint design of acoustic echo cancellation and adaptive beamforming for microphone arrays", PROC. INTL. WORKSHOP ACOUST. ECHO NOISE CONTROL (IWAENC, 1997, pages 81 - 84
W. HERBORDT; W. KELLERMANN: "GSAEC - acoustic echo cancellation embedded into the generalized sidelobe canceller", PROC. EUROPEAN SIGNAL PROCESSING CONF. (EUSIPCO, vol. 3, September 2000 (2000-09-01), pages 1843 - 1846
W. HERBORDT; W. KELLERMANN; S. NAKAMURA: "Joint optimization of LCMV beamforming and acoustic echo cancellation", PROC. EUROPEAN SIGNAL PROCESSING CONF. (EUSIPCO, September 2004 (2004-09-01), pages 2003 - 2006, XP032760711
K.-D. KAMMEYER; M. KALLINGER; A. MERTINS: "New aspects of combining echo cancellers with beamformers", PROC. IEEE ICASSP, vol. 3, March 2005 (2005-03-01), pages 137 - 140, XP010792348, DOI: doi:10.1109/ICASSP.2005.1415665
Y. AVARGEL; I. COHEN: "Adaptive system identification in the short-time fourier transform domain using cross-multiplicative transfer function approximation", IEEE TRANS. AUDIO, SPEECH, LANG. PROCESS., vol. 6, no. 1, January 2008 (2008-01-01), pages 162 - 173, XP011197515, DOI: doi:10.1109/TASL.2007.910789
"System identification in the short-time Fourier transform domain with crossband filtering", IEEE TRANS. AUDIO, SPEECH, LANG. PROCESS., vol. 15, no. 4, May 2007 (2007-05-01), pages 1305 - 1319
"On multiplicative transfer function approximation in the short-time fourier transform domain", IEEE SIGNAL PROCESS. LETT., vol. 14, no. 5, May 2007 (2007-05-01), pages 337 - 340
I. COHEN: "Speech enhancement using a noncausal a priori SNR estimator", IEEE SIGNAL PROCESS. LETT., vol. 11, no. 9, September 2004 (2004-09-01), pages 725 - 728, XP011117262, DOI: doi:10.1109/LSP.2004.833478
J. B. ALLEN; D. A. BERKLEY: "Image method for efficiently simulating small-room acoustics", J. ACOUST. SOC. AM., vol. 65, no. 4, April 1979 (1979-04-01), pages 943 - 950
P. C. W. SOMMEN: "Partitioned frequency-domain adaptive filters", PROC. ASILOMAR CONF. ON SIGNALS, SYSTEMS AND COMPUTERS, 1989, pages 677 - 681, XP000217206
J. J. SHYNK: "Frequency-domain and multirate adaptive filtering", IEEE SIGNAL PROCESS. MAG., vol. 9, no. 1, January 1992 (1992-01-01), pages 14 - 37, XP000436314, DOI: doi:10.1109/79.109205
S. HAYKIN: "Adaptive Filter Theory", 2002, PRENTICE-HALL
M. DENTINO; J. MCCOOL; B. WIDROW: "Adaptive filtering in the frequency domain", PROC. OF THE IEEE, vol. 66, no. 12, December 1978 (1978-12-01), pages 1658 - 1659, XP002666092, DOI: doi:10.1109/PROC.1978.11177
G. A. CLARK; S. R. PARKER; S. K. MITRA: "A unified approach to time- and frequency-domain realization of FIR adaptive digital filters", IEEE TRANS. ACOUST., SPEECH, SIGNAL PROCESS., vol. 31, no. 5, October 1983 (1983-10-01), pages 1073 - 1083, XP002123672
A. OPPENHEIM; R. W. SCHAFER: "Digital Signal Processing", 1993, PRENTICE-HALL INC.
R. M. M. DERKX; G. P. M. ENGELMEERS; P. C. W. SOMMEN: "New constraining method for partitioned block frequency-domain adaptive filters", IEEE TRANS. SIGNAL PROCESS., vol. 50, no. 3, 2002, pages 2177 - 2186
Attorney, Agent or Firm:
SCHAIRER, Oliver et al. (Schoppe, Zimmermann Stöckeler, Zinkler, Schenk & Partner mb, Radlkoferstr. 2, München, 81373, DE)
Claims:
Claims

1. An apparatus for multichannel interference cancellation in a received audio signal comprising two or more received audio channels to obtain a modified audio signal comprising two or more modified audio channels, wherein the apparatus comprises:

a first filter unit (112; 312; 512) being configured to generate a first estimation of a first interference signal depending on a reference signal,

a first interference canceller (114; 314; 514) being configured to generate a first modified audio channel of the two or more modified audio channels from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal,

a second filter unit (122; 322; 522) being configured to generate a second estimation of a second interference signal depending on the first estimation of the first interference signal, and

a second interference canceller (124; 324; 524) being configured to generate a second modified audio channel of the two or more modified audio channels from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

2. An apparatus according to claim 1, wherein the first estimation of the first interference signal is a first estimation of a first acoustic echo signal, wherein the second estimation of the second interference signal is a second estimation of a second acoustic echo signal, wherein the first interference canceller (114; 314; 514) is configured to conduct acoustic echo cancellation on the first received audio channel to obtain the first modified audio channel, and wherein the second interference canceller (124; 324; 524) is configured to conduct acoustic echo cancellation on the second received audio channel to obtain the second modified audio channel.

3. An apparatus according to claim 1 or 2, wherein the two or more received audio channels and the two or more modified audio channels are channels of a transform domain, and wherein the reference signal and the first and second interference signals are signals of the transform domain.

4. An apparatus according to one of the preceding claims, wherein the two or more received audio channels and the two or more modified audio channels are channels of a short-time Fourier transform domain, and wherein the reference signal and the first and second interference signals are signals of the short-time Fourier transform domain.

5. An apparatus according to one of the preceding claims, wherein the second filter unit (122; 322; 522) is configured to determine a filter configuration depending on the first estimation of the first interference signal and depending on the second received audio channel, and wherein the second filter unit (122; 322; 522) is configured to determine the second estimation of the second interference signal depending on the first estimation of the first interference signal and depending on the filter configuration.

6. An apparatus according to claim 5, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration by minimizing a cost function or by minimizing an error criterion.

7. An apparatus according to claim 5 or 6, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration according to

ĥ(i, k) = Φ⁻¹(i, k) φ(i, k),

wherein Φ(i, k) is a covariance matrix of ŷ1(i, k),

wherein φ(i, k) is the cross-correlation vector between ŷ1(i, k) and y2(i, k),

wherein ŷ1(i, k) indicates the first estimation of the first interference signal,

wherein y2(i, k) indicates the second received audio channel,

wherein i denotes a time index, and wherein k indicates a frequency index.

8. An apparatus according to one of claims 1 to 4, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration for a second time index depending on the filter configuration for a first time index that precedes the second time index in time, depending on the first estimation of the first interference signal for the first time index, and depending on a sample of the second modified audio channel for the first time index.

9. An apparatus according to claim 8, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration for the second time index according to

ĥ(ℓ + 1, k) = ĥ(ℓ, k) + M(ℓ, k) ŷ1*(ℓ, k) e2(ℓ, k),

wherein ℓ + 1 indicates the second time index, and wherein ℓ indicates the first time index, and wherein k indicates a frequency index,

wherein ĥ(ℓ + 1, k) is the filter configuration for the second time index, and wherein ĥ(ℓ, k) is the filter configuration for the first time index,

wherein ŷ1(ℓ, k) is the first estimation of the first interference signal for the first time index,

wherein e2(ℓ, k) is the second modified audio channel for the first time index,

wherein M(ℓ, k) is a step-size matrix.

10. An apparatus according to one of claims 1 to 3, wherein the two or more received audio channels and the two or more modified audio channels are channels of a partitioned-block frequency domain, wherein each of the two or more received audio channels and the two or more modified audio channels comprises a plurality of partitions, and wherein the reference signal and the first and the second interference signals are signals of the partitioned-block frequency domain, wherein each of the reference signal and the first and the second interference signals comprises a plurality of partitions.

11. An apparatus according to claim 10, wherein the second filter unit (122; 322; 522) is configured to determine a filter configuration depending on the first estimation of the first interference signal and depending on the second received audio channel, wherein the second filter unit (122; 322; 522) is configured to determine the second estimation of the second interference signal depending on the first estimation of the first interference signal and depending on the filter configuration, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration for a second time index depending on the filter configuration for a first time index that precedes the second time index in time, depending on the first estimation of the first interference signal for the first time index, and depending on a sample of the second modified audio channel for the first time index.

12. An apparatus according to claim 11, wherein the second filter unit (122; 322; 522) is configured to determine the filter configuration in the partitioned-block frequency domain according to

ĥ(ℓ + 1, k) = ĥ(ℓ, k) + G Cμ ŷ1*(ℓ, k) e2(ℓ, k),

wherein ℓ + 1 indicates the second time index, and wherein ℓ indicates the first time index, and wherein k indicates a frequency index,

wherein ĥ(ℓ + 1, k) is the filter configuration for the second time index, and wherein ĥ(ℓ, k) is the filter configuration for the first time index,

wherein ŷ1(ℓ, k) is the first estimation of the first interference signal for the first time index,

wherein Cμ is a step-size matrix,

wherein e2(ℓ, k) is the second modified audio channel for the first time index, and

wherein G is a circular convolution constraining matrix.

13. An apparatus according to one of the preceding claims, wherein the received audio signal comprises three or more received audio channels, and wherein the modified audio signal comprises three or more modified audio channels, wherein the apparatus further comprises a third filter unit (132) and a third interference canceller (134), wherein the third filter unit (132) is configured to generate a third estimation of a third interference signal depending on at least one of the first estimation of the first interference signal and the second estimation of the second interference signal, wherein the third interference canceller (134) is configured to generate a third modified audio channel e3(t) of the three or more modified audio channels from a third received audio channel y3(t) of the three or more received audio channels depending on the third estimation of the third interference signal.

14. A method for multichannel interference cancellation in a received audio signal comprising two or more received audio channels to obtain a modified audio signal comprising two or more modified audio channels, wherein the method comprises:

generating a first estimation of a first interference signal depending on a reference signal,

generating a first modified audio channel of the two or more modified audio channels from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal,

generating a second estimation of a second interference signal depending on the first estimation of the first interference signal, and

generating a second modified audio channel of the two or more modified audio channels from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

15. A computer program for implementing the method of claim 14 when being executed on a computer or signal processor.

Description:
Apparatus and Method for Multichannel Interference Cancellation

Description

The present invention relates to audio signal processing and, in particular, to an apparatus and method for low-complexity multichannel interference cancellation. Modern hands-free communication devices employ multiple microphone signals for, e.g., speech enhancement, room geometry inference or automatic speech recognition. These devices range from voice-activated assistants, smart-home devices and smart speakers to smartphones, tablets or personal computers. Many of these devices are also equipped with loudspeakers. For such a device, e.g., a device which integrates at least one loudspeaker, an acoustic interference canceler is applied to each microphone's output to reduce the electro-acoustic coupling.

Acoustic echo cancellation (AEC) (see, e.g., [1]) is the most widely-used technique to reduce electro-acoustic coupling between loudspeaker(s) and microphone(s) in hands-free communication set-ups. Given such a set-up, microphones acquire, in addition to the desired near-end speech, acoustic echoes and background noise. AEC uses adaptive filtering techniques (see, e.g., [2]) to estimate the acoustic impulse responses (AIRs) between loudspeaker(s) and microphone(s). Subsequently, acoustic echo estimates are computed by filtering the available loudspeaker signal with the estimated AIRs. Finally, the estimated acoustic echoes are subtracted from the microphone signals, resulting in the cancellation of acoustic echoes.
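The AEC chain described above (adaptive AIR estimation, echo estimation by filtering, cancellation by subtraction) can be sketched with a time-domain NLMS adaptive filter. This is a minimal illustration, not the implementation of the present invention; the function name, filter length and step size are choices made for the example.

```python
import numpy as np

def nlms_echo_canceller(x, y, num_taps=64, mu=0.5, eps=1e-8):
    """Sketch of a single-channel AEC core: estimate the acoustic
    impulse response with NLMS, filter the loudspeaker reference x to
    obtain an echo estimate, and subtract it from microphone signal y."""
    h = np.zeros(num_taps)      # estimated AIR
    x_buf = np.zeros(num_taps)  # most recent reference samples, newest first
    e = np.zeros(len(y))        # echo-cancelled output
    for t in range(len(y)):
        x_buf = np.roll(x_buf, 1)
        x_buf[0] = x[t]
        y_hat = h @ x_buf       # acoustic echo estimate
        e[t] = y[t] - y_hat     # cancellation by subtraction
        # normalised LMS coefficient update
        h += mu * e[t] * x_buf / (x_buf @ x_buf + eps)
    return e, h
```

With a white-noise reference and a short true AIR, the residual echo in e decays towards zero as the filter converges to the true AIR.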

In the particular case of acoustic echo cancellation (AEC), the electro-acoustic coupling is caused by the far-end speaker signal that is reproduced by the loudspeaker. Yet, in the aforementioned hands-free communication devices, it can also be caused by the device's own feedback, music, or the voice assistant. The most straightforward solution to reduce the electro-acoustic coupling between loudspeaker and microphones is to place an acoustic interference canceler at the output of each microphone (see, e.g., [3]).

Relative transfer functions (RTFs) model the relation between frequency-domain AIRs, commonly denoted as acoustic transfer functions (ATFs). RTFs are commonly used in the context of multi-microphone speech enhancement (see, e.g., [5], [8], [12]). Considering more closely related applications, the estimation of residual echo relative transfer functions was employed in [13], [14] to estimate the power spectral density of the residual echo, i.e., the acoustic echo components that remain after cancellation, of a primary channel. To enhance the estimation process, a second microphone signal is used. The method proposed in [13], [14] estimates the relation between the primary signal after cancellation and a secondary microphone signal, providing a relation between the error in the estimation of the primary AIR and a secondary AIR. Finally, the residual echo relative transfer function is used to compute the power spectral density of the primary residual acoustic echo.
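The RTF notion can be illustrated numerically: given two ATFs H1 and H2 (the DFTs of two AIRs), the RTF H2/H1 maps the primary echo spectrum to the secondary echo spectrum without using the reference spectrum a second time. The impulse responses below are illustrative values, not taken from this document.

```python
import numpy as np

# Two short illustrative AIRs from one loudspeaker to two nearby microphones.
h1 = np.array([1.0, 0.5, 0.25, 0.1])
h2 = np.array([0.9, 0.55, 0.2, 0.12])

n_fft = 16
H1 = np.fft.rfft(h1, n_fft)   # primary acoustic transfer function (ATF)
H2 = np.fft.rfft(h2, n_fft)   # secondary ATF
rtf = H2 / H1                 # relative transfer function

# The RTF maps the primary echo spectrum X*H1 to the secondary echo
# spectrum X*H2, i.e. the reference spectrum X is not needed again.
X = np.fft.rfft(np.random.default_rng(1).standard_normal(8), n_fft)
echo2_via_rtf = rtf * (X * H1)
```

Multiplying the primary echo spectrum by the RTF reproduces exactly what filtering the reference with the secondary ATF would give.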

Considering the specific application of microphone array processing, several methods have been presented which aim at reducing the complexity of the entire speech enhancement algorithm, e.g., spatial filtering combined with AEC. For example, the use of a single AEC placed at the output of the spatial filter was first studied in [3], [15]. Alternative methods which aim at integrating acoustic echo cancellation and microphone array processing have been proposed in [8], [16], [18].

As the complexity of a multi-microphone acoustic interference canceler is proportional to the number of microphones, such an increase in complexity is not affordable for many modern devices. Low-complexity concepts for multichannel interference cancellation would therefore be highly appreciated.

The object of the present invention is to provide low complexity concepts for multichannel interference cancellation. The object of the present invention is solved by an apparatus according to claim 1, by a method according to claim 14 and by a computer program according to claim 15.

An apparatus for multichannel interference cancellation in a received audio signal comprising two or more received audio channels to obtain a modified audio signal comprising two or more modified audio channels according to an embodiment is provided. The apparatus comprises a first filter unit being configured to generate a first estimation of a first interference signal depending on a reference signal.

Moreover, the apparatus comprises a first interference canceller being configured to generate a first modified audio channel of the two or more modified audio channels from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal.

Furthermore, the apparatus comprises a second filter unit being configured to generate a second estimation of a second interference signal depending on the first estimation of the first interference signal.

Moreover, the apparatus comprises a second interference canceller being configured to generate a second modified audio channel of the two or more modified audio channels from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

Embodiments provide concepts, e.g., an apparatus and a method, for multichannel interference cancellation using relative transfer functions.

For example, for AEC, concepts according to embodiments use an estimate of a primary acoustic echo signal to compute estimates of the remaining, or secondary, acoustic echo signals. In order to do so, the relations between primary acoustic impulse responses (AIRs), i.e., the AIRs between the loudspeaker and the primary microphones, and secondary AIRs, i.e., the AIRs between the loudspeaker and secondary microphones, are identified. Subsequently, the secondary acoustic echo signals are computed by filtering a primary acoustic echo signal with the estimated relation between AIRs. Finally, cancellation is applied to each and every microphone signal. If the inter-microphone distance is small, these relations can be modeled using relatively short filters. Thus, the computational complexity can be reduced.

Moreover, a method for multichannel interference cancellation in a received audio signal comprising two or more received audio channels to obtain a modified audio signal comprising two or more modified audio channels according to an embodiment is provided.

The method comprises: Generating a first estimation of a first interference signal depending on a reference signal. - Generating a first modified audio channel of the two or more modified audio channels from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal.

Generating a second estimation of a second interference signal depending on the first estimation of the first interference signal. And:

Generating a second modified audio channel of the two or more modified audio channels from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

Furthermore, a computer program is provided, wherein the computer program is configured to implement the above-described method when being executed on a computer or signal processor.

In the following, embodiments of the present invention are described in more detail with reference to the figures, in which:

Fig. 1a illustrates an apparatus for multichannel interference cancellation according to an embodiment,

Fig. 1b illustrates an apparatus for multichannel interference cancellation according to another embodiment,

Fig. 1c illustrates an apparatus for multichannel interference cancellation according to a further embodiment,

Fig. 2 illustrates multi-microphone AEC,

Fig. 3 illustrates multi-microphone AEC according to an embodiment,

Fig. 4 illustrates multi-microphone AEC in the STFT domain,

Fig. 5 illustrates multi-microphone AEC in the STFT domain according to an embodiment,

Fig. 6 depicts the results corresponding to the simulations with truncated AIRs,

Fig. 7 depicts a comparison between AETF and RETF-based AEC with T60 = 0.15 s and L = 256 taps, and

Fig. 8 illustrates a comparison between AETF and RETF-based AEC with T60 = 0.35 s and L = 1024 taps.

Fig. 1a illustrates an apparatus for multichannel interference cancellation according to an embodiment.

The apparatus comprises a first filter unit 112 being configured to generate a first estimation of a first interference signal depending on a reference signal x(t).

Moreover, the apparatus comprises a first interference canceller 114 being configured to generate a first modified audio channel e1(t) of the two or more modified audio channels from a first received audio channel y1(t) of the two or more received audio channels depending on the first estimation of the first interference signal.

Furthermore, the apparatus comprises a second filter unit 122 being configured to generate a second estimation of a second interference signal depending on the first estimation of the first interference signal.

Moreover, the apparatus comprises a second interference canceller 124 being configured to generate a second modified audio channel e2(t) of the two or more modified audio channels from a second received audio channel y2(t) of the two or more received audio channels depending on the second estimation of the second interference signal.

Embodiments are based on the finding that the first estimation of the first interference signal may be used to generate the second estimation of a second interference signal. Reusing the first estimation of the first interference signal for determining the second estimation of the second interference signal reduces computational complexity compared to solutions that generate the second estimation of the second interference signal by using the reference signal instead of using the first estimation of the first interference signal.
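The complexity saving can be made concrete with a back-of-the-envelope tap count. The filter lengths below are hypothetical and only illustrate the scaling; the actual savings depend on the set-up.

```python
def taps_conventional(n_mics, primary_taps):
    # conventional approach: one full-length adaptive filter per microphone
    return n_mics * primary_taps

def taps_reuse(n_mics, primary_taps, secondary_taps):
    # reuse approach: one full-length primary filter, plus short filters
    # that map the first interference estimate to the remaining channels
    return primary_taps + (n_mics - 1) * secondary_taps

conventional = taps_conventional(4, 1024)  # 4 long filters
reused = taps_reuse(4, 1024, 64)           # 1 long + 3 short filters
```

For these hypothetical lengths the reuse approach needs 1216 taps instead of 4096, i.e., it scales with one long filter plus short per-channel filters rather than with one long filter per microphone.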

Some of the embodiments relate to acoustic echo cancellation (AEC).

In an embodiment, the first estimation of the first interference signal may, e.g., be a first estimation of a first acoustic echo signal, and the second estimation of the second interference signal may, e.g., be a second estimation of a second acoustic echo signal. The first interference canceller 114 may, e.g., be configured to conduct acoustic echo cancellation on the first received audio channel (e.g., by subtracting the first estimation of the first acoustic echo signal from the first received audio channel) to obtain the first modified audio channel. The second interference canceller 124 may, e.g., be configured to conduct acoustic echo cancellation on the second received audio channel (e.g., by subtracting the second estimation of the second acoustic echo signal from the second received audio channel) to obtain the second modified audio channel.

Fig. 1b illustrates an apparatus for multichannel interference cancellation according to another embodiment.

Compared to the apparatus of Fig. 1a, the apparatus of Fig. 1b further comprises a third filter unit 132 and a third interference canceller 134.

In the embodiment of Fig. 1b, the received audio signal comprises three or more received audio channels, and the modified audio signal comprises three or more modified audio channels. The third filter unit 132 is configured to generate a third estimation of a third interference signal depending on the first estimation of the first interference signal.

The third interference canceller 134 is configured to generate a third modified audio channel e3(t) of the three or more modified audio channels from a third received audio channel y3(t) of the three or more received audio channels depending on the third estimation of the third interference signal.

Fig. 1c illustrates an apparatus for multichannel interference cancellation according to a further embodiment.

Compared to the apparatus of Fig. 1a, the apparatus of Fig. 1c further comprises a third filter unit 132 and a third interference canceller 134. In the embodiment of Fig. 1c, the received audio signal comprises three or more received audio channels, and the modified audio signal comprises three or more modified audio channels.

The third filter unit 132 is configured to generate a third estimation of a third interference signal depending on the second estimation of the second interference signal. Thus, the embodiment of Fig. 1c differs from the embodiment of Fig. 1b in that generating the third estimation of the third interference signal is conducted depending on the second estimation of the second interference signal instead of depending on the first estimation of the first interference signal.

The third interference canceller 134 is configured to generate a third modified audio channel e3(t) of the three or more modified audio channels from a third received audio channel y3(t) of the three or more received audio channels depending on the third estimation of the third interference signal.

In other embodiments (which implement the optional dashed line 199 in Fig. 1c), the third filter unit 132 is configured to generate a third estimation of a third interference signal depending on the second estimation of the second interference signal and depending on the first estimation of the first interference signal.

Fig. 2 illustrates multi-microphone AEC according to the prior art. In that prior art approach, a first filter unit 282 is used to generate a first estimation of a first interference signal from a reference signal x(t).

A first interference canceller 284 then generates a first modified audio channel e1(t) from a first received audio channel y1(t) of the two or more received audio channels depending on the first estimation of the first interference signal.

In the prior art approach of Fig. 2, a second filter unit 292 generates a second estimation of a second interference signal from the reference signal x(t) that was also used by the first filter unit 282.

A second interference canceller 294 then generates a second modified audio channel eN(t) from a second received audio channel yN(t) of the two or more received audio channels depending on the second estimation of the second interference signal.

Some embodiments reduce the complexity of multi-microphone Acoustic Echo Cancellation (AEC) that is depicted in Fig. 2, by using a relative transfer function (RTF) based approach, as depicted in Fig. 3. Relative transfer functions are described in [4], [7].

Fig. 3 illustrates multi-microphone Acoustic Echo Cancellation (AEC) according to embodiments. In Fig. 3, a first filter unit 312 is used to generate a first estimation of a first interference signal from a reference signal x(t).

A first interference canceller 314 then generates a first modified audio channel e1(t) from a first received audio channel y1(t) of the two or more received audio channels depending on the first estimation of the first interference signal.

The apparatus of Fig. 3 now differs from Fig. 2 in that a second filter unit 322 generates a second estimation of a second interference signal depending on the first estimation of the first interference signal that was generated by the first filter unit 312.

A second interference canceller 324 then generates a second modified audio channel eN(t) from a second received audio channel yN(t) of the two or more received audio channels depending on the second estimation of the second interference signal.

Embodiments use an estimate of a primary interference signal to compute estimates of the remaining, or secondary, interference signals. To estimate a primary interference signal, a primary filter, which characterizes the relation between a reference signal and a primary received signal, is identified. An estimate of a primary interference signal is then obtained by filtering a reference signal with an estimate of a primary filter. Afterwards, the secondary filters, i.e., the filters that characterize the relation between an estimated primary interference signal and the secondary received signals, are identified. Subsequently, estimates of the secondary interference signals are computed by filtering an estimate of a primary interference signal with the estimated secondary filters. Finally, cancellation is applied to reduce the electro-acoustic coupling. If the distance between microphones is small, the secondary filters are shorter than the primary filters (see, e.g., [10], [19]), which leads to a reduction of the computational complexity.
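A minimal time-domain sketch of this cascade, using NLMS adaptive filters for both the primary and the (shorter) secondary filter, is given below. All function names, filter lengths and step sizes are illustrative assumptions of this sketch, not taken from the claims:

```python
import numpy as np

def nlms_step(w, x_buf, e, mu=0.5, eps=1e-8):
    """One NLMS update: w <- w + mu * e * x / (||x||^2 + eps)."""
    return w + mu * e * x_buf / (np.dot(x_buf, x_buf) + eps)

def cascaded_aec(x, y1, y2, L=64, P=8, mu=0.5):
    """Cascaded cancellation sketch: a primary filter (length L) maps the
    reference x to the primary interference; a shorter secondary filter
    (length P) maps the *estimated* primary interference to the secondary
    interference, as in the embodiments."""
    h1 = np.zeros(L)         # estimated primary filter
    a2 = np.zeros(P)         # estimated (shorter) secondary filter
    d1 = np.zeros(len(x))    # estimated primary interference signal
    e1 = np.zeros(len(x))    # primary signal after cancellation
    e2 = np.zeros(len(x))    # secondary signal after cancellation
    for t in range(len(x)):
        xb = x[max(0, t - L + 1):t + 1][::-1]
        xb = np.pad(xb, (0, L - len(xb)))
        d1[t] = np.dot(h1, xb)            # estimate primary interference
        e1[t] = y1[t] - d1[t]             # primary cancellation
        h1 = nlms_step(h1, xb, e1[t], mu)
        db = d1[max(0, t - P + 1):t + 1][::-1]
        db = np.pad(db, (0, P - len(db)))
        d2 = np.dot(a2, db)               # estimate secondary interference
        e2[t] = y2[t] - d2                # secondary cancellation
        a2 = nlms_step(a2, db, e2[t], mu)
    return e1, e2
```

Because the secondary filter operates on the estimated primary interference rather than on the reference, its length P can be much smaller than L, which is the source of the complexity reduction.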

Some embodiments are used for acoustic echo cancellation. To this end, Fig. 3 depicts a hands-free communication scenario with one loudspeaker (one transmitter) and N microphones (receivers). In this particular case, the reference signal is the loudspeaker signal x(t), the primary microphone signal is y1(t), without loss of generality, and t denotes the discrete time index. Further, an estimate of the primary filter is used to compute an estimate of the primary acoustic echo (interference) signal, and the primary signal after cancellation is obtained by subtracting this estimate from y1(t). As can be observed, a secondary acoustic echo signal is computed by filtering the estimate of the primary acoustic echo signal with an estimate of a secondary filter. It should be noted that a delay of D ≥ 0 samples is introduced to the secondary microphone signal. This is done to ensure that D non-causal coefficients of the secondary filters are estimated. In case the microphones need to be synchronized, the primary signal after cancellation also needs to be delayed by D samples. In contrast, classical interference cancellation schemes (as depicted in Fig. 2) compute estimates of the N received signals by filtering the reference signal x(t) with N estimated primary filters.

In the following, a step-by-step approach according to some of the embodiments is provided:

1.) A primary interference signal is estimated using a reference signal. In the specific application of acoustic echo cancellation, the former is the acoustic echo signal, and the latter is the loudspeaker signal. To do so:

1.1.) a primary filter, which characterizes the relation between a reference signal and a primary receiver signal, the latter being either

(a) a single receiver signal, or

(b) a linear combination of receiver signals,

is identified by using, e.g., adaptive filtering techniques.

1.2. ) a reference signal is filtered with an estimate of a primary filter to compute an estimate of a primary interference signal.

1.3.) interference cancellation is applied by subtracting an estimate of a primary interference signal from a primary received signal, this being either

(a) a single receiver signal, or

(b) a linear combination of receiver signals.

2.) A secondary interference signal is estimated based on an estimate of a primary interference signal. To do so:

2.1.) a secondary filter, which characterizes the relation between an estimate of a primary interference signal and a secondary received signal, is identified, using a secondary receiver signal or a secondary signal after cancellation and an estimate of a primary interference signal, by, e.g.,

i.) optimization of a cost-function or error criterion (e.g., mean-squared error, (weighted) least-squares error, etc.), or

ii.) an adaptive filtering technique in the time, frequency or sub-band domain.

(The secondary filter may, e.g., be considered as a filter configuration.)

2.2. ) an estimate of a primary interference signal is filtered with an estimate of a secondary filter, to compute an estimate of a secondary interference signal.

2.3. ) interference cancellation is applied by subtracting an estimate of a secondary interference signal from a secondary receiver signal.

3. ) Repeat 2. for each secondary interference signal.

4. ) Repeat 1., 2. and 3. for each reference signal.

5. ) Where a transmitter is a loudspeaker and a receiver is a microphone.

6. ) Where an estimate of a secondary interference signal can be used as an estimate of a primary interference signal leading to a cascaded configuration.

7.) Where, for more than two receivers, subsets of receivers can be defined, each of them having a primary receiver.

Further embodiments apply only some of the steps above and/or apply the steps in a different order.
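For illustration, step 2.1 i.) (identification by minimizing an error criterion) can be sketched as a batch least-squares fit of the secondary filter; the function name and the concrete least-squares formulation are assumptions of this sketch:

```python
import numpy as np

def identify_secondary_filter(d1, y2, P):
    """Batch least-squares identification of a length-P secondary filter
    relating the estimated primary interference d1 to the secondary
    received signal y2 (sketch of step 2.1 i.))."""
    T = len(d1)
    # column p holds d1 delayed by p samples
    X = np.column_stack(
        [np.concatenate([np.zeros(p), d1[:T - p]]) for p in range(P)])
    a, *_ = np.linalg.lstsq(X, y2, rcond=None)
    return a
```

In practice, an adaptive variant (step 2.1 ii.)) would track the same filter sample by sample or frame by frame instead of solving one batch problem.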

In the following, embodiments which use STFT-domain adaptive filters are described (STFT means short-time Fourier transform).

Given a hands-free communication set-up with one loudspeaker and N microphones, the n-th microphone signal can be expressed in the STFT domain as the sum of a near-end signal and an acoustic echo signal, where l and k are, respectively, the time frame and frequency indexes. The near-end signal comprises near-end speech and background noise, and the n-th acoustic echo is the result of the loudspeaker signal being propagated through the room and acquired by the n-th microphone. Its exact formulation in the STFT domain (see, e.g., [20]) involves transposes and conjugate transposes, denoted by superscripts, where K is the transform length. Further, the b-th partition of the acoustic echo transfer function (AETF) is a vector containing all frequency dependencies.

It should be noted that AETFs in the STFT domain, which are extensively analyzed in [20], are non-causal. Moreover, the number of partitions, or input frames, that are necessary to estimate L AIR coefficients depends on the frameshift R between subsequent input frames. Due to the non-causality of the AETFs, look-ahead frames are needed to compute the echo signals. Let us assume that the frequency selectivity of the STFT analysis and synthesis windows is sufficient such that the frequency dependencies can be neglected. In addition, for notational brevity, according to embodiments, it is assumed that a delay of B_nc frames is introduced to the reproduction path, as depicted in Fig. 4. In practice, the capturing path is commonly delayed instead, see, e.g., [7], [20].

The signals in Fig. 4 are signals in a transform domain. In particular, the signals in Fig. 4 are signals in the short-time Fourier transform domain (STFT domain). In Fig. 4, a first filter unit 482 is used to generate a first estimation of a first interference signal from a reference signal X(l,k).

A first interference canceller 484 then generates a first modified audio channel from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal.

In the approach of Fig. 4, a second filter unit 492 generates a second estimation of a second interference signal from the reference signal X(l,k) that was also used by the first filter unit 482.

A second interference canceller 494 then generates a second modified audio channel from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

Fig. 4 illustrates multi-microphone AEC in the STFT domain. Now, by using the convolutive transfer function (CTF) approximation (see, e.g., [7]), the echo signal can be written as a sum over partitions of the AETF applied to the reference signal, where (·)* denotes complex conjugation. Adaptive algorithms in AEC are driven by the error signal after cancellation, where ^ is used to denote estimates and the superscript H indicates the Hermitian (conjugate) transpose. Most adaptive filters used in AEC are of gradient descent type (see, e.g., [2]); thus, a generic update equation adds to the current filter estimate a step weighted by the step-size matrix of the adaptive filter, whose formulation depends on the specific adaptive algorithm used.
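The CTF echo computation and a generic gradient-type update can be sketched per frequency bin as follows. The scalar per-bin normalization below stands in for the step-size matrix and is an assumption of this sketch, as are all names:

```python
import numpy as np

def ctf_echo(H, X_hist):
    """CTF approximation per frequency bin: the echo frame is the sum over
    partitions b of H(b,k)^* X(l-b,k). H and X_hist have shape (B, K),
    with row b of X_hist holding the reference frame X(l-b, k)."""
    return np.sum(np.conj(H) * X_hist, axis=0)

def nlms_update(H, X_hist, E, mu=0.5, eps=1e-8):
    """Generic normalized gradient-descent update per bin,
    H <- H + mu * X * E^* / ||X||^2 (a scalar stand-in for the
    step-size matrix)."""
    norm = np.sum(np.abs(X_hist) ** 2, axis=0) + eps
    return H + mu * X_hist * np.conj(E)[None, :] / norm[None, :]
```

The error signal E is the microphone frame minus the CTF echo estimate; driving the update with E realizes the generic gradient-descent scheme described above.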

In the following, the usage of relative echo transfer functions according to embodiments is described.

Due to computational complexity restrictions, the implementation of multi-microphone AEC as depicted in Fig. 4 is not always feasible.

According to embodiments, it is proposed to reduce the complexity by using a RETF- based approach, as depicted in Fig. 5 (RETF means relative echo transfer function). Fig. 5 illustrates multi-microphone AEC in the STFT domain according to an embodiment.

Again, the signals in Fig. 5 are signals in a transform domain. In particular, the signals in Fig. 5 are signals in the short-time Fourier transform domain (STFT domain).

In Fig. 5, a first filter unit 512 is used to generate a first estimation of a first interference signal from a reference signal X(l,k). A first interference canceller 514 then generates a first modified audio channel from a first received audio channel of the two or more received audio channels depending on the first estimation of the first interference signal.

The apparatus of Fig. 5 now differs from Fig. 4 in that a second filter unit 522 generates a second estimation of a second interference signal depending on the first estimation of the first interference signal that was generated by the first filter unit 512. A second interference canceller 524 then generates a second modified audio channel from a second received audio channel of the two or more received audio channels depending on the second estimation of the second interference signal.

In embodiments, the second filter unit 122 may, e.g., be configured to determine a filter configuration depending on the first estimation of the first interference signal and depending on the second received audio channel, and the second filter unit 122 may, e.g., be configured to determine the second estimation of the second interference signal depending on the first estimation of the first interference signal and depending on the filter configuration.

For example, the second filter unit 122 is configured to determine the filter configuration by minimizing a cost function or by minimizing an error criterion, for example, by minimizing a mean-square error.

In the following, a particular example for determining such filter configurations is provided. The problem formulation is derived assuming that the filters are time-invariant, while the estimates are the ones that vary in time. Without loss of generality, the primary echo signal is defined as in (3). Under the previously made assumptions on the frequency dependencies, the n-th secondary echo signal can be written as a convolution of the primary echo signal with the n-th relative echo transfer function (RETF), where A_n(p,k) is the p-th partition of the n-th RETF.

Provided that the distance between primary and secondary microphones is relatively small, it is possible to assume that the non-causal partitions of A_n(p,k) are negligible for all n. It is worth mentioning that a few non-causal time-domain coefficients are nevertheless modeled by A_n(0,k). Under this assumption, no look-ahead is needed, and, consequently, no additional delay is introduced.

Finally, using the CTF approximation leads to expression (7) for the secondary echo signals, where P is the number of RETF partitions.

As the primary echo signal is not observable, according to embodiments, it is proposed to replace it in formula (7) by an estimate that can be obtained using a state-of-the-art AEC. To estimate the RETFs, according to embodiments, the error signal after cancellation is minimized, where the filter to be identified is the n-th stacked vector of RETF partitions. The optimum filter in the mean-square-error sense, which is obtained by minimizing the quadratic cost-function, is given by the inverse of the covariance matrix of the estimated primary echo signal, multiplied by the cross-correlation vector between the estimated primary echo signal and the secondary microphone signal, where E{·} denotes mathematical expectation. It should be noted that, under this assumption, the optimum filter models the relation between the estimated primary AETF and the n-th secondary one. For instance, considering the trivial case B = P = 1 with B_nc = 0, e.g., the multiplicative transfer function approximation (see, e.g., [21]), the n-th estimated RETF is equal to the ratio of the n-th and the primary estimated AETFs, which, once the primary acoustic echo canceler has converged, is equal to the RETF as defined in (7).

Compared to the problem of estimating RTFs from noisy observations (see, e.g., [4], [7], [22]), in our formulation there is no additional bias due to noise components that are correlated across channels.

Moreover, as the loudspeaker signal is known, the implementation of voice activity detectors (VADs) to control the estimation process is greatly simplified. In contrast, a double-talk detector is needed due to the fact that, in practice, the primary echo signal is approximated by its estimate and, consequently, the previously made assumption on the statistical relation between the estimated primary echo signal and the secondary microphone signals does not always hold.

In the following, embodiments which use adaptive RETF estimation are provided.

In such embodiments, the second filter unit 522 of Fig. 5 may, e.g., be configured to determine the filter configuration for a second time index using a step-size matrix. For example, the second filter unit 522 of Fig. 5 may, e.g., be configured to determine the filter configuration depending on the filter configuration for a first time index that precedes the second time index in time, depending on the first estimation of the first interference signal for the first time index, and depending on a sample of the second modified audio channel for the first time index.

In particular embodiments, the second filter unit 522 may, e.g., be configured to determine the filter configuration for the second time index as the filter configuration for the first time index plus an update term, wherein the first time index precedes the second time index in time, and wherein k indicates a frequency index. The update term depends on the first estimation of the first interference signal for the first time index, on the second modified audio channel for the first time index, and on a step-size matrix (for example, an inverse of a covariance matrix of the first estimation of the first interference signal).

Described in more detail, adaptive filters can be used to track slowly time-varying RETFs. Due to the fact that the estimated primary echo signal is an estimate of the echo signal acquired by the primary microphone, it cannot be assumed to be uncorrelated across time. More precisely, the off-diagonals of its covariance matrix are not negligible if the STFT windows are short, or if the overlap between them is large. Taking this into consideration, Newton's method (see, e.g., [2]) ensures a fast and stable convergence towards the optimum filter. In (11), η is a fixed step-size that is used to control the adaptation process. In practice, the covariance matrix is approximated by averaging over time, e.g., by using a first-order recursive filter, where β is the forgetting factor.

In the following, the performance evaluation is described.
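Before turning to the evaluation, the Newton-type RETF update with the recursively averaged covariance matrix (forgetting factor β) can be sketched for a single frequency bin. Function and variable names, as well as the regularization, are illustrative assumptions of this sketch:

```python
import numpy as np

def retf_newton_step(a, d1_hist, e2, Phi, eta=0.25, beta=0.9, eps=1e-6):
    """One Newton-type RETF update for a single frequency bin (sketch).
    d1_hist stacks the P most recent estimated primary echo frames,
    Phi is the recursively averaged covariance of d1_hist, and e2 is the
    secondary error signal after cancellation."""
    # first-order recursive covariance estimate with forgetting factor beta
    Phi = beta * Phi + (1 - beta) * np.outer(d1_hist, np.conj(d1_hist))
    # Newton step: a <- a + eta * Phi^{-1} d1_hist e2^*
    step = np.linalg.solve(Phi + eps * np.eye(len(a)), d1_hist * np.conj(e2))
    return a + eta * step, Phi
```

Because the inverse covariance whitens the temporally correlated primary echo estimate, this update converges faster than a plain gradient step when the STFT frames overlap strongly.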

To evaluate the proposed approach, three sets of experiments were conducted, for which the simulation set-up was designed as follows. Echo signals were generated by convolving a clean speech signal with simulated AIRs. The latter were generated using the image method (see, e.g., [23]) for a room of dimensions 3 × 4 × 2.5 m, and reverberation times T60 = 0.15 and 0.35 s. The length of the simulated AIRs was L = 4096 taps, at a sampling frequency of Fs = 16 kHz. The AIRs were generated for a set-up with two microphones and one loudspeaker. The baseline set-up used a distance between loudspeaker and primary microphone of l1 = 10 cm and an inter-microphone distance Δ; the distance between loudspeaker and secondary microphone follows from these parameters. The impact of these parameters on the performance was also analyzed. To this end, Δ = 3 cm and l1 = 20 cm were also evaluated.

The signals were transformed to the STFT domain using Hamming analysis and synthesis windows of length K = 512 with 75% overlap, thus R = 128 samples. The adaptive algorithm used to estimate both the AETFs (5) and RETFs (11) was Newton's method.

Thus, the step-size matrix in (5) is based on the inverse of the loudspeaker signal covariance matrix. As it is realistic to assume that the loudspeaker signal is uncorrelated across time, its covariance matrix was simplified to a diagonal form, where ⊙ denotes element-wise multiplication and I is the B × B identity matrix. It should be noted that, in spite of this simplification, the normalization factors are still partition-dependent.

The step-size factors were μ = 0.5/B and η = 0.225/P, and the forgetting factor was β = 0.9. Further, the adaptive filters and covariance matrices were not updated during speech pauses, and regularization was used to ensure the non-singularity of the covariance matrices. Finally, white Gaussian noise was added to the microphone signals to simulate a fixed segmental echo-to-noise ratio (SegENR). To make the differences in performance noticeable, a SegENR of 60 dB was used. Three sets of experiments were conducted:

1. The AIRs generated to simulate T60 = 0.15 s were truncated to length 256 taps, and used to generate the echo signals. The length of the estimated primary AIR was L = 256 taps.

2. Simulated environment with T60 = 0.15 s, the length of the estimated primary AIR being L = 256 taps.

3. Simulated environment with T60 = 0.35 s, the length of the estimated primary AIR being L = 1024 taps.

It should be recalled that the number of AETF partitions that are necessary to completely estimate L AIR coefficients depends on the frameshift; thus, at least K subsequent filter coefficients are partially estimated as well.

In all simulations, B partitions of the primary AETF were estimated, while the secondary AETFs and RETFs were estimated using different numbers of partitions, B_nc < B′ ≤ B and P, respectively. The secondary echo signals were then obtained by convolving, in the STFT domain, the secondary AETFs with the loudspeaker signal, and the RETFs with the estimated primary echo signal. The echo return loss enhancement (ERLE) was used to measure the echo reduction in the secondary channel, where ||·|| is the l2-norm and the measure is evaluated on the l-th frame of the secondary acoustic echo in the time domain. The outcome of these simulations is depicted in Figs. 6 to 8, where the ERLE measures were averaged over 60 frames for clarity. In these, the proposed RETF-based AEC is compared to state-of-the-art AEC using B and B′ = B_nc + P partitions for the AETF estimation. The latter condition is included to show a comparison with AETF-based AEC using fewer causal CTF partitions, which would also reduce the overall computational complexity.
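The frame-wise ERLE measure can be computed as sketched below; the frame length and function name are illustrative:

```python
import numpy as np

def erle_db(d, d_hat, frame_len=128):
    """Frame-wise ERLE in dB: 10*log10(||d_l||^2 / ||d_l - d_hat_l||^2),
    where d is the true secondary echo and d_hat its estimate, both in
    the time domain; trailing samples of incomplete frames are dropped."""
    n = (len(d) // frame_len) * frame_len
    D = d[:n].reshape(-1, frame_len)
    R = (d[:n] - d_hat[:n]).reshape(-1, frame_len)
    return 10.0 * np.log10(np.sum(D ** 2, axis=1)
                           / (np.sum(R ** 2, axis=1) + 1e-12))
```

A residual of 10% of the echo amplitude per frame, for example, corresponds to an ERLE of 20 dB.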

Fig. 6 depicts the results corresponding to the simulations with truncated AIRs. In particular, Fig. 6 depicts a comparison between AETF- and RETF-based AEC with truncated AIRs and L = 256 taps. The echo reduction obtained with P = 1 and 2, in the left and right sub-figures respectively, is shown for all conditions under test. It can be observed that for P = 1, the RETF-based approach converges to a higher ERLE value than the AETF-based one with B′ partitions, i.e., with only P causal partitions.

Further, the performance is only moderately worse than that of the AETF-based approach with B partitions. For P = 2 , all conditions under test perform similarly.

A performance comparison for T60 = 0.15 s is shown in Fig. 7. In particular, Fig. 7 depicts a comparison between AETF- and RETF-based AEC with T60 = 0.15 s and L = 256 taps.

The results depicted in the top-left and top-right sub-figures correspond to P = 1 and 2 for the baseline set-up. It can be observed that for P = 1 , the RETF-based approach outperforms the AETF-based one with the same number of causal partitions. For P = 2, the performance of the AETF-based approach is visibly enhanced, and the advantage obtained by using the RETF-based approach is diminished.

Nevertheless, the RETF-based approach still performs better, and nearly as well as the AETF-based one with B = 9 partitions. In the bottom, a comparison for different simulation set-ups is provided for P = 1. On the left, the results with different inter-microphone distances are shown, while, on the right, different distances between loudspeaker and primary microphone are evaluated. It can be observed that, for all conditions under test, enlarging any of these parameters impacts negatively on the canceler's performance. It should be noted that increasing the inter-microphone distance has a higher impact on the proposed approach, and that, in general, l1 has a higher impact on the canceler's performance. Still, for the parameters used in these simulations, the proposed approach is able to outperform AETF-based AEC with an equal number of causal partitions.

Finally, the results shown in Fig. 8 correspond to the simulated set-up with T60 = 0.35 s. In particular, Fig. 8 illustrates a comparison between AETF- and RETF-based AEC with T60 = 0.35 s and L = 1024 taps. The results obtained with P = 1 and 4 partitions are depicted in the left and right sub-figures, respectively.

It can be observed that the proposed method outperforms, in both test cases, the AETF-based approach with the same number of causal partitions. Further, for P = 4 it performs only moderately worse than the AETF-based AEC with B = 15. To sum up, it was shown that the proposed approach is able to outperform state-of-the-art AETF-based AEC with an equal number of causal partitions. Further, it was demonstrated that by using RETF-based AEC, the number of estimated partitions can be reduced, at the cost of a moderate loss in performance.

In the following, the usage of frequency-domain adaptive filters according to embodiments is described.

In particular, a description is provided using partitioned-block frequency-domain adaptive filters (PB-FDAFs) (see, e.g., [24]). The efficient implementation of frequency-domain adaptive filters (FDAFs) (see, e.g., [24], [26]), which are the frequency-domain counterpart of block-time-domain adaptive filters (see, e.g., [27], [28]), differs considerably from the STFT implementation. For more information, see, e.g., [20] and the references therein.

According to some embodiments, the two or more received audio channels and the two or more modified audio channels may, e.g., be channels of a partitioned-block frequency domain, wherein each of the two or more received audio channels and the two or more modified audio channels comprises a plurality of partitions. The reference signal and the first and the second interference signals may, e.g., be signals of the partitioned-block frequency domain, wherein each of the reference signal and the first and the second interference signals comprises a plurality of partitions.

In some embodiments, the second filter unit 122; 322; 522 may, e.g., be configured to determine a filter configuration depending on the first estimation of the first interference signal and depending on the second received audio channel. Moreover, the second filter unit 122; 322; 522 may, e.g., be configured to determine the second estimation of the second interference signal depending on the first estimation of the first interference signal and depending on the filter configuration. Furthermore, the second filter unit 122; 322; 522 may, e.g., be configured to determine the filter configuration for a second time index depending on the filter configuration for a first time index that precedes the second time index in time, depending on the first estimation of the first interference signal for the first time index, and depending on a sample of the second modified audio channel for the first time index.

Subsequently, a description of embodiments using PB-FDAFs is outlined using the overlap-save technique (see, e.g., [25], [29]). In the partitioned-block frequency-domain formulation of the microphone signal, the echo signal in the frequency domain is obtained after linearizing the outcome of a circular convolution of length K:

where F is the discrete Fourier transform (DFT) matrix of size K × K, and the frequency-domain representation of the b-th AIR partition is obtained by zero-padding the partition and applying F, where Q is the partition length and V is the length of the zero-padding. Further, the input loudspeaker signal is formulated as a K × K diagonal matrix (see, e.g., [25]).

It should be mentioned that the total number of linear components resulting from the circular convolution in (14) is K − Q + 1; yet, in order to simplify the subsequent derivations, according to embodiments, V = K − Q linear components are selected in (14). Now, it is possible to deduce that V is the output signal frame length, and that Q = K − V is the length of the wrap-around error, such that the general frequency-domain formulation of the output signals is equal to:

where the time-domain signal samples are indexed by the discrete time index t. For notational brevity, according to embodiments, a stacked matrix of frequency-domain input matrices and a stacked vector of frequency-domain AIR partitions are defined. Hereafter, it is possible to concisely formulate (14) using the circular convolution constraining matrix G in the frequency domain.

Applying the latter is equivalent to applying an inverse DFT, rejecting the circular components in the time domain by multiplying the result of the circular convolution with the circular convolution constraining window, and transforming the result of the linearization back to the frequency domain. It is important to emphasize that the formulation in the frequency domain is causal, as there is no need to account for a look-ahead in order to estimate the AETFs. In the frequency domain, the error signal after cancellation drives a generic PB-FDAF update equation that involves the circular correlation constraining matrix, where g is the time-domain circular correlation constraining window, and the operator diag(v) generates a diagonal matrix with the elements of v on its main diagonal.
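The selection of the V linear components via overlap-save, as described above, can be illustrated with a minimal sketch; the function name and parameter choices are illustrative assumptions:

```python
import numpy as np

def overlap_save_filter(x, h, K=16):
    """Overlap-save FIR filtering sketch with DFT length K, filter length
    Q = len(h), and output frame length V = K - Q. Of each length-K
    circular convolution, the first Q samples are wrap-around error and
    are discarded; only the V linear components are kept, as in the
    convolution constraint."""
    Q = len(h)
    V = K - Q
    H = np.fft.fft(h, K)
    y = np.zeros(len(x))
    buf = np.zeros(K)  # holds the last K input samples
    for start in range(0, len(x) - V + 1, V):
        buf = np.concatenate([buf[V:], x[start:start + V]])
        blk = np.real(np.fft.ifft(np.fft.fft(buf) * H))
        y[start:start + V] = blk[Q:]  # keep the V linear output samples
    return y
```

Discarding the first Q samples of each block is the time-domain action of the circular convolution constraining window; the gradient constraint plays the analogous role for the filter update.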

In a similar manner, for the formulation using RETFs, one can define the secondary echo signals by convolving the primary, or reference, echo signal with the RETFs, where the stacked quantities are defined analogously to those of the AETF formulation. It should be remembered that, in contrast to the STFT-domain formulation, frequency-domain AETFs and RETFs are causal, i.e., h_n(0) and a_n(0) do not model any non-causal coefficients. However, depending on the relative position of the primary microphone with respect to the secondary ones, RETFs can be causal or non-causal. Consequently, a look-ahead of P_nc partitions of the primary echo signal is needed to account for the possible non-causality of the frequency-domain RETFs a_n(p).

In practice, this can be overcome by delaying the secondary microphone signals, as depicted in Fig. 3, in the time or frequency domain. For synchronization, the primary error signal after cancellation has to be delayed, too. For notational brevity, according to embodiments, it is assumed for now that P_nc = 0.

As in (8), according to embodiments, the primary echo signal is approximated by its estimate to compute the estimates of the secondary echo signals. The error signal after cancellation is then formed accordingly, and minimizing the cost-function leads to the expression for the optimal RETFs in the frequency domain. Consequently, Newton's method takes the following form if one formulates the adaptive algorithm in the partitioned-block frequency domain.

In a more general embodiment, the second filter unit 122; 322; 522 is configured to determine the filter configuration in the partitioned-block frequency domain for the second time index from the filter configuration for the first time index and an update term, wherein the first time index precedes the second time index in time, and wherein k indicates a frequency index. The update term depends on the first estimation of the first interference signal for the first time index, on a step-size matrix C_n, on the second modified audio channel for the first time index, and on a circular convolution constraining matrix.

In the following, implementation and synchronization aspects of embodiments are considered.

In particular, a detailed description of non-causal (P_nc > 0) implementations according to embodiments is provided.

Due to the possible non-causality of the RETF filters, it is necessary to delay the secondary microphone signals, as depicted in Fig. 3, to ensure that the non-causal coefficients are also modeled by the estimated RETFs in the (PB) frequency domain. There are two strategies to do so:

Buffering the input signals to the secondary microphones on a sample basis, i.e., in the time domain. This allows the user to keep the lowest possible delay. However, for synchronization, the primary signal after cancellation has to be delayed accordingly, which implies having to transform the primary error signal back to the time domain.

Buffering the input signals to the secondary microphones in the frequency domain. Consequently, these have to be delayed on a frame basis, resulting in a higher delay than the one introduced in the time domain. The advantage of this option resides in the fact that it is not necessary to transform the primary signal after cancellation to the time domain. Consequently, the multichannel interference canceler can be interfaced directly in the frequency domain to a post-processor.
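The frame-based buffering of the second strategy can be sketched as follows; the class name and interface are illustrative assumptions, not taken from the document:

```python
import numpy as np
from collections import deque

class FrameDelay:
    """Frame-based delay line for frequency-domain frames (sketch):
    delays the secondary microphone frames by n_frames frames so that
    non-causal RETF coefficients can be modeled by a causal adaptive
    filter operating on the delayed frames."""
    def __init__(self, n_frames, frame_len):
        # pre-fill with zero frames so the first outputs are silence
        self.buf = deque(np.zeros(frame_len, complex)
                         for _ in range(n_frames))
    def push(self, frame):
        """Push the newest frame; return the frame delayed by n_frames."""
        self.buf.append(np.asarray(frame, complex))
        return self.buf.popleft()
```

Because whole frames are buffered, the added latency is a multiple of the frame shift, which is the trade-off against the sample-based strategy.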

In the following, two possible implementations are described in detail.

At first, embodiments with delayed secondary microphone signals are considered.

From (17) it is evident that a delay of P_nc partitions added to all secondary microphone signals enables the estimation of potential non-causalities of the RETFs a_n(p). The corresponding implementation is similar to the one depicted in Fig. 3, with D being an integer multiple of the partition size Q. In this way, the first P_nc partitions of the adaptive filter are used to model non-causal RETF coefficients. With this simple approach, at least 2 partitions are required in order to estimate causal as well as non-causal RETF coefficients; in this simple case, the first filter partition models the non-causal coefficients of a_n(−1).

Now, embodiments with symmetric gradient constraint are considered.

An improvement of the method described above considers the modification of the gradient constraint G in order to retain a maximum of causal as well as non-causal coefficients of the time-domain circular correlation. To this end, according to embodiments, the constraint G from (16) is approximated as follows:

To guarantee an alias-free output after the filtering, the convolution constraint in (14) also has to be modified accordingly:

Note that the above constraint discards past samples as well as the most recent output samples of the circular convolution in order to provide the linear convolution output; this causes a delay in the estimates of the secondary echo signals.

These symmetric constraints are nothing but the original time-domain constraints g and g, circularly shifted by a number of samples. Thus, the corresponding frequency-domain representations are obtained, respectively, where the constant matrix is the frequency-domain equivalent of the circular shift. For a practical implementation, the above matrix is not of interest, as the constraints are typically applied in the time domain.

Nevertheless, a similar matrix J can be defined to manipulate the signals in the frequency domain, before and after the usual constraints, obtaining the same selection of linear coefficients provided by (23) and (24). For instance, the desired weight update, using Newton's method, can be obtained accordingly. By using the above formula, according to embodiments, flexibility is gained, as the selection of linear coefficients is determined by the definition of J. In fact, J can be tailored to very specific cases; for example, it may implement a shorter shift, reducing the number of non-causal coefficients and consequently the system delay.
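The combination of a circular shift with the usual constraint can be sketched as follows. The shift value, buffer size, and tap positions below are illustrative assumptions; the sketch only demonstrates that shifting before zeroing the second half of the inverse DFT, and shifting back afterwards, retains a few non-causal coefficients alongside the causal ones:

```python
import numpy as np

def symmetric_constraint(W_f, shift):
    # Shifted gradient constraint: circularly shift the time-domain
    # coefficients, apply the standard constraint (zero the second
    # half of the 2K-point buffer), then undo the shift.
    w = np.fft.ifft(W_f)
    w = np.roll(w, shift)          # bring non-causal taps into the kept half
    w[len(w) // 2:] = 0.0          # standard constraint: zero second half
    w = np.roll(w, -shift)         # undo the circular shift
    return np.fft.fft(w)

K = 8
w = np.zeros(2 * K)
w[1] = 1.0        # causal tap, survives
w[-1] = 0.5       # non-causal tap (circular index -1), survives
w[K] = 0.7        # tap outside the shifted window, discarded
W_c = symmetric_constraint(np.fft.fft(w), shift=2)
w_c = np.real(np.fft.ifft(W_c))
```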

Now, a summary of implementation approaches using PB-FDAF is provided.

The choice of which implementation to use depends on the application scenario. It is evident that, without assumptions on the relative source-microphone positions, the introduction of a certain delay is necessary to achieve a high-quality filter output. The following table summarizes the presented implementation methods.

In the following, a complexity analysis is described for the specific case in which there is one primary channel and N - 1 secondary channels.

At first, the time domain is considered, and an exemplary complexity analysis is provided in terms of additions and multiplications. To this end, denote the length of the estimated primary filter as L and the length of the N - 1 estimated secondary filters as P, and assume that the primary and secondary filters are estimated using adaptive filtering techniques. The complexity per input signal sample of an adaptive filter of length M in the time domain is O(AF), where the complexity of the update equation O(Update, M) depends on the adaptive algorithm used and, in many cases, also on the filter length. Hence, if N adaptive filters are used in parallel (one per microphone), the algorithmic complexity of multi-microphone AEC is N O(AF). The proposed method is able to reduce the algorithmic complexity by reducing the length of the adaptive filters. The reduction in algorithmic complexity is then given by the ratio

In general, if the same adaptive algorithm is used for the estimation of both the primary and secondary filters, then the ratio is given by

The simplest example would be the following: if the least-mean-squares (LMS) algorithm (see, e.g., [2]) is used for the primary and secondary echo cancelers, O(Update) = 1 is independent of the filter length, and the ratio is given by
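Assuming, as in this LMS example, that the per-sample cost of an adaptive filter grows linearly with its length, the reduction can be evaluated numerically. The function name and the example figures below are illustrative, not taken from the text:

```python
def lms_complexity_ratio(N, L, P):
    # Ratio of the proposed method's per-sample cost (one primary
    # filter of length L plus N-1 short secondary filters of length P)
    # to that of N parallel full-length filters, assuming cost is
    # proportional to filter length (sketch assumption).
    proposed = L + (N - 1) * P
    conventional = N * L
    return proposed / conventional

# Example: 4 microphones, primary filter of 4096 taps,
# secondary filters of 512 taps.
r = lms_complexity_ratio(N=4, L=4096, P=512)
```

With these illustrative figures the proposed scheme needs roughly a third of the operations of four full-length cancelers.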

If different adaptive filters are used for the estimation of the primary and secondary filters, the computational complexity of the individual algorithms has to be carefully considered.

Now, the STFT domain is considered.

In the following, the complexity in terms of additions and multiplications is analyzed. To this end, first the complexity per partition of an adaptive filter in the STFT domain is considered, where O(FFT) is the complexity of a fast Fourier transform (FFT) of length K, O(CplxMult) = 6K is the complexity of a complex multiplication of length K (see, e.g., [30]), and the complexity of the update equation O(Update) depends on the adaptive algorithm used. Hence, if N adaptive filters are used in parallel (one per microphone), the algorithmic complexity of multi-microphone AEC per partition is N O(AF). The proposed method is able to reduce the algorithmic complexity if P < B. The reduction in algorithmic complexity is then given by the ratio
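The per-partition costs can be combined into a small numeric sketch. Here the 5K log2(K) FFT cost is a common textbook figure and an assumption of this sketch; B denotes the number of partitions of the primary filter, P that of the secondary filters, and the update cost is omitted for simplicity:

```python
import math

def partition_cost(K):
    # Cost of one filter partition of DFT length K: FFT work
    # (assumed 5*K*log2(K), a common textbook figure) plus one
    # complex multiplication of length K (6K real operations).
    return 5 * K * math.log2(K) + 6 * K

def stft_complexity_ratio(N, B, P, K):
    # Proposed: a primary filter with B partitions plus N-1 secondary
    # filters with P partitions each, versus N filters of B partitions.
    c = partition_cost(K)
    return ((B + (N - 1) * P) * c) / (N * B * c)

# With P < B the ratio is below 1, i.e., complexity is reduced.
r = stft_complexity_ratio(N=4, B=16, P=4, K=512)
```

Because the per-partition cost is identical for all filters here, it cancels in the ratio, which then depends only on the partition counts.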

Consequently, if the same adaptive filter is used for the primary and secondary echo cancelers, the ratio is given by

If different adaptive filters are used for the AETF and RETF estimation, the computational complexity of the individual algorithms has to be carefully considered.

Specific applications of embodiments may, e.g., implement a low complexity solution to MC-AEC for the following applications:

- Smart phones, tablets and personal computers.

- Voice-activated assistants, smart speakers and smart-home devices.

- Smart televisions.

Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like, for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.

Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software, or at least partially in hardware or at least partially in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.

Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine-readable carrier.
Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.

In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.

A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.

A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.

A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.

In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are preferably performed by any hardware apparatus.

The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.

The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.

The above-described embodiments are merely illustrative of the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the impending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.

References

[1] E. Hänsler and G. Schmidt, "Acoustic Echo and Noise Control: A Practical Approach", New Jersey, USA: Wiley, 2004.

[2] S. Haykin, "Adaptive Filter Theory", 4th ed. New Jersey, USA: Prentice-Hall, 2001.

[3] W. Kellermann, "Strategies for combining acoustic echo cancellation and adaptive beamforming microphone arrays," in Proc. IEEE ICASSP, Munich, Germany, Apr. 1997, pp. 219-222.

[4] O. Shalvi and E. Weinstein, "System identification using nonstationary signals," IEEE Trans. Signal Process., vol. 44, no. 8, pp. 2055-2063, 1996.

[5] S. Gannot, D. Burshtein, and E. Weinstein, "Signal enhancement using beamforming and nonstationarity with applications to speech," IEEE Trans. Signal Process., vol. 49, no. 8, pp. 1614-1626, Aug. 2001.

[6] I. Cohen, "Relative transfer function identification using speech signals," IEEE Trans. Speech Audio Process., vol. 12, no. 5, pp. 451-459, Sep. 2004.

[7] R. Talmon, I. Cohen, and S. Gannot, "Relative transfer function identification using convolutive transfer function approximation," IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 4, pp. 546-555, May 2009.

[8] G. Reuven, S. Gannot, and I. Cohen, "Joint noise reduction and acoustic echo cancellation using the transfer-function generalized sidelobe canceller," Speech Communication, vol. 49, no. 7-8, pp. 623-635, Aug. 2007.

[9] R. Talmon, I. Cohen, and S. Gannot, "Convolutive transfer function generalized sidelobe canceler," IEEE Trans. Audio, Speech, Lang. Process., vol. 17, no. 7, pp. 1420-1434, Sep. 2009.

[10] T. Dvorkind and S. Gannot, "Speaker localization in a reverberant environment," in Proc. 22nd Convention of Electrical and Electronics Engineers in Israel (IEEEI), Tel-Aviv, Israel, Dec. 2002.

[11] T. G. Dvorkind and S. Gannot, "Time difference of arrival estimation of speech source in a noisy and reverberant environment," Signal Processing, vol. 85, no. 1, pp. 177-204, Jan. 2005.

[12] X. Li, L. Girin, R. Horaud, and S. Gannot, "Estimation of the direct-path relative transfer function for supervised sound-source localization," IEEE Trans. Audio, Speech, Lang. Process., vol. 24, no. 11, pp. 2171-2186, Nov. 2016.

[13] C. Yemdji, M. Mossi Idrissa, N. Evans, C. Beaugeant, and P. Vary, "Dual channel echo postfiltering for hands-free mobile terminals," in Proc. IWAENC, Aachen, Germany, Sep. 2012, pp. 1-4.

[14] C. Yemdji, L. Lepauloux, N. Evans, and C. Beaugeant, "Method for processing an audio signal and audio receiving circuit," U.S. Patent 2014/0334620, 2014.

[15] W. Kellermann, "Joint design of acoustic echo cancellation and adaptive beamforming for microphone arrays," in Proc. Intl. Workshop Acoust. Echo Noise Control (IWAENC), London, UK, 1997, pp. 81-84.

[16] W. Herbordt and W. Kellermann, "GSAEC - acoustic echo cancellation embedded into the generalized sidelobe canceller," in Proc. European Signal Processing Conf. (EUSIPCO), vol. 3, Tampere, Finland, Sep. 2000, pp. 1843-1846.

[17] W. Herbordt, W. Kellermann, and S. Nakamura, "Joint optimization of LCMV beamforming and acoustic echo cancellation," in Proc. European Signal Processing Conf. (EUSIPCO), Vienna, Austria, Sep. 2004, pp. 2003-2006.

[18] K.-D. Kammeyer, M. Kallinger, and A. Mertins, "New aspects of combining echo cancellers with beamformers," in Proc. IEEE ICASSP, vol. 3, Philadelphia, USA, Mar. 2005, pp. 137-140.

[19] Y. Avargel and I. Cohen, "Adaptive system identification in the short-time Fourier transform domain using cross-multiplicative transfer function approximation," IEEE Trans. Audio, Speech, Lang. Process., vol. 16, no. 1, pp. 162-173, Jan. 2008.

[20] Y. Avargel and I. Cohen, "System identification in the short-time Fourier transform domain with crossband filtering," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 4, pp. 1305-1319, May 2007.

[21] Y. Avargel and I. Cohen, "On multiplicative transfer function approximation in the short-time Fourier transform domain," IEEE Signal Process. Lett., vol. 14, no. 5, pp. 337-340, May 2007.

[22] I. Cohen, "Speech enhancement using a noncausal a priori SNR estimator," IEEE Signal Process. Lett., vol. 11, no. 9, pp. 725-728, Sep. 2004.

[23] J. B. Allen and D. A. Berkley, "Image method for efficiently simulating small-room acoustics," J. Acoust. Soc. Am., vol. 65, no. 4, pp. 943-950, Apr. 1979.

[24] P. C. W. Sommen, "Partitioned frequency-domain adaptive filters," in Proc. Asilomar Conf. on Signals, Systems and Computers, 1989, pp. 677-681.

[25] J. J. Shynk, "Frequency-domain and multirate adaptive filtering," IEEE Signal Process. Mag., vol. 9, no. 1, pp. 14-37, Jan. 1992.

[26] S. Haykin, "Adaptive Filter Theory", 4th ed. Prentice-Hall, 2002.

[27] M. Dentino, J. McCool, and B. Widrow, "Adaptive filtering in the frequency domain," Proc. of the IEEE, vol. 66, no. 12, pp. 1658-1659, Dec. 1978.

[28] G. A. Clark, S. R. Parker, and S. K. Mitra, "A unified approach to time- and frequency-domain realization of FIR adaptive digital filters," IEEE Trans. Acoust., Speech, Signal Process., vol. 31, no. 5, pp. 1073-1083, Oct. 1983.

[29] A. Oppenheim and R. W. Schafer, "Digital Signal Processing", 2nd ed. Prentice-Hall Inc., Englewood Cliffs, NJ, 1993.

[30] R. M. M. Derkx, G. P. M. Engelmeers, and P. C. W. Sommen, "New constraining method for partitioned block frequency-domain adaptive filters", IEEE Trans. Signal Process., vol. 50, no. 3, pp. 2177-2186, 2002.