

Title:
HEARING APPARATUS
Document Type and Number:
WIPO Patent Application WO/2016/156595
Kind Code:
A1
Abstract:
Method of operating a hearing apparatus (1) and hearing apparatus (1), comprising at least one of a first microphone (4) and a second microphone (5) which generate a first microphone signal (yL) and a second microphone signal (yR) respectively, the first microphone (4) and the second microphone (5) being arranged in at least one of a first hearing device (2) and a second hearing device (3), a third microphone (11) which generates a third microphone signal (z), the third microphone (11) being arranged in an external device (10), and a signal processing unit (14), wherein in the signal processing unit (14) the third microphone signal (z) and at least one of the first microphone signal (yL) and the second microphone signal (yR) are processed together, thereby producing an output signal (zenh) with an enhanced signal to noise ratio compared to the first microphone signal (yL) and/or the second microphone signal (yR).

Inventors:
KAMKAR-PARSI HOMAYOUN (DE)
PUDER HENNING (DE)
YEE DIANNA (DE)
Application Number:
PCT/EP2016/057271
Publication Date:
October 06, 2016
Filing Date:
April 01, 2016
Assignee:
SIVANTOS PTE LTD (SG)
KAMKAR-PARSI HOMAYOUN (DE)
PUDER HENNING (DE)
YEE DIANNA (DE)
International Classes:
H04R25/00
Domestic Patent References:
WO2008098590A1 (2008-08-21)
WO2007106399A2 (2007-09-20)
Foreign References:
US20120020503A1 (2012-01-26)
US20140050326A1 (2014-02-20)
EP2840807A1 (2015-02-25)
EP2161949A2 (2010-03-10)
Other References:
BOOTHROYD, A.: "Hearing Aid Accessories for Adults: The Remote FM Microphone", EAR AND HEARING, vol. 25, no. 1, 2004, pages 22 - 33
HAWKINS, D.: "Comparisons of Speech Recognition in Noise by Mildly-to-Moderately Hearing-Impaired Children Using Hearing Aids and FM Systems", JOURNAL OF SPEECH AND HEARING DISORDERS, vol. 49, 1984, pages 409 - 418
PITTMAN, A.; LEWIS, D.; HOOVER, B.; STELMACHOWICZ, P.: "Recognition Performance for Four Combinations of FM System and Hearing Aid Microphone Signals in Adverse Listening Conditions", EAR AND HEARING, vol. 20, no. 4, 1999, pages 279
BERTRAND, A.; MOONEN, M.: "Robust Distributed Noise Reduction in Hearing Aids with External Acoustic Sensor Nodes", EURASIP, vol. 20, no. 4, 1999, pages 279
Attorney, Agent or Firm:
FDST PATENTANWÄLTE (Nürnberg, DE)
Claims:
Claims

1. Hearing apparatus (1), comprising:

at least one of a first microphone (4) and a second microphone (5) which generate a first microphone signal (yL) and a second microphone signal (yR) respectively, the first microphone (4) and the second microphone (5) being arranged in at least one of a first hearing device (2) and a second hearing device (3),

a third microphone (11) which generates a third microphone signal (z), the third microphone (11) being arranged in an external device (10), and

a signal processing unit (14),

wherein in the signal processing unit (14) the third microphone signal (z) and at least one of the first microphone signal (yL) and the second microphone signal (yR) are processed together, thereby producing an output signal (zenh) with an enhanced signal to noise ratio compared to the first microphone signal (yL) and/or the second microphone signal (yR).

2. Hearing apparatus (1) as claimed in claim 1,

wherein the external device (10) is one of a mobile device, a smart phone, an acoustic sensor and an acoustic sensor element being part of an acoustic sensor network.

3. Hearing apparatus (1) as claimed in one of the preceding claims, wherein the output signal (zenh) is coupled into an output coupler (16) of at least one of the first hearing device (2) and the second hearing device (3) for generating an acoustic output signal.

4. Hearing apparatus (1) as claimed in one of the preceding claims, wherein the first hearing device (2) and the second hearing device (3) are each embodied as an in-the-ear hearing device, in particular as a completely-in-canal hearing device.

5. Hearing apparatus (1) as claimed in one of the preceding claims, wherein the first hearing device (2) comprises the first microphone (4) and wherein the second hearing device (3) comprises the second microphone (5).

6. Hearing apparatus (1) as claimed in one of the preceding claims, wherein the signal processing unit (14) comprises an adaptive noise canceller unit (17), into which the third microphone signal (z) and at least one of the first microphone signal (yL) and the second microphone signal (yR) are fed and further combined to obtain the output signal (zenh).

7. Hearing apparatus (1) as claimed in claim 6,

wherein in the adaptive noise canceller unit (17) at least one of the first microphone signal (yL) and the second microphone signal (yR) is preprocessed to yield a noise reference signal (nBM) and the third microphone signal (z) is combined with the noise reference signal (nBM) to obtain the output signal (zenh).

8. Hearing apparatus (1) as claimed in claim 7,

wherein in the adaptive noise canceller unit (17) the first microphone signal (yL) and the second microphone signal (yR) are combined to yield the noise reference signal (nBM).

9. Hearing apparatus (1) as claimed in claim 8,

wherein the adaptive noise canceller unit (17) further comprises a target equalization unit (20), in which the first microphone signal (yL) and the second microphone signal (yR) are equalized with regard to target location components and wherein the equalized first microphone signal (yL,EQ) and the equalized second microphone signal (yR,EQ) are combined to yield the noise reference signal (nBM).

10. Hearing apparatus (1) as claimed in one of the claims 6 to 9,

wherein the adaptive noise canceller unit (17) further comprises a comparing device (21) in which the first microphone signal (yL) and the second microphone signal (yR) are compared for target speech detection, the comparing device (21) generating a control signal (spVAD) for controlling the adaptive noise canceller unit (17), in particular such that the adaptive noise canceller unit (17) is adapting only during the absence of target speech activity.

11. Hearing apparatus (1) as claimed in one of the claims 6 to 10,

wherein the signal processing unit (14) further comprises a calibration unit (15) and/or an equalization unit (16), wherein the third microphone signal (z) and at least one of the first microphone signal (yL) and the second microphone signal (yR) are fed into the calibration unit (15) for a group delay compensation and/or into the equalization unit (16) for a level and phase compensation, and wherein the compensated microphone signals are fed into the adaptive noise canceller unit (17).

Description:
Description

Hearing Apparatus

The invention relates to a hearing apparatus and to a method for operating a hearing apparatus. The hearing apparatus particularly comprises at least one of a first microphone and a second microphone, the first and the second microphone being arranged in at least one of a first hearing device and a second hearing device. The hearing apparatus further comprises a third microphone arranged in an external device, particularly in a cell phone, in a smart phone or in an acoustic sensor network. More specifically, the hearing apparatus comprises a first hearing device and a second hearing device which are interconnected to form a binaural hearing device.

A hearing apparatus using one or more external microphones to enable a directional effect even when using omnidirectional microphones is disclosed, for example, in EP 2 161 949 A2.

It is an object of the invention to specify a hearing apparatus as well as a method of operating a hearing apparatus which enable an improvement of the signal to noise ratio of the audio signal to be output to the user.

According to the invention, the object is achieved with a hearing apparatus comprising at least one of a first microphone and a second microphone which generate a first microphone signal and a second microphone signal, respectively, the first microphone and the second microphone being arranged in at least one of a first hearing device and a second hearing device, a third microphone which generates a third microphone signal, the third microphone being arranged in an external device (i.e. an external microphone), and a signal processing unit, wherein in the signal processing unit the third microphone signal and at least one of the first microphone signal and the second microphone signal are processed together and/or combined to an output signal with an enhanced signal to noise ratio (SNR) compared to the first microphone signal and/or the second microphone signal. Particularly, the hearing devices are embodied as hearing aids, and in the following description reference is therefore often made to hearing aids for simplification.

For a given noise scenario, strategic placement of external microphones can offer spatial information and a better signal to noise ratio than the hearing aid signals generated by the internal microphones. Nearby microphones can take advantage of the body of the hearing aid user attenuating noise signals. For example, when the external microphone is placed in front of and close to the body of the hearing aid user, the body shields noise coming from the back direction such that the external microphone picks up a more attenuated noise signal than the hearing aids do. This is referred to as the body-shielding effect. The external microphone signals that benefit from the body-shielding effect are then combined with the signals of the hearing aids for hearing aid signal enhancement.

External microphones, i.e. microphones not arranged in a hearing device, are currently mainly used as hearing aid accessories; however, their signals are not combined with the hearing aid signals for further enhancement. Current applications simply stream the external microphone signals to the hearing aids. Common applications include classroom settings where the target speaker, such as the teacher, wears an FM microphone and the hearing aid user listens to the streamed FM microphone signal. See, for example, Boothroyd, A., "Hearing Aid Accessories for Adults: The Remote FM Microphone", Ear and Hearing, 25(1): 22 - 33, 2004; Hawkins, D., "Comparisons of Speech Recognition in Noise by Mildly-to-Moderately Hearing-Impaired Children Using Hearing Aids and FM Systems", Journal of Speech and Hearing Disorders, 49: 409 - 418, 1984; Pittman, A., Lewis, D., Hoover, B., Stelmachowicz, P., "Recognition Performance for Four Combinations of FM System and Hearing Aid Microphone Signals in Adverse Listening Conditions", Ear and Hearing, 20(4): 279, 1999.

There is also a growing research interest in using wireless acoustic sensor networks (WASNs) for signal estimation or parameter estimation in hearing aid algorithms; however, the application of WASNs focuses on the placement of microphones near the targeted speaker or near noise sources to yield estimates of the targeted speaker or noise. See, for example, Bertrand, A., Moonen, M., "Robust Distributed Noise Reduction in Hearing Aids with External Acoustic Sensor Nodes", EURASIP, 20(4): 279, 1999.

According to a preferred embodiment of the invention the hearing apparatus comprises a left hearing device and a right hearing device which are interconnected to form a binaural hearing device. Particularly, a binaural communication link between the right and the left hearing device is established to exchange or transmit audio signals between the hearing devices. Advantageously, the binaural communication link is a wireless link. More preferably, all microphones used in the hearing apparatus are connected by a wireless communication link.

Preferably, the external device is one of a mobile device (e.g. a portable computer), a smart phone, an acoustic sensor and an acoustic sensor element being part of an acoustic sensor network. A mobile phone or a smart phone can be strategically placed in front of the hearing device user to receive direct signals from a front target speaker, or, when worn in a pocket, is already in an excellent position during a conversation with a front target speaker. Wireless acoustic sensor networks are used in many different technical applications including hands-free telephony in cars or video conferences, acoustic monitoring and ambient intelligence.

According to yet another preferred embodiment the output signal is coupled into an output coupler of at least one of the first hearing device and the second hearing device for generating an acoustic output signal. According to this embodiment the hearing device user receives, via the output coupler or receiver of his or her hearing device, the enhanced audio signal which is output by the signal processing unit using the external microphone signal.

The signal processing unit is not necessarily located within one of the hearing devices. The signal processing unit may also be a part of an external device. Particularly, the signal processing is executed within the external device, e.g. a mobile computer or a smart phone, and is part of a particular software application which can be downloaded by the hearing device user.

As already mentioned, the hearing device is, for example, a hearing aid. According to yet another advantageous embodiment the hearing device is embodied as an in-the-ear (ITE) hearing device, in particular as a completely-in-canal (CIC) hearing device. Preferably, each of the used hearing devices comprises one single omnidirectional microphone. Accordingly, the first hearing device comprises the first microphone and the second hearing device comprises the second microphone. However, the invention does also cover embodiments where a single hearing device, particularly a single hearing aid, comprises a first and a second microphone.

In another preferred embodiment of the invention the signal processing unit comprises an adaptive noise canceller unit, into which the third microphone signal and at least one of the first microphone signal and the second microphone signal are fed and further combined to obtain an enhanced output signal. The third microphone signal is particularly used like a beamformed signal to enhance the signal to noise ratio by spatial filtering. Due to its strategic placement, the third microphone signal as such exhibits a natural directivity.

Advantageously, within the adaptive noise canceller unit at least one of the first microphone signal and the second microphone signal is preprocessed to yield a noise reference signal and the third microphone signal is combined with the noise reference signal to obtain the output signal. The first and/or the second microphone signal are specifically used for noise estimation due to the aforementioned body-shielding effect.

Preferably, in the adaptive noise canceller unit the first microphone signal and the second microphone signal are combined to yield the noise reference signal. Particularly, a difference signal of the first microphone signal and the second microphone signal is formed. In case of a front speaker and a binaural hearing apparatus comprising a left and a right microphone, the difference signal can be regarded as an estimation of the noise signal.

According to yet another preferred embodiment of the invention the adaptive noise canceller unit further comprises a target equalization unit, in which the first microphone signal and the second microphone signal are equalized with regard to target location components and wherein the equalized first microphone signal and the equalized second microphone signal are combined to yield the noise reference signal. Assuming a known target direction, according to a preferred embodiment a delay can simply be added to one of the signals. When a target direction of 0° is assumed (i.e. a front speaker), the left and the right microphone signals of a binaural hearing device are approximately equal due to symmetry.

Preferably, the adaptive noise canceller unit further comprises a comparing device in which the first microphone signal and the second microphone signal are compared for target speech detection, the comparing device generating a control signal for controlling the adaptive noise canceller unit, in particular such that the adaptive noise canceller unit is adapting only during the absence of target speech activity. This embodiment has the particular advantage of preventing target signal cancellation due to target speech leakage.

According to another advantageous embodiment the signal processing unit further comprises a calibration unit and/or an equalization unit, wherein the third microphone signal and at least one of the first microphone signal and the second microphone signal are fed into the calibration unit for a group delay compensation and/or into the equalization unit for a level and phase compensation, and wherein the compensated microphone signals are fed into the adaptive noise canceller unit. With the implementation of a calibration unit and/or an equalization unit, differences between the internal microphone signals and between the internal and external microphone signals in delay time, phase and/or level are compensated.

The invention exploits the benefits of the body-shielding effect in an external microphone for hearing device signal enhancement. The external microphone is particularly placed close to the body for attenuating the back directional noise signal. The benefit of the body-shielding effect is particularly useful for single microphone hearing aid devices, such as completely-in-canal (CIC) hearing aids, where attenuation of back directional noise at 180° is not feasible. When using only microphones of the hearing aid system, differentiation between the front (0°) and back (180°) locations is difficult due to the symmetry that exists along the median plane of the body. Unlike the hearing aids, the external microphone benefitting from the body-shielding effect does not suffer from this front-back ambiguity, as back directional noise is attenuated. The signals of the hearing aid microphones can thereby be enhanced to reduce back directional noise by combining the signals of the hearing aids with the external microphone signal.

The invention particularly offers additional signal enhancement to the hearing device signals instead of simply streaming the external microphone signal. The signal enhancement is provided through combining the signals of the hearing aid with the external microphone. The placement of the external microphone exploits the body-shielding effect, where the microphone is near the hearing aid user. Unlike in wireless acoustic sensor networks, the microphone is not placed near the targeted speaker or noise sources.

Further details and advantages of the invention become apparent from the subsequent explanation of several embodiments on the basis of the schematic drawings, not limiting the invention. In the drawings:

Fig. 1 shows a possible setup of an external microphone benefiting from the body-shielding effect,

Fig. 2 shows a setup with hearing aids and a smartphone microphone, target and interfering speakers,

Fig. 3 depicts an overview of a signal combination scheme and

Fig. 4 shows a more detailed view of an adaptive noise cancellation unit.

Fig. 1 shows an improved hearing apparatus 1 comprising a first, left hearing device 2 and a second, right hearing device 3. The first, left hearing device 2 comprises a first, left microphone 4 and the second, right hearing device 3 comprises a second, right microphone 5. The first hearing device 2 and the second hearing device 3 are interconnected and form a binaural hearing device 6 for the hearing device user 7. At 0° a front target speaker 8 is located. At 180° an interfering speaker 9 is located. A smartphone 10 with a third, external microphone 11 is placed between the hearing device user 7 and the front target speaker 8. Behind the user 7 a zone 12 of back directional attenuation exists due to the body-shielding effect. When using the internal microphones 4, 5 of the hearing device 6, differentiation between the front (0°) and back (180°) locations is difficult due to the symmetry that exists along the median plane of the body. The external microphone 11 benefitting from the body-shielding effect does not suffer from this front-back ambiguity as back directional noise is attenuated. The signals of the hearing device microphones 4, 5 can thereby be enhanced to reduce back directional noise by combining the signals of the hearing device microphones 4, 5 with the signal of the external microphone 11.

Fig. 2 depicts a scenario that is slightly different from the scenario shown in Fig. 1. An interfering speaker 9 is located at a direction of 135°. The third, external microphone 11, in the following also referred to as EMIC, of a smart phone 10 is placed between the hearing device user 7 and a front target speaker 8. The hearing devices 2, 3 are, for example, completely-in-canal (CIC) hearing aids (HA) which have one microphone 4, 5 in each device. The overall hearing apparatus 1 consists of three microphones 4, 5, 11.

Let y " L,raw (t), y R raw (t) and z raW (t) denote the microphone signals received at the left and right hearing device 2, 3 and at the third external microphone 1 1 respectively at the discrete time sample t. The subband representation of these signals are indexed with k and n where k refers to the k th subband frequency at subband time index n. Before combining the microphone signals between the two devices 2, 3, hardware calibration is needed to match the microphone characteristics of the external microphone 1 1 to the microphones 4, 5 of the hearing devices 2, 3. In the examplary approach, the external microphone 1 1 (EMIC) is calibrated to match one of the internal microphones 4, 5 which serves as a reference microphone. The calibrated EMIC signal is denoted by z ca iib- In this embodiment, the calibration is first completed before applying further processing on the EMIC signal.

To calibrate for differences in the devices, the group delay and microphone characteristics inherent to the devices have to be considered. The audio delay due to analog-to-digital conversion and audio buffers is likely to be different between the external device 10 and the hearing devices 2, 3, thus requiring care in compensating for this difference in time delay. The group delay of the process between the input signal being received by an internal hearing device microphone 4, 5 and the output signal at a hearing aid receiver (speaker) is orders of magnitude smaller than in complicated devices like smartphones. Preferably, the group delay of the external device 10 is first measured and then compensated if needed. To measure the group delay of the external device 10, one can simply estimate the group delay of the transfer function which the input microphone signal undergoes as it is transmitted as an output of the system. In the case of a smart phone 10, the input signal is the front microphone signal and the output is obtained through the headphone port. To compensate for the group delay, according to a preferred embodiment yL,raw and yR,raw are delayed by the measured group delay of the EMIC device. The delayed signals are denoted by yL and yR respectively.
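A minimal sketch of the group delay compensation described above, assuming the group delay of the external device has already been measured; the delay value, sampling rate and signal placeholders are illustrative assumptions, not values from the patent.

```python
import numpy as np

def delay_by(x, d):
    """Delay x by d samples (zero-padded at the start, length preserved)."""
    return np.concatenate([np.zeros(d), x[:-d]]) if d > 0 else x.copy()

fs = 16000
D_emic = 480                                   # assumed measured group delay (30 ms at 16 kHz)
yL_raw, yR_raw = np.random.randn(fs), np.random.randn(fs)   # placeholder HA signals
yL = delay_by(yL_raw, D_emic)                  # delayed signals used in further processing
yR = delay_by(yR_raw, D_emic)
```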

After compensating for different device latencies, it is recommended to use an equalization filter (EQ) which compensates for level and phase differences in the microphone characteristics. The EQ filter is applied to match the EMIC signal to either yL or yR, which serves as a reference denoted as yref. The EQ filter coefficients, hcal, are calculated off-line and then applied during online processing. To calculate these weights off-line, a recording of a white noise signal is first made where the reference microphone and the EMIC are held in roughly the same location in free field. A least-squares approach is then taken to estimate the relative transfer function from the input zraw to the output yref(k, n) by minimizing the cost function:

arg min_{hcal(k)} E[|ecal(k)|²] = E[|yref(k, n) - hcal(k)^H zraw(k, n)|²],

where zraw(k, n) is a vector of the current and past Lcal - 1 values of zraw(k, n) and Lcal is the length of hcal(k).
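One possible realization of this least-squares calibration is sketched below, solving the stated cost function per subband with an ordinary least-squares fit; the filter length, helper names and toy data are assumptions of this sketch, not the patent's implementation.

```python
import numpy as np

def estimate_cal_filter(z_sub, yref_sub, L_cal=4):
    """Least-squares estimate, per subband, of a filter mapping the EMIC
    subband signal onto the reference HA subband signal (white-noise
    calibration recording). Shapes: (K subbands, N frames), complex."""
    K, N = z_sub.shape
    w = np.zeros((K, L_cal), dtype=complex)
    for k in range(K):
        # columns: current and past L_cal-1 values of zraw(k, n)
        Z = np.stack([np.roll(z_sub[k], d) for d in range(L_cal)], axis=1)
        Z[:L_cal - 1] = 0                       # drop frames affected by wrap-around
        w[k], *_ = np.linalg.lstsq(Z, yref_sub[k], rcond=None)
    return w                                     # note: h_cal(k) = conj(w[k])

def apply_cal_filter(z_sub, w):
    """Calibrated EMIC signal z_calib(k, n) = h_cal(k)^H zraw(k, n)."""
    K, L_cal = w.shape
    z_calib = np.zeros_like(z_sub)
    for k in range(K):
        Z = np.stack([np.roll(z_sub[k], d) for d in range(L_cal)], axis=1)
        Z[:L_cal - 1] = 0
        z_calib[k] = Z @ w[k]
    return z_calib

# toy usage with random complex subband data
K, N = 257, 400
z_sub = np.random.randn(K, N) + 1j * np.random.randn(K, N)
yref_sub = np.roll(z_sub, 1, axis=1)             # pretend the reference is a delayed EMIC
w = estimate_cal_filter(z_sub, yref_sub)
z_calib = apply_cal_filter(z_sub, w)
```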

After calibration, a strategic location of the external microphone 11 (EMIC) is considered in an exemplary study. For signal enhancement, locations have been explored where the EMIC has a better SNR than the signals of the internal microphones 4, 5. The focus was on the scenario shown in Fig. 2, where the external microphone 11 is centered and in front of the body of the hearing device user 7 at a distance of 20 cm, which is a typical distance for smartphone usage. The target speaker 8 is located at 0° while the location of the noise interferer 9 is varied along a 1 m radius circle around the hearing device user 7. The location of the speech interferer 9 is varied in 45° increments and each location has a unique speech interferer 9 with different sound levels. The SNR of the EMIC and the CIC hearing aids 2, 3 are then compared when a single speech interferer 9 is active along with the target speaker 8. As a result, it was shown that the raw EMIC signal has a higher SNR than the raw hearing aid signal when the noise interferer 9 is coming from angles in the range of 135° to 225°. Additionally, it was shown that the SNR of the EMIC has similar performance to a signal processed using an adaptive first order differential beamformer (FODBF) realized on a two-microphone behind-the-ear (BTE) hearing device. It should be noted that the FODBF cannot be realized on single microphone hearing aid devices such as the CICs since the FODBF would require at least two microphones in each device. Therefore, the addition of an external microphone 11 can lead to possibilities in attenuating noise coming from the back direction for single microphone hearing aid devices 2, 3.

The following exemplary embodiment presents a combination scheme using a Generalized Sidelobe Canceller (GSC) structure for creating an enhanced binaural signal using the three microphones according to a scenario shown in Fig. 1 or Fig. 2, assuming a binaural link between the two hearing devices 2, 3. An ideal data transmission link between the external microphone 11 (EMIC) and the hearing devices 2, 3 with synchronous sampling is also assumed.

For combining the three microphone signals, a variant of a GSC structure is considered. A GSC beamformer is composed of a fixed beamformer, a blocking matrix (BM) and an adaptive noise canceller (ANC). The overall combination scheme is shown in Fig. 3, where hardware calibration is first performed on the signal of the external microphone, followed by a GSC combination scheme for noise reduction, resulting in an enhanced mono signal referred to as zenh. Accordingly, the signal processing unit 14 comprises a calibration unit 15 and an equalization unit 16. The output signals of the calibration unit 15 and the equalization unit 16 are then fed to a GSC-type processing unit 17, which is further referred to as an adaptive noise canceller unit comprising the ANC.

Analogous to the fixed beamformer of a GSC, the EMIC signal is used in place of the beamformed signal due to its body-shielding benefit. The BM combines the signals of the hearing device pair to yield a noise reference. The ANC is realized using a normalized least mean squares (NLMS) filter. The GSC structure or the structure of the adaptive noise canceller unit 17, respectively, is shown in Fig. 4 and is implemented in the subband domain. The blocking matrix BM is denoted with reference numeral 18. The ANC is denoted with reference numeral 19.

The scheme used for the BM becomes apparent in Figure 4, where yL,EQ and yR,EQ refer to the left and right hearing device signals after target equalization (in the target equalization unit 20) and nBM refers to the noise reference signal. Assuming a known target direction, the target equalization unit 20 equalizes the target speech components in the HA pair. In practice, a causality delay is added to the reference signal to ensure a causal system. For example, if yL is chosen as the reference signal for target EQ, then

yL,EQ(k, n) = yL(k, n - DTAREQ), where DTAREQ is the causality delay added. Then yR is filtered such that the target signals are matched to yL,EQ:

yR,EQ(k, n) = hTAREQ(k)^H yR(k, n), where yR(k, n) is a vector of the current and past LTAREQ - 1 values of yR and LTAREQ is the length of hTAREQ. The noise reference nBM(k, n) is then given by

nBM(k, n) = yL,EQ(k, n) - yR,EQ(k, n).
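A small sketch of this blocking-matrix step, assuming the target equalization has already been applied to the two hearing device subband signals; the toy dimensions and data are placeholders of this example only.

```python
import numpy as np

def blocking_matrix(yL_eq, yR_eq):
    """Noise reference from the target-equalized HA subband signals: the
    target components are (approximately) equal after equalization, so the
    difference suppresses the target and retains the noise."""
    return yL_eq - yR_eq              # nBM(k, n) = yL,EQ(k, n) - yR,EQ(k, n)

# toy usage with random complex subband data (for a frontal target the raw
# left/right signals can be used directly, as discussed below)
K, N = 257, 200
yL_eq = np.random.randn(K, N) + 1j * np.random.randn(K, N)
yR_eq = np.random.randn(K, N) + 1j * np.random.randn(K, N)
n_bm = blocking_matrix(yL_eq, yR_eq)
```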

In practice, an assumption of a zero degree target location is commonly used in HA applications. This assumes that the hearing device user wants to hear sound that is coming from the centered front, which is natural as one tends to face the desired speaker during conversation. When a target direction of 0° is assumed, the left and right hearing device target speaker signals are approximately equal due to symmetry. In this case, target equalization is not crucial and the following assumptions are made:

yL,EQ(k, n) ≈ yL(k, n) and yR,EQ(k, n) ≈ yR(k, n).

The ANC is implemented with a subband NLMS algorithm. The purpose of the ANC is to estimate and remove the noise in the EMIC signal, zcalib. The result is an enhanced EMIC signal. One of the inputs of the ANC is nBM(k, n), a vector of length LANC containing the current and LANC - 1 past values of nBM. A causality delay, D, is introduced to zcalib to ensure a causal system:

d(k, n) = zcalib(k, n - D),

where d(k, n) is the primary input to the NLMS.

zenh(k, n) = e(k, n) = d(k, n) - hANC(k, n)^H nBM(k, n), and the filter coefficient vector, hANC(k, n), is updated by

hANC(k, n+1) = hANC(k, n) + μ(k) nBM(k, n) e*(k, n) / (nBM(k, n)^H nBM(k, n) + δ(k)),

where μ(k) is the NLMS step size. The regularization factor δ(k) is calculated by δ(k) = α Pz(k), where Pz(k) is the average power of the EMIC microphone noise after calibration and α is a constant scalar. It was found that α = 1.5 was sufficient for avoiding division by zero during the above calculation.
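One way to realize such a subband NLMS adaptive noise canceller is sketched below. The filter length, step size, and the use of the overall calibrated EMIC power as a simple stand-in for Pz(k) are assumptions of this example rather than values from the disclosure.

```python
import numpy as np

def anc_nlms(z_calib, n_bm, L_anc=8, mu=0.5, alpha=1.5, D=2, adapt=None):
    """Subband NLMS adaptive noise canceller: the noise reference nBM is
    filtered and subtracted from the causality-delayed calibrated EMIC
    signal. z_calib, n_bm: complex arrays of shape (K, N). adapt: optional
    boolean array of length N; the filter adapts only where adapt[n] is True."""
    K, N = z_calib.shape
    h = np.zeros((K, L_anc), dtype=complex)
    z_enh = np.zeros_like(z_calib)
    # regularization delta(k) = alpha * average EMIC power per subband
    # (taken here over the whole calibrated signal as a simple stand-in)
    delta = alpha * np.mean(np.abs(z_calib) ** 2, axis=1) + 1e-12
    for n in range(max(L_anc, D), N):
        u = n_bm[:, n - L_anc + 1:n + 1][:, ::-1]   # current and past nBM values
        d = z_calib[:, n - D]                       # primary input d(k, n)
        e = d - np.sum(np.conj(h) * u, axis=1)      # e = d - h^H nBM
        z_enh[:, n] = e                             # enhanced EMIC signal zenh
        if adapt is None or adapt[n]:               # adapt only when allowed
            norm = np.sum(np.abs(u) ** 2, axis=1) + delta
            h += mu * u * np.conj(e)[:, None] / norm[:, None]
    return z_enh

# toy usage with random complex subband data
K, N = 257, 200
z_calib = np.random.randn(K, N) + 1j * np.random.randn(K, N)
n_bm = np.random.randn(K, N) + 1j * np.random.randn(K, N)
z_enh = anc_nlms(z_calib, n_bm)
```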

To prevent target signal cancellation due to target speech leakage in nBM, the NLMS filter is controlled such that it is adapted only during the absence of target speech activity. The target speech activity is determined by comparing, in a comparing device 21 (see Fig. 4), a power ratio to a threshold Tk. The power ratio considers the average power of the difference of the HA signals over the average power of their sum. When target speech is active, the numerator of this ratio is less than the denominator. This is due to the equalization of the target signal components between the HA pair, so that the subtraction leads to cancellation of the target signal. The noise components, generated by interferers as point sources, are uncorrelated and would not cancel; the power of the difference and the power of the sum of the noise components would be roughly the same. When the ratio is less than a predetermined threshold, Tk, target activity is present.
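A rough sketch of this adaptation control: the ratio of the average power of the difference of the HA signals to that of their sum is compared with a threshold, and adaptation is frozen while target speech is detected. The broadband threshold and averaging window are illustrative simplifications; the disclosure compares the ratio to a per-subband threshold Tk.

```python
import numpy as np

def target_speech_active(yL_eq, yR_eq, n, win=10, threshold=0.3):
    """Ratio of the average power of the HA-signal difference to that of the
    sum over the last `win` frames; a small ratio means the equalized target
    components cancel in the difference, i.e. target speech is active."""
    sl = slice(max(0, n - win + 1), n + 1)
    p_diff = np.mean(np.abs(yL_eq[:, sl] - yR_eq[:, sl]) ** 2)
    p_sum = np.mean(np.abs(yL_eq[:, sl] + yR_eq[:, sl]) ** 2) + 1e-12
    return (p_diff / p_sum) < threshold

# control signal spVAD: freeze NLMS adaptation whenever target speech is active
# (the resulting `adapt` flags can be passed to the ANC sketch above)
K, N = 257, 200
yL_eq = np.random.randn(K, N) + 1j * np.random.randn(K, N)
yR_eq = np.random.randn(K, N) + 1j * np.random.randn(K, N)
adapt = np.array([not target_speech_active(yL_eq, yR_eq, n) for n in range(N)])
```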

Using separate speech and noise recordings, the Hagerman method for evaluating noise reduction algorithms is used to evaluate the effect of GSC processing on the speech and noise separately. The target speech and noise signals are denoted with the subscripts s and n respectively to differentiate between target speech and noise. Let s(k, n) denote the vector of target speech signals and n(k, n) denote the vector of noise signals, where s(k, n) = [yL,s(k, n), yR,s(k, n), zs(k, n)] and

n(k, n) = [yL,n(k, n), yR,n(k, n), zn(k, n)]. We then define two vectors of input signals on which GSC processing is performed, ain(k, n) = s(k, n) + n(k, n) and bin(k, n) = s(k, n) - n(k, n). The resulting processed outputs are denoted by aout(k, n) and bout(k, n) respectively. The output of the GSC processing is the enhanced EMIC signal as shown in Figure 3. The processed target speech signal is estimated using zenh,s(k, n) = 0.5(aout(k, n) + bout(k, n)) and the processed noise signal is estimated using zenh,n(k, n) = 0.5(aout(k, n) - bout(k, n)). Following the setup in Figure 2, the GSC method is tested in various back directional noise scenarios. Using the separately processed signals, zenh,s(k, n) and zenh,n(k, n), the true SNR values of the GSC enhanced signals and raw microphone signals are calculated in decibels and summarized in Table 1. The segmental SNR is calculated in the time domain using a block size of 30 ms and 50% overlap.
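A sketch of this Hagerman-style evaluation and of the segmental SNR computation is given below; `process` stands for any (approximately linear) enhancement function, and the block parameters follow the 30 ms / 50% overlap stated above. The function names and toy data are assumptions of this example.

```python
import numpy as np

def hagerman_split(process, s, n):
    """Process the sum and the difference of separately recorded speech (s)
    and noise (n); half-sum/half-difference of the two outputs recovers the
    processed speech and noise components (valid for near-linear processing)."""
    a_out = process(s + n)             # a_in = s + n
    b_out = process(s - n)             # b_in = s - n
    return 0.5 * (a_out + b_out), 0.5 * (a_out - b_out)   # (z_enh,s, z_enh,n)

def segmental_snr(speech, noise, fs=16000, block_ms=30, overlap=0.5):
    """Time-domain segmental SNR in dB, 30 ms blocks with 50% overlap."""
    blk = int(fs * block_ms / 1000)
    hop = int(blk * (1 - overlap))
    snrs = []
    for start in range(0, len(speech) - blk + 1, hop):
        ps = np.sum(speech[start:start + blk] ** 2) + 1e-12
        pn = np.sum(noise[start:start + blk] ** 2) + 1e-12
        snrs.append(10 * np.log10(ps / pn))
    return float(np.mean(snrs))

# toy usage with an identity "enhancement" and white-noise placeholders
s, n = np.random.randn(16000), np.random.randn(16000)
z_s, z_n = hagerman_split(lambda x: x, s, n)
print(segmental_snr(z_s, z_n))
```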

Table 1: Measures of GSC performance in dB.

Interferer location | SNR of yL | SNR of yR | SNR of zcalib | SNR of zenh | Pn,red | Ps,dist
135°                | 7.2       | 0.9       | 10.8          | 15.2        | 18.2   | 4.2
180°                | 5.5       | 5.0       | 11.2          | 11.2        | 28.5   | 1.3e-2
225°                | 5.3       | 7.9       | 13.9          | 16.9        | 19.0   | 3.1
135° & 225°         | 3.1       | 0.1       | 9.1           | 9.9         | 21.5   | 0.8

Comparing the SNR of the calibrated external microphone signal to the HA pair, it is clear that the EMIC provides a significant SNR improvement. Without GSC processing, strategic placement of the EMIC resulted on average in at least 5 dB of SNR improvement compared to the raw CIC microphone signal of the better ear. The result of GSC processing leads to a further enhancement of at least 2 dB on average when there are noise interferers located at 135° or 225°.

In addition to the SNR, the speech distortion and the noise reduction are also evaluated in the time domain to quantify the extent of speech deformation and noise reduction resulting from GSC processing. The speech distortion, Ps,dist, is estimated by comparing ds, the target speech signal in d prior to GSC processing, with the enhanced signal zenh,s over M frames of N samples. N is chosen to correspond to 30 ms of samples and the frames have an overlap of 50%. The noise reduction, Pn,red, is estimated as a logarithmic (10 log10) power ratio computed over the same frames, where dn refers to the noise signal in d. These measurements are expressed in decibels and are also shown in Table 1.
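A generic frame-based formulation of such distortion and noise-reduction measures, in decibels, could look as follows; the particular pairing of signals in the trailing lines is an assumption of this sketch and not necessarily the formulation used in the study.

```python
import numpy as np

def frame_log_ratio(num, den, fs=16000, block_ms=30, overlap=0.5):
    """Mean over frames of 10*log10(frame power of num / frame power of den), in dB."""
    blk = int(fs * block_ms / 1000)
    hop = int(blk * (1 - overlap))
    vals = []
    for start in range(0, len(num) - blk + 1, hop):
        p_num = np.sum(num[start:start + blk] ** 2) + 1e-12
        p_den = np.sum(den[start:start + blk] ** 2) + 1e-12
        vals.append(10 * np.log10(p_num / p_den))
    return float(np.mean(vals))

# Assumed pairing (illustrative only):
#   speech distortion  P_s,dist ~ frame_log_ratio(d_s - z_enh_s, d_s)
#   noise reduction    P_n,red  ~ frame_log_ratio(d_n, z_enh_n)
d_s, z_enh_s = np.random.randn(16000), np.random.randn(16000)   # placeholders
print(frame_log_ratio(d_s - z_enh_s, d_s))
```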

External microphones have been proven to be a useful hearing device accessory when placed in a strategic location where they benefit from a high SNR. Addressing the inability of single-microphone binaural hearing devices to attenuate noise from the back direction, the invention leads to attenuation of back interferers due to the body-shielding effect. The presented GSC noise reduction scheme provides further enhancement of the EMIC signal for SNR improvement with minimal speech distortion.

List of references

1 Hearing apparatus
2 First, left hearing device
3 Second, right hearing device
4 First, left microphone
5 Second, right microphone
6 Binaural hearing device
7 Hearing device user
8 Front speaker
9 Interfering speaker
10 External device, e.g. a smartphone
11 Third, external microphone
12 Zone of attenuation
14 Signal processing unit
15 Calibration unit
16 Equalization unit
17 Adaptive noise canceller unit
18 Blocking matrix
19 Adaptive noise canceller
20 Target equalization unit
21 Comparing device