

Title:
VOICE ISOLATION SYSTEM
Document Type and Number:
WIPO Patent Application WO/2019/136475
Kind Code:
A1
Abstract:
The disclosure includes a voice isolation system comprising an acoustic echo-cancelation subsystem configured to receive a plurality of input signals, subtract an interference component from the input signals, and provide a plurality of output signals. The system also includes an adaptive beamformer subsystem configured to receive the plurality of output signals from the acoustic echo-cancelation subsystem and compute a signal-to-noise ratio enhanced signal based on the received output signals. The system also includes a residual noise suppressor subsystem configured to attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold. The system also includes an automatic gain control subsystem configured to process a signal outputted from the residual noise suppressor subsystem and transmit a resulting signal as an output signal.

Inventors:
WURTZ DAVID (US)
WURTZ MICHAEL (US)
KUMAR AMIT (US)
DOOLITTLE COLIN (US)
Application Number:
PCT/US2019/012767
Publication Date:
July 11, 2019
Filing Date:
January 08, 2019
Assignee:
AVNERA CORP (US)
International Classes:
G10L21/0208; G10K11/175; H04M9/08; H04R3/00; G10L21/0216; G10L25/78; H04R5/033
Domestic Patent References:
WO2018219582A12018-12-06
Foreign References:
US20110293103A12011-12-01
US20170263267A12017-09-14
EP1729492A22006-12-06
Other References:
BUCK M ET AL: "Chapter 8: Acoustic Array Processing for Speech Enhancement", 24 July 2009, HANDBOOK ON ARRAY PROCESSING AND SENSOR NETWORKS, WILEY-IEEE PRESS, HOBOKEN, NJ, PAGE(S) 231 - 268, ISBN: 978-0-470-37176-3, XP002610855
Attorney, Agent or Firm:
ROSS, Kevin, S. et al. (US)
Claims:
WE CLAIM:

1. A voice isolation system, comprising:

an acoustic echo-cancelation subsystem configured to receive a plurality of input signals, subtract an interference component from the input signals, and provide a plurality of output signals;

an adaptive beamformer subsystem configured to receive the plurality of output signals from the acoustic echo-cancelation subsystem and compute a signal-to-noise ratio (SNR) enhanced signal based on the received output signals;

a residual noise suppressor subsystem configured to attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold; and

an automatic gain control (AGC) subsystem configured to process a signal outputted from the residual noise suppressor subsystem and transmit a resulting signal as an output signal.

2. The voice isolation system of claim 1, wherein the plurality of input signals includes a headphone audio signal.

3. The voice isolation system of claim 2, further comprising a headphone configured to provide the headphone audio signal.

4. The voice isolation system of claim 1, wherein the plurality of input signals includes a feedforward signal and a feedback signal.

5. The voice isolation system of claim 4, further comprising:

a feedforward microphone configured to provide the feedforward signal; and

a feedback microphone configured to provide the feedback signal.

6. The voice isolation system of claim 5, further comprising a filter coupled with each of the microphones, wherein each filter is derived from a real-time estimate of signal, interference, and noise spectra for the corresponding microphone.

7. The voice isolation system of claim 1, further comprising a user voice activity detection (UVAD) subsystem configured to determine whether a user’s speech is present and provide a control signal based on the determination.

8. The voice isolation system of claim 7, wherein the residual noise suppressor subsystem is further configured to receive the control signal from the UVAD subsystem and attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based at least in part on the control signal.

9. The voice isolation system of claim 7, wherein the adaptive beamformer subsystem is further configured to receive the control signal from the UVAD subsystem and compute the SNR-enhanced signal based at least in part on the control signal.

10. The voice isolation system of claim 1, wherein the AGC subsystem is configured to process the signal outputted from the residual noise suppressor subsystem by attenuating the signal outputted from the residual noise suppressor subsystem with a root mean square (RMS) envelope below a predetermined threshold.

11. A method for voice isolation, said method comprising:

an acoustic echo-cancelation subsystem receiving a plurality of input signals;

the acoustic echo-cancelation subsystem generating a plurality of output signals by subtracting an interference component from the input signals;

an adaptive beamformer subsystem computing a signal-to-noise ratio (SNR) enhanced signal based at least in part on a plurality of output signals received from the acoustic echo-cancelation subsystem;

a residual noise suppressor subsystem attenuating at least one portion of the SNR enhanced signal received from the acoustic echo-cancelation subsystem based on the at least one portion having an SNR below a predetermined SNR threshold;

an automatic gain control (AGC) subsystem processing a signal outputted from the residual noise suppressor subsystem; and

the AGC subsystem transmitting a resulting signal as an output signal.

12. The method of claim 11, further comprising a user voice activity detection (UVAD) subsystem determining whether a user’s speech is present and providing a control signal based on the determination.

13. The method of claim 12, further comprising the adaptive beamformer subsystem computing the SNR-enhanced signal based at least in part on the control signal from the UVAD subsystem.

14. The method of claim 12, further comprising the residual noise suppressor subsystem attenuating the SNR enhanced signal received from the acoustic echo-cancelation subsystem based at least in part on the control signal from the UVAD subsystem.

15. The method of claim 11, wherein the plurality of input signals includes a feedforward signal, a feedback signal, and a headphone audio signal.

16. A headset, comprising:

one or more earphones including one or more sensing components;

one or more voice microphones to record a voice signal for voice transmission; and

a processor coupled to the earphones and the voice microphones, the processor configured to execute:

an acoustic echo-cancelation subsystem to receive a plurality of input signals, subtract an interference component from the input signals, and provide a plurality of output signals;

an adaptive beamformer subsystem to receive the plurality of output signals from the acoustic echo-cancelation subsystem and compute a signal-to-noise ratio (SNR) enhanced signal based on the received output signals;

a residual noise suppressor subsystem to attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold; and

an automatic gain control (AGC) subsystem to process a signal outputted from the residual noise suppressor subsystem and transmit a resulting signal as an output signal.

17. The headset of claim 16, wherein the processor is further configured to execute a user voice activity detection (UVAD) subsystem to determine whether a user’s speech is present and provide a control signal based on the determination.

18. The headset of claim 16, wherein the plurality of input signals includes a headphone audio signal.

19. The headset of claim 16, wherein the plurality of input signals includes a feedforward signal and a feedback signal.

20. The headset of claim 16, wherein the AGC subsystem is configured to process the signal outputted from the residual noise suppressor subsystem by attenuating the signal outputted from the residual noise suppressor subsystem with a root mean square (RMS) envelope below a predetermined threshold.

Description:
Voice Isolation System

TECHNICAL FIELD

[0001] Embodiments of the disclosed technology are generally directed to enhancements of a user speech signal as measured by a communication headset such as, for example, a noise-canceling earbud.

BACKGROUND

[0002] People often participate in voice calls in noisy environments. Some headsets have active noise cancelation capability but, while this may reduce the perceived noise level for the user, it does not reduce the noise level for the far-end listener(s) on the voice call. Thus, the far-end listener(s) do not receive the same benefit as the user with regard to the active noise cancelation capability.

[0003] Embodiments described in this disclosure address these and other limitations of the prior art.

BRIEF DESCRIPTION OF THE DRAWINGS

[0004] Aspects, features and advantages of embodiments of the present disclosure will become apparent from the following description of embodiments in reference to the appended drawings in which:

[0005] FIGURE 1 is a schematic diagram of an example headset in accordance with certain implementations of the disclosed technology.

[0006] FIGURE 2 illustrates an example of a functional block diagram of a voice isolation system in accordance with certain implementations of the disclosed technology.

[0007] FIGURE 3 illustrates a first example of a user voice activity detection subsystem in accordance with certain implementations of the disclosed technology.

[0008] FIGURE 4 illustrates a second example of a user voice activity detection subsystem in accordance with certain implementations of the disclosed technology.

[0009] FIGURE 5 illustrates an example of a voice isolation method in accordance with certain implementations of the disclosed technology.

DETAILED DESCRIPTION

[0010] Embodiments of the disclosed technology generally enhance a user’s speech signal as measured by a communication headset. To do so, embodiments of the disclosed technology generally take advantage of an asymmetry in speech coupling that is characteristic of some acoustic platforms such as noise canceling earbuds, for example. This asymmetry typically arises due to one of the microphones (e.g., the feedback microphone in a noise-canceling system) being partially coupled to the user’s speech through bone conduction, while another one of the microphones (e.g., the feedforward microphone in the noise-canceling system) may be predominantly coupled to the user’s speech acoustically.

[0011] The latter microphone may also be acoustically coupled to the ambient environment. This asymmetry in speech coupling is a mechanism that may be exploited by embodiments of the disclosed technology to enhance the user’s speech signal.

[0012] Embodiments of the disclosed technology generally include a voice isolation system in a communication headset having a headphone speaker and a feedback microphone. The headset may also include a feedforward microphone and/or be coupled to one or more ambient microphones that are separate from the headset.

[0013] The voice isolation system may be configured to generate a first covariance signal between the feedback microphone and an audio signal from the headphone speaker as well as a second covariance signal between the ambient/feedforward microphone and the audio signal from the headphone speaker.
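
The disclosure does not give equations for these covariance signals. The sketch below shows one plausible way to maintain them, assuming complex short-time Fourier transform (STFT) frames and an exponential forgetting factor; both of those choices, and all variable names, are illustrative assumptions rather than details taken from the patent.

```python
import numpy as np

def update_covariances(fb_frame, ff_frame, hp_frame, cov_fb_hp, cov_ff_hp, alpha=0.9):
    """Exponentially smoothed per-bin cross-covariance estimates (illustrative).

    fb_frame, ff_frame, hp_frame: complex STFT frames of the feedback microphone,
        feedforward/ambient microphone, and headphone audio signal
    cov_fb_hp, cov_ff_hp: previous covariance estimates (same shape as the frames)
    alpha: forgetting factor (assumed value)
    """
    cov_fb_hp = alpha * cov_fb_hp + (1 - alpha) * fb_frame * np.conj(hp_frame)
    cov_ff_hp = alpha * cov_ff_hp + (1 - alpha) * ff_frame * np.conj(hp_frame)
    return cov_fb_hp, cov_ff_hp
```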

[0014] Filters may be applied to the microphones based on estimates of the signal, interference, and noise spectra on each microphone, which, in turn, may be estimated from an analysis of the first and second covariance signals. An SNR enhanced signal may be provided as an output from a sum of the filtered microphone measurements.
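
As a rough illustration of the filter-and-sum idea, the following sketch applies a Wiener-style per-bin weight to each microphone's STFT frame and sums the results. The weight formula is a common choice assumed here for illustration; the disclosure specifies only that the filters depend on the estimated signal, interference, and noise spectra.

```python
import numpy as np

def snr_enhanced_sum(mic_frames, sig_psd, intf_psd, noise_psd, eps=1e-12):
    """Combine microphone STFT frames with per-bin Wiener-style weights.

    mic_frames: list of complex STFT frames, one per microphone
    sig_psd, intf_psd, noise_psd: per-bin power estimates of the speech,
        interference, and noise on each microphone (same ordering)
    Returns the summed, filtered output frame (SNR-enhanced signal).
    """
    out = np.zeros_like(mic_frames[0])
    for x, s, i, n in zip(mic_frames, sig_psd, intf_psd, noise_psd):
        w = s / (s + i + n + eps)   # attenuate bins dominated by interference or noise
        out += w * x
    return out
```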

[0015] FIGURE 1 is a schematic diagram of an example headset 100 in accordance with certain implementations of the disclosed technology. In the example, the headset 100 includes a right earphone 110, a left earphone 120, and a lapel unit 130. However, it should be noted that certain mechanisms disclosed herein may be employed in an example headset including a single earphone and/or an example without a lapel unit 130. The headset 100 may be configured to perform local ANC, for example when the lapel unit 130 is coupled to a device that plays music files. The headset 100 may also perform uplink noise cancellation, for example when the lapel unit 130 is coupled to a device capable of making phone calls (e.g. a smart phone).

[0016] The right earphone 110 may be a device capable of playing audio data, such as music and/or voice from a remote caller. The right earphone 110 may be crafted as a headphone that can be positioned adjacent to a user’s ear canal (e.g. on ear). The right earphone 110 may also be crafted as an earbud, in which case at least some portion of the right earphone 110 may be positioned inside a user’s ear canal (e.g. in-ear).

[0017] The right earphone 110 generally includes at least a speaker 115 and a feedforward (FF) microphone 111. The right earphone 110 may also include a feedback (FB) microphone 113 and/or one or more sensors 117. The speaker 115 may be any transducer capable of converting voice signals, audio signals, and/or ANC signals into soundwaves for communication toward a user’s ear canal, for example.

[0018] The FB microphone 113 and the speaker 115 may be positioned together on a proximate wall of the right earphone 110. Depending on the example, the FB microphone 113 and speaker 115 may be positioned inside a user’s ear canal when engaged (e.g. for an earbud) or positioned adjacent to the user’s ear canal in an acoustically sealed chamber when engaged (e.g. for an earphone). The FB microphone 113 may be configured to record soundwaves entering the user’s ear canal. Hence, the FB microphone 113 generally detects ambient noise perceived by the user, audio signals, remote voice signals, the ANC signal, and/or the user’s voice which may be referred to as a sideband signal.

[0019] As the FB microphone 113 detects both the ambient noise perceived by the user and any portion of the ANC signal that is not destroyed due to destructive interference, the FB microphone 113 signal may contain feedback information. The FB microphone 113 signal can be used to adjust the ANC signal in order to adapt to changing conditions and to better cancel the ambient noise.

[0020] In the example, the FF microphone 111 is positioned on a distal wall of the earphone and maintained outside of the user’s ear canal and/or the acoustically sealed chamber, depending on the example. The FF microphone 111 is acoustically isolated from the ANC signal and generally isolated from remote voice signals and audio signals when the right earphone is engaged. The FF microphone 111 records ambient noise as well as the user voice/sideband signal.

Accordingly, the FF microphone 111 signal can be used to generate an ANC signal.

[0021] The FF microphone 111 signal is better able to adapt to high frequency noises than the FB microphone 113 signal. However, the FF microphone 111 cannot detect the results of the ANC signal, and hence cannot adapt to non-ideal situations, such as a poor acoustic seal between the right earphone 110 and the ear. As such, the FF microphone 111 and the FB microphone 113 can be used in conjunction to create an effective ANC signal.

[0022] The right earphone 110 may also include sensing components to support off ear detection (OED). In some examples, the FB microphone 113 and the FF microphone 111 are employed as sensing components. In such a case, the FB microphone 113 signal and the FF microphone 111 signal are different when the right earphone 110 is engaged due to the acoustic isolation between the two microphones. When the FB microphone 113 signal and the FF microphone 111 signal are similar, the headset 100 can determine that the corresponding earphone 110 is not engaged.

[0023] In other examples, sensors 117 can be employed as sensing components to support OED. For example, the sensors 117 may include an optical sensor that indicates low light levels when the right earphone 110 is engaged and higher light levels when the right earphone 110 is not engaged. In other examples, the sensors 117 may employ pressure and/or electrical/magnetic currents and/or fields to determine when the right earphone 110 is engaged or disengaged. In other words, the sensors 117 may include capacitive sensors, infrared sensors, visual light optical sensors, etc.
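
As a hedged illustration of the microphone-based OED check described above, the following sketch compares the FB and FF signals with a lag-0 normalized cross-correlation and flags the earphone as disengaged when the two signals are too similar. The threshold and the use of a single-lag correlation are illustrative assumptions, not details from the disclosure.

```python
import numpy as np

def earphone_engaged(fb, ff, threshold=0.8):
    """Crude off-ear detection sketch: when the earphone seals the ear canal,
    the FB and FF microphone signals differ; high similarity suggests the
    earphone is off the ear. `threshold` is an assumed value.
    """
    fb = np.asarray(fb, dtype=float)
    ff = np.asarray(ff, dtype=float)
    fb = fb - np.mean(fb)
    ff = ff - np.mean(ff)
    denom = np.sqrt(np.sum(fb ** 2) * np.sum(ff ** 2)) + 1e-12
    similarity = abs(np.sum(fb * ff)) / denom   # normalized cross-correlation at lag 0
    return similarity < threshold               # True -> earphone appears engaged
```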

[0024] The left earphone 120 is substantially similar to the right earphone 110, but configured to engage with a user’s left ear. Specifically, the left earphone 120 may include sensors 127, speaker 125, a FB microphone 123, and a FF microphone 121, which may be substantially similar to the sensors 117, the speaker 115, the FB microphone 113, and the FF microphone 111, respectively. The left earphone 120 may also operate in substantially the same manner as the right earphone 110 as discussed above.

[0025] The left earphone 120 and the right earphone 110 may be coupled to a lapel unit 130 via a left cable 142 and a right cable 141, respectively. The left cable 142 and the right cable 141 are any cables capable of conducting audio signals, remote voice signals, and/or ANC signals from the lapel unit to the left earphone 120 and the right earphone 110, respectively.

[0026] The lapel unit 130 is an optional component in certain implementations. The lapel unit 130 generally includes one or more voice microphones 131 and a signal processor 135. The voice microphones 131 may be any microphone configured to record a user’s voice signal for uplink voice transmission, for example during a phone call. In some examples, multiple microphones may be employed to support beamforming techniques. The term beamforming as used herein generally refers to a spatial signal processing technique that employs multiple receivers to record the same wave from multiple physical locations. A weighted average of the recordings may then be used as the recorded signal.

[0027] By applying different weights to different microphones, the voice microphones 131 can be virtually pointed in a particular direction for increased sound quality and/or to filter out ambient noise. It should be noted that the voice microphones 131 may also be positioned in other locations in some examples. For example, the voice microphones 131 may hang from cables 141 or 142 below the right earphone 110 or the left earphone 120, respectively. The beamforming techniques disclosed herein are equally applicable to such a scenario with minor geometric modifications.
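
The weighted-average beamforming described above might be sketched as follows. Real-valued weights and unity-gain normalization are simplifying assumptions for illustration; the disclosure does not prescribe a particular weight design.

```python
import numpy as np

def beamform(mic_signals, weights):
    """Weighted average of time-aligned microphone signals ("virtual pointing").

    mic_signals: 2-D array, one row per voice microphone
    weights: one real weight per microphone (assumed, non-zero sum)
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / np.sum(weights)          # normalize for unity overall gain
    return weights @ np.asarray(mic_signals)     # weighted sum across microphones
```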

[0028] In the example, the signal processor 135 is coupled to the left earphone 120 and right earphone 110, via the cables 142 and 141, and to the voice microphones 131. The signal processor 135 is any processor capable of generating an ANC signal, performing digital and/or analog signal processing functions, and/or controlling the operation of the headset 100. The signal processor 135 may include and/or be connected to memory, and hence may be programmed for particular functionality.

[0029] The signal processor 135 may also be configured to convert analog signals into a digital domain for processing and/or convert digital signals back to an analog domain for playback by the speakers 115 and 125. The signal processor 135 may be implemented as a general-purpose processor, an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a field-programmable gate array (FPGA), or combinations thereof.

[0030] FIGURE 2 illustrates an example of a functional block diagram of a voice isolation system 200 in accordance with certain implementations of the disclosed technology. In the example, the voice isolation system 200 includes an acoustic echo-cancelation subsystem 201, an adaptive beamformer subsystem 202, a user voice activity detection subsystem 203, a residual noise suppression subsystem 204, and an automatic gain control subsystem 205. It will be appreciated that any or all of the acoustic echo-cancelation subsystem 201, adaptive beamformer subsystem 202, user voice activity detection subsystem 203, residual noise suppression subsystem 204, and automatic gain control subsystem 205 may be executed by a processor such as the signal processor 135 illustrated by FIGURE 1, for example.
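
For orientation, the sketch below strings the five subsystems of FIGURE 2 together as hypothetical Python callables. The function names and signatures are placeholders chosen for illustration, not interfaces defined in the disclosure.

```python
def voice_isolation_pipeline(ff_sig, fb_sig, hp_audio,
                             aec, beamformer, uvad, noise_suppressor, agc):
    """Chain hypothetical callables standing in for subsystems 201-205."""
    aec_out = aec(ff_sig, fb_sig, hp_audio)                  # 201: remove loudspeaker interference
    speech_active = uvad(aec_out)                            # 203: optional voice-activity control
    enhanced = beamformer(aec_out, speech_active)            # 202: SNR-enhanced signal
    suppressed = noise_suppressor(enhanced, speech_active)   # 204: attenuate low-SNR portions
    return agc(suppressed)                                   # 205: level the result for output
```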

[0031] In the example, the acoustic echo-cancelation subsystem 201 is configured to receive a first microphone signal 206 (such as a feedforward signal, for example), a second microphone signal 207 (such as a feedback signal, for example), and a headphone audio signal 208. The acoustic echo-cancelation subsystem 201 may be configured to subtract the interference component (e.g., due to the headphone loudspeaker) from the input signals. In certain embodiments, filters may be applied to each microphone that are derived from real-time estimates of the signal, interference, and noise spectra on each microphone.

[0032] These signal, interference, and noise spectra may be estimated from an analysis of the covariance between each microphone and additionally the headphone audio signal. This covariance analysis generally takes advantage of the asymmetry in speech coupling between microphones as described above. For example, at certain frequencies, the feedback microphone may have significant speech coupling (e.g., due to bone conduction) and weak acoustic coupling to ambient noise (e.g., due to active noise cancelation), while having significant interference from the headphone loudspeaker. Conversely, the ambient microphone may have substantially equal acoustic coupling to speech and ambient noise but weaker - or even negligible - interference from the headphone loudspeaker.
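
The disclosure derives its echo-cancelation filters from covariance-based spectral estimates rather than prescribing a particular adaptive algorithm. The sketch below uses a normalized LMS (NLMS) filter only as a familiar stand-in for the interference-subtraction step, with the headphone audio as the reference signal; the filter length and step size are assumed values.

```python
import numpy as np

def nlms_echo_cancel(mic, ref, filter_len=256, mu=0.1, eps=1e-8):
    """Subtract an estimate of the loudspeaker (reference) component from a
    microphone signal with a normalized LMS adaptive filter (illustrative)."""
    mic = np.asarray(mic, dtype=float)
    ref = np.asarray(ref, dtype=float)
    w = np.zeros(filter_len)
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        # most recent reference samples, newest first, zero-padded at start-up
        x = ref[max(0, n - filter_len + 1):n + 1][::-1]
        x = np.pad(x, (0, filter_len - len(x)))
        echo_est = w @ x                      # estimated loudspeaker component
        e = mic[n] - echo_est                 # interference-subtracted sample
        w += mu * e * x / (x @ x + eps)       # normalized LMS update
        out[n] = e
    return out
```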

[0033] In the example, the adaptive beamformer subsystem 202 may be configured to compute a signal-to-noise ratio (SNR) enhanced signal from output signals of the acoustic echo-cancelation subsystem 201, and the user voice activity detection (UVAD) subsystem 203 may be configured to provide control signals to indicate the likely presence or absence of the user’s speech. It should be noted that the user voice activity detection subsystem 203 may not be present in every implementation.
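
The disclosure states only that the beamformer may use the UVAD control signal; one common, assumed use is shown below, where per-bin noise statistics are updated only while the user is not speaking so that speech frames do not leak into the noise model.

```python
import numpy as np

def update_noise_psd(noise_psd, frame, speech_active, alpha=0.95):
    """Update the per-bin noise estimate only when the UVAD reports no user
    speech (an assumed use of the control signal, not one spelled out in the
    disclosure). `alpha` is an illustrative smoothing factor."""
    if not speech_active:
        noise_psd = alpha * noise_psd + (1 - alpha) * np.abs(frame) ** 2
    return noise_psd
```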

[0034] FIGURE 3 illustrates a first example of a user voice activity detection subsystem 300 in accordance with certain implementations of the disclosed technology. In the example, an input 310 carries an input microphone signal from a microphone that may be a specific user microphone, part of a noise-canceling headphone, or a microphone that is built-in or coupled to a device, for example. Although a single microphone input 310 is illustrated, it will be appreciated that the input 310 may carry input from any number of microphones. Additionally, as used herein, the term microphone generally refers to any apparatus or transducer configured to produce an electrical signal from a physical quantity such as sound or vibration.

[0035] In the example, a User Voice Activity Detector (UVAD) 320 is configured to receive the input 310, evaluate the input, and generate a control signal based on the input evaluation. The control signal is generally active when the user’s voice is sensed on the input 310, and is generally not active at any other time. More specifically, the UVAD 320 may accurately detect the presence or absence of the user’s voice, even when the user’s voice is one of multiple voices carried by the microphone input 310. The UVAD 320 may be configured to output a control signal 322, which may be a “1” when the user is actively speaking and a “0” when the user is not actively speaking, for example.

[0036] In certain embodiments, the control signal 322 may be used to control a passgate 330 or other control structure that is configured to control whether the input 310 is to be passed to a Speech Recognition Engine (SRE) 340. In operation, if a user is not actively speaking, the UVAD 320 generally generates the proper control signal 322 to block the microphone input 310 from being passed to the SRE 340. In this manner, the SRE 340 may be prevented from generating any false-positive keyword detections when the user is not speaking because the UVAD 320 may prevent the SRE 340 from receiving the microphone input 310 when the user is not speaking. Thus, the SRE 340 may only generate a positive output 342, e.g., the keyword detect, when the user himself or herself has spoken the keyword.

[0037] FIGURE 4 illustrates a second example of a user voice activity detection subsystem 400 in accordance with certain implementations of the disclosed technology. In the example, a microphone input 410 is passed to a UVAD 420. Unlike the subsystem 300 illustrated by FIGURE 3, however, the microphone input 410 of subsystem 400 is also passed to an SRE 440 regardless of the output of the UVAD 420. In other words, the SRE 440 of the system 400 is generally constantly processing the microphone input 410. In certain implementations, an AND gate 450 - or other functionally similar structure or operation - may be configured to control a final output 452 of the subsystem 400.

[0038] In the example, the AND gate 450 only passes an output signal 442 from the SRE 440 when both the SRE 440 detects the keyword and when the UVAD 420 determines that the user is actively speaking, e.g., when both output signals 442 and 422 are “1”. Although the output 452 of the system 400 is generally functionally equivalent to the output 342 of the subsystem 300 illustrated by FIGURE 3, the subsystem 400 may use more electrical power due to the SRE 440 being continuously active. Therefore, the subsystem 300 may be a preferred solution for low-power applications.
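
The two gating arrangements of FIGURES 3 and 4 might be contrasted in code as follows; `uvad` and `sre` are hypothetical callables standing in for the detector and the speech recognition engine, and the boolean interface is an assumption for illustration.

```python
def gated_keyword_detect(mic_frame, uvad, sre, gate_input=True):
    """Two ways to combine the UVAD and the SRE (illustrative sketch)."""
    speaking = uvad(mic_frame)
    if gate_input:                      # FIGURE 3 style: block the SRE's input entirely
        return sre(mic_frame) if speaking else False
    return sre(mic_frame) and speaking  # FIGURE 4 style: AND the two outputs
```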

[0039] The residual noise suppression subsystem 204 illustrated by FIGURE 2 may be configured to attenuate certain portions of the spectrum (e.g., after echo-cancelation) that have poor SNR. The automatic gain control (AGC) subsystem 205 may be configured to process the output of the residual noise suppression subsystem 204. In certain implementations, the AGC subsystem 205 may be tuned to attenuate signals with a root mean square (RMS) envelope below a certain threshold, for example.
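
A minimal sketch of the residual noise suppression and RMS-envelope gating described above follows. The thresholds, attenuation factor, and per-bin SNR representation are illustrative assumptions, not parameters from the disclosure.

```python
import numpy as np

def suppress_and_gate(frame, snr_per_bin, snr_threshold_db=0.0,
                      rms_threshold=1e-3, attenuation=0.1):
    """Attenuate low-SNR bins, then attenuate the whole frame when its RMS
    envelope falls below a threshold (AGC-style downward expansion)."""
    snr_db = 10 * np.log10(np.asarray(snr_per_bin) + 1e-12)
    gains = np.where(snr_db < snr_threshold_db, attenuation, 1.0)  # residual noise suppression
    out = gains * np.asarray(frame)
    rms = np.sqrt(np.mean(np.abs(out) ** 2))
    if rms < rms_threshold:
        out = out * attenuation
    return out
```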

[0040] FIGURE 5 illustrates an example of a voice isolation method 500 in accordance with certain implementations of the disclosed technology.

[0041] At 502, an acoustic echo-cancelation subsystem receives a plurality of input signals and generates a plurality of output signals by subtracting an interference component from the input signals. In certain embodiments, the input signals can include any or all of the following: a feedforward signal, a feedback signal, and a headphone audio signal. These signals may be respectively provided by a feedforward microphone, a feedback microphone, and a headphone, for example.

[0042] At 504, an adaptive beamformer subsystem computes a signal-to-noise ratio (SNR) enhanced signal based at least in part on a plurality of output signals received from the acoustic echo-cancelation subsystem.

[0043] At 506, a residual noise suppressor subsystem attenuates at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold.

[0044] At 508, an automatic gain control (AGC) subsystem processes a signal outputted from the residual noise suppressor subsystem and transmits a resulting signal as an output signal. The AGC subsystem may process the signal outputted from the residual noise suppressor subsystem by attenuating the signal with a root mean square (RMS) envelope below a predetermined threshold.

[0045] In certain implementations, a user voice activity detection (UVAD) subsystem may determine whether a user’s speech is present and subsequently provide a control signal based on the determination, as indicated at 510. The UVAD subsystem may provide the output control signal to either or both of the adaptive beamformer subsystem and the residual noise suppressor subsystem.

[0046] It should be noted that voice isolation functionality described herein generally needs only a single earbud to function. In implementations that include two earbuds, embodiments of the disclosed technology may use the microphones of both earbuds simultaneously (e.g., thus utilizing more microphones) or such embodiments may use only the earbud having the better performance between the two. Performance quality could be determined by active noise canceling (ANC) attenuation or an on-ear detection metric, for example.

[0047] In certain alternative implementations, the ambient microphone may not need to be used for feedforward ANC. Accordingly, in such implementations, the ambient microphone may be a voice microphone worn on a lapel or attached to a cord, for example.

[0048] Certain alternative implementations may include the use of more than two microphones. For example, certain embodiments may use three ambient microphones and one feedback microphone.

[0049] Further alternative implementations may include the use of a close-talk mic boom instead of a feedback microphone.

[0050] Certain implementations may additionally use Acoustic Echo Cancelation and a noise suppressor. The noise suppressor may include input from a User Voice Activity Detector, and the noise-suppressed signal may further be subject to gain control.

[0051] Embodiments of the invention may be incorporated into integrated circuits such as sound processing circuits, or other audio circuitry. In turn, the integrated circuits may be used in audio devices such as headphones, mobile phones, portable computing devices, sound bars, audio docks, amplifiers, speakers, etc.

[0052] The disclosed aspects may be implemented, in some cases, in hardware, firmware, software, or any combination thereof. The disclosed aspects may also be implemented as instructions carried by or stored on one or more non-transitory computer-readable media, which may be read and executed by one or more processors. Such instructions may be referred to as a computer program product. Computer-readable media, as discussed herein, means any media that can be accessed by a computing device. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media.

[0053] Additionally, this written description makes reference to particular features. It is to be understood that the disclosure in this specification includes all possible combinations of those particular features. For example, where a particular feature is disclosed in the context of a particular aspect, that feature can also be used, to the extent possible, in the context of other aspects.

[0054] Also, when reference is made in this application to a method having two or more defined steps or operations, the defined steps or operations can be carried out in any order or simultaneously, unless the context excludes those possibilities.

EXAMPLES

[0055] Illustrative examples of the technologies disclosed herein are provided below. An embodiment of the technologies may include any one or more, and any combination of, the examples described below.

[0056] Example 1 includes a voice isolation system comprising: an acoustic echo-cancelation subsystem configured to receive a plurality of input signals, subtract an interference component from the input signals, and provide a plurality of output signals; an adaptive beamformer subsystem configured to receive the plurality of output signals from the acoustic echo-cancelation subsystem and compute a signal-to-noise ratio (SNR) enhanced signal based on the received output signals; a residual noise suppressor subsystem configured to attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold; and an automatic gain control (AGC) subsystem configured to process a signal outputted from the residual noise suppressor subsystem and transmit a resulting signal as an output signal.

[0057] Example 2 includes the voice isolation system of Example 1, wherein the plurality of input signals includes a headphone audio signal.

[0058] Example 3 includes the voice isolation system of any of Examples 1-2, the system further comprising a headphone configured to provide the headphone audio signal.

[0059] Example 4 includes the voice isolation system of any of Examples 1-3, wherein the plurality of input signals includes a feedforward signal and a feedback signal.

[0060] Example 5 includes the voice isolation system of Example 4, the system further comprising: a feedforward microphone configured to provide the feedforward signal; and a feedback microphone configured to provide the feedback signal.

[0061] Example 6 includes the voice isolation system of Example 5, the system further comprising a filter coupled with each of the microphones, wherein each filter is derived from a real-time estimate of signal, interference, and noise spectra for the corresponding microphone.

[0062] Example 7 includes the voice isolation system of any of Examples 1-6, the system further comprising a user voice activity detection (UVAD) subsystem configured to determine whether a user’s speech is present and provide a control signal based on the determination.

[0063] Example 8 includes the voice isolation system of Example 7, wherein the residual noise suppressor subsystem is further configured to receive the control signal from the UVAD subsystem and attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based at least in part on the control signal.

[0064] Example 9 includes the voice isolation system of Example 7, wherein the adaptive beamformer subsystem is further configured to receive the control signal from the UVAD subsystem and compute the SNR-enhanced signal based at least in part on the control signal.

[0065] Example 10 includes the voice isolation system of any of Examples 1-9, wherein the AGC subsystem is configured to process the signal outputted from the residual noise suppressor subsystem by attenuating the signal outputted from the residual noise suppressor subsystem with a root mean square (RMS) envelope below a predetermined threshold.

[0066] Example 11 includes a method for voice isolation, said method comprising: an acoustic echo-cancelation subsystem receiving a plurality of input signals; the acoustic echo-cancelation subsystem generating a plurality of output signals by subtracting an interference component from the input signals; an adaptive beamformer subsystem computing a signal-to-noise ratio (SNR) enhanced signal based at least in part on a plurality of output signals received from the acoustic echo-cancelation subsystem; a residual noise suppressor subsystem attenuating at least one portion of the SNR enhanced signal received from the acoustic echo-cancelation subsystem based on the at least one portion having an SNR below a predetermined SNR threshold; an automatic gain control (AGC) subsystem processing a signal outputted from the residual noise suppressor subsystem; and the AGC subsystem transmitting a resulting signal as an output signal.

[0067] Example 12 includes the method of Example 11, the method further comprising a user voice activity detection (UVAD) subsystem determining whether a user’s speech is present and providing a control signal based on the determination.

[0068] Example 13 includes the method of Example 12, the method further comprising the adaptive beamformer subsystem computing the SNR-enhanced signal based at least in part on the control signal from the UVAD subsystem.

[0069] Example 14 includes the method of Example 12, the method further comprising the residual noise suppressor subsystem attenuating the SNR enhanced signal received from the acoustic echo-cancelation subsystem based at least in part on the control signal from the UVAD subsystem.

[0070] Example 15 includes the method of any of Examples 11-14, wherein the plurality of input signals includes a feedforward signal, a feedback signal, and a headphone audio signal.

[0071] Example 16 includes a headset comprising: one or more earphones including one or more sensing components; one or more voice microphones to record a voice signal for voice transmission; and a processor coupled to the earphones and the voice microphones, the processor configured to execute: an acoustic echo-cancelation subsystem to receive a plurality of input signals, subtract an interference component from the input signals, and provide a plurality of output signals; an adaptive beamformer subsystem to receive the plurality of output signals from the acoustic echo-cancelation subsystem and compute a signal-to-noise ratio (SNR) enhanced signal based on the received output signals; a residual noise suppressor subsystem to attenuate at least one portion of the SNR enhanced signal received from the adaptive beamformer subsystem based on the at least one portion having an SNR below a predetermined SNR threshold; and an automatic gain control (AGC) subsystem to process a signal outputted from the residual noise suppressor subsystem and transmit a resulting signal as an output signal.

[0072] Example 17 includes the headset of Example 16, wherein the processor is further configured to execute a user voice activity detection (UVAD) subsystem to determine whether a user’s speech is present and provide a control signal based on the determination.

[0073] Example 18 includes the headset of any of Examples 16-17, wherein the plurality of input signals includes a headphone audio signal.

[0074] Example 19 includes the headset of any of Examples 16-18, wherein the plurality of input signals includes a feedforward signal and a feedback signal.

[0075] Example 20 includes the headset of any of Examples 16-19, wherein the AGC subsystem is configured to process the signal outputted from the residual noise suppressor subsystem by attenuating the signal outputted from the residual noise suppressor subsystem with a root mean square (RMS) envelope below a predetermined threshold.

[0076] Having described and illustrated the principles of the invention with reference to illustrated embodiments, it will be recognized that the illustrated embodiments may be modified in arrangement and detail without departing from such principles, and may be combined in any desired manner. And although the foregoing discussion has focused on particular embodiments, other configurations are contemplated.

[0077] In particular, even though expressions such as “according to an embodiment of the invention” or the like are used herein, these phrases are meant to generally reference embodiment possibilities, and are not intended to limit the invention to particular embodiment configurations. As used herein, these terms may reference the same or different embodiments that are combinable into other embodiments.

[0078] Although specific examples of the disclosure have been illustrated and described for purposes of illustration, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, the disclosure should not be limited except as by the appended claims.