
Title:
MICROPHONE CONTROL BASED ON SPEECH DIRECTION
Document Type and Number:
WIPO Patent Application WO/2020/131018
Kind Code:
A1
Abstract:
According to examples, an apparatus may include a processor and a non-transitory computer readable medium on which is stored instructions that the processor may execute to access an audio signal captured by a microphone of a user's speech while the microphone is in a muted state. The processor may also execute the instructions to analyze a spectral or frequency content of the accessed audio signal to determine whether the user was facing the microphone while the user spoke. In addition, based on a determination that the user was facing the microphone while the user spoke, the processor may execute the instructions to unmute the microphone.

Inventors:
BHARITKAR SUNIL (US)
KUTHURU SRIKANTH (US)
Application Number:
PCT/US2018/066069
Publication Date:
June 25, 2020
Filing Date:
December 17, 2018
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
International Classes:
G10L15/22; H04M3/56; H04S7/00
Foreign References:
US20080167868A12008-07-10
US20080091421A12008-04-17
US6754373B12004-06-22
Other References:
KAYSER HENDRIK ET AL.: "A discriminative learning approach to probabilistic acoustic source localization", 14TH INTERNATIONAL WORKSHOP ON ACOUSTIC SIGNAL ENHANCEMENT (IWAENC), 2014, pages 99-103, XP032683899, DOI: 10.1109/IWAENC.2014.6953346
Attorney, Agent or Firm:
WOODWORTH, Jeffrey C. et al. (US)
Claims:
What is claimed is:

1. An apparatus comprising:

a processor; and

a non-transitory computer readable medium on which is stored instructions that when executed by the processor, are to cause the processor to:

access an audio signal captured by a microphone of a user’s speech while the microphone is in a muted state;

analyze a spectral or frequency content of the accessed audio signal to determine a direction at which the user was facing while the user spoke; and

based on a determination that the user was facing the microphone while the user spoke, unmute the microphone.

2. The apparatus of claim 1, further comprising:

a communication interface; and

wherein the instructions are further to cause the processor to:

output the captured audio signal through the communication interface based on a determination that the user was facing the microphone while the user spoke.

3. The apparatus of claim 1, wherein the instructions are further to cause the processor to:

based on a determination that the user was not facing the microphone while the user spoke, maintain the microphone in the muted state.

4. The apparatus of claim 1, wherein the instructions are further to cause the processor to:

based on a determination that the user was not facing the microphone while the user spoke, discard the captured audio signal.

5. The apparatus of claim 1, wherein to access the audio signal captured by the microphone, the instructions are further to cause the processor to:

receive a data file including the captured audio signal via a network from a remotely located electronic device; and

output an instruction to the remotely located electronic device via the network to unmute the microphone.

6. The apparatus of claim 1, wherein the instructions are further to cause the processor to:

access a second audio signal captured by a second microphone of the user’s speech while the second microphone is in a muted state, the second microphone being spaced from the microphone;

analyze characteristics of the audio signal captured by the microphone and the second audio signal captured by the second microphone;

determine whether the user was facing the microphone and the second microphone while the user spoke based on the analyzed characteristics; and

determine whether to unmute the microphone and the second microphone based on both the determination that the user was facing the microphone through analysis of the spectral or frequency content of the accessed audio signal and the determination that the user was facing the microphone and the second microphone based on the analyzed characteristics.

7. The apparatus of claim 1, wherein the instructions are further to cause the processor to:

determine whether the microphone is in the muted state;

analyze the spectral or frequency content of the accessed audio signal to determine whether the user was facing the microphone while the user spoke based on a determination that the microphone is in the muted state; and

output the captured audio signal without analyzing the spectral or frequency content of the captured audio signal based on a determination that the microphone is not in the muted state.

8. The apparatus of claim 1, wherein the processor is to access audio captured by the microphone during a conference call.

9. A method comprising:

accessing, by a processor, an audio signal captured by a microphone of a user’s voice;

determining, by the processor, whether the microphone was in a muted state when the microphone captured the audio signal of the user’s voice;

based on a determination that the microphone was in the muted state when the microphone captured the audio signal of the user’s voice, applying, by the processor, a machine learning model on the captured audio signal to determine whether the user was likely facing the microphone when the user spoke; and

based on a determination that the user was likely facing the microphone when the user spoke, placing, by the processor, the microphone into an unmuted state.

10. The method of claim 9, further comprising:

outputting the captured audio signal of the user’s voice based on the determination that the user was likely facing the microphone when the user spoke.

11. The method of claim 9, further comprising:

receiving a data file including the captured audio signal via a network from a remotely located electronic device; and

outputting an instruction to the remotely located electronic device via the network to unmute the microphone.

12. The method of claim 9, further comprising:

accessing a second audio captured by a second microphone of the user’s voice while the second microphone is in a muted state, the second microphone being spaced from the microphone;

analyzing characteristics of the audio captured by the microphone and the second audio captured by the second microphone;

determining whether the user was likely facing the microphone and the second microphone while the user spoke based on the analyzed characteristics; and

determining whether the microphone and the second microphone are to be placed into the unmuted state based on both the determination that the user was facing the microphone through application of the machine learning model and the determination that the user was facing the microphone and the second microphone based on the analyzed characteristics.

13. A non-transitory computer readable medium on which is stored machine readable instructions that when executed by a processor, cause the processor to:

access an audio file of a user’s speech captured by a microphone;

determine whether the microphone was in a muted state when the microphone captured the user’s speech;

based on a determination that the microphone was in the muted state when the microphone captured the user’s speech, apply a machine learning model on the captured user’s speech to determine whether the user was likely facing the microphone when the user spoke, the machine learning model being generated using a classifier that was trained using training data corresponding to user’s directions during the user’s speech; and

based on a determination that the user was likely facing the microphone when the user spoke, output an indication for the user to place the microphone into an unmuted state.

14. The non-transitory computer readable medium of claim 13, wherein the instructions are further to cause the processor to:

output the captured audio file through a communication interface based on a determination that the user was likely facing the microphone when the user spoke.

15. The non-transitory computer readable medium of claim 13, wherein the instructions are further to cause the processor to:

access a second audio captured by a second microphone of the user’s speech while the second microphone is in a muted state, the second microphone being spaced from the microphone;

analyze characteristics of the audio captured by the microphone and the second audio captured by the second microphone;

determine whether the user was facing the microphone and the second microphone while the user spoke based on the analyzed characteristics; and

determine whether the microphone is to be placed into the unmuted state or to remain in the muted state based on both the determination that the user was facing the microphone through application of the machine learning model and the determination that the user was facing the microphone and the second microphone based on the analyzed characteristics.

Description:
MICROPHONE CONTROL BASED ON SPEECH DIRECTION

BACKGROUND

[0001] Telecommunications applications, such as teleconferencing and videoconferencing applications, may enable multiple remotely located users to communicate with each other over an Internet Protocol network, a land-based telephone network, and/or a cellular network. Particularly, the telecommunications applications may cause audio to be captured locally for each of the users and communicated to the other users such that the users may hear the voices of the other users via these networks. Some telecommunications applications may also enable still and/or video images of the users to be captured locally and communicated to the other users such that the users may view the other users via these networks.

BRIEF DESCRIPTION OF DRAWINGS

[0002] Features of the present disclosure are illustrated by way of example and not limited in the following figure(s), in which like numerals indicate like elements, in which:

[0003] FIG. 1 shows a block diagram of an example apparatus that may automatically control unmuting of a microphone based on whether the user was likely facing the microphone while the user spoke;

[0004] FIG. 2A shows a block diagram of an example system that may include features of the example apparatus depicted in FIG. 1;

[0005] FIG. 2B shows an example process block diagram of operations that may be performed during a training phase and an inference phase of a captured audio signal;

[0006] FIG. 3 shows a block diagram of an example apparatus that may automatically control unmuting of a microphone based on whether the user was likely facing the microphone when the user spoke;

[0007] FIGS. 4 and 5, respectively, depict example methods for automatically unmuting a microphone based on a determination as to whether a user was facing the microphone when the user spoke; and

[0008] FIG. 6 shows a block diagram of an example non-transitory computer readable medium that may have stored thereon machine readable instructions that when executed by a processor, may cause the processor to prompt a user to unmute a microphone based on a determination that the user was likely facing the microphone when the user spoke.

DETAILED DESCRIPTION

[0009] For simplicity and illustrative purposes, the principles of the present disclosure are described by referring mainly to examples thereof. In the following description, numerous specific details are set forth in order to provide an understanding of the examples. It will be apparent, however, to one of ordinary skill in the art, that the examples may be practiced without limitation to these specific details. In some instances, well known methods and/or structures have not been described in detail so as not to unnecessarily obscure the description of the examples. Furthermore, the examples may be used together in various combinations.

[0010] Throughout the present disclosure, the terms "a" and "an" are intended to denote one of a particular element or multiple ones of the particular element. As used herein, the term "includes" means includes but is not limited to, and the term "including" means including but not limited to. The term "based on" may mean based in part on.

[0011] When audio conferencing applications are activated, the microphones may begin in a muted state. Oftentimes, users may not be aware that their microphones are in the muted state and may thus begin speaking prior to unmuting their microphones. This may result in confusion at the beginning of the teleconference. This may also occur when users intentionally mute their microphones during an audio conference or during other applications and forget to unmute their microphones prior to speaking again.

[0012] Disclosed herein are apparatuses, systems, and methods for automatically unmuting a microphone based on a determination that a user intended for the user’s speech to be captured. For instance, a processor may determine whether the user was facing the muted microphone when the user spoke and based on that determination, automatically unmute the microphone. The processor may make that determination through analysis of a spectral or frequency content of an audio signal captured by the microphone. In addition, or alternatively, the processor may make that determination through application of a machine learning model on the captured audio signal. In some examples, the processor may implement a voice activity detection technique to determine whether the captured audio signal includes a user’s voice. In some examples, the determination as to whether the user was facing the muted microphone may be premised on training a fully-connected neural network (FCNN) or a convolutional neural network (CNN) to identify directivity of speech.

[0013] In some examples, characteristics of a second audio signal captured by a second microphone may be analyzed with the audio signal captured by the microphone to determine whether the user was likely facing the microphone and the second microphone while the user spoke. In these examples, the processor may determine whether to unmute the microphone and the second microphone based on the determination as to whether the user was facing the microphone discussed above and the determination based on the analysis of the characteristics of the audio signal and the second audio signal.

[0014] Through implementation of the apparatuses, systems, and methods disclosed herein, a microphone may automatically be unmuted and/or a user may be prompted to unmute the microphone based on a determination that a user was facing the muted microphone when the user spoke. Thus, for instance, the user’s speech may be directed to an application for analysis, storage, translation, or the like. As another example, the user’s speech may be directed to a communication interface to be output during an audio conference. In any regard, the audio captured while the microphone was muted may be stored and used for an application and/or an audio conference, which may reduce additional processing that may be performed to capture, analyze, and store audio that may be repeated in instances in which the previously captured audio is lost or discarded.

[0015] Reference is first made to FIGS. 1, 2A, and 2B. FIG. 1 shows a block diagram of an example apparatus 100 that may automatically control unmuting of a microphone based on whether the user was likely facing the microphone while the user spoke. FIG. 2A shows a block diagram of an example system 200 that may include features of the example apparatus 100 depicted in FIG. 1. FIG. 2B shows an example process block diagram 250 of operations that may be performed during a training phase and an inference phase of a captured audio signal 222. It should be understood that the example apparatus 100, the example system 200, and/or the example process block diagram 250 depicted in FIGS. 1, 2A, and 2B may include additional components and that some of the components described herein may be removed and/or modified without departing from the scopes of the example apparatus 100, the example system 200, and/or the example process block diagram 250 disclosed herein.

[0016] The apparatus 100 may be a computing device or other electronic device, e.g., a personal computer, a laptop computer, a tablet computer, a smartphone, or the like, that may facilitate automatic unmuting of a microphone 204 based on a determination that a user 220 was facing the microphone 204 while the user 220 spoke. That is, the apparatus 100 may capture audio signals 222 of a user’s speech while the microphone 204 is muted and may automatically unmute the microphone 204 based on a determination that the user 220 was facing the microphone 204 while the user 220 spoke. In addition, based on a determination that the user 220 was facing the microphone 204 while the user spoke, the apparatus 100 may store the captured audio 222, may activate a voice dictation application, may communicate the captured audio signal 222 with a remotely located system 240, for instance, via a network 230, and/or the like.

[0017] According to examples, the processor 102 may selectively communicate audio signals, e.g., data files including the audio signals, of the captured audio 222 over a communication interface 208. The communication interface 208 may include software and/or hardware components through which the apparatus 100 may communicate and/or receive data files. For instance, the communication interface 208 may include a network interface of the apparatus 100. The data files may include audio and/or video signals, e.g., packets of data corresponding to audio and/or video signals.

[0018] According to examples, the apparatus 100, and more particularly, a processor 102 of the apparatus 100, may determine whether the audio signals 222 include audio intended by the user 220 to be communicated to another user, e.g., via execution of an audio or video conferencing application, and may communicate the audio signals based on a determination that the user 220 intended for the audio to be communicated to the other user. However, based on a determination that the user may not have intended for the audio to be communicated, the processor 102 may not communicate the audio signals. The processor 102 may determine the user’s intent with respect to whether the audio is to be communicated in various manners as discussed herein.

[0019] As shown in FIG. 1, the apparatus 100 may include a processor 102 that may control operations of the apparatus 100. The processor 102 may be a semiconductor-based microprocessor, a central processing unit (CPU), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a graphics processing unit (GPU), and/or other hardware device. The apparatus 100 may also include a non-transitory computer readable medium 110 that may have stored thereon machine readable instructions 112-118 (which may also be termed computer readable instructions) that the processor 102 may execute. The non-transitory computer readable medium 110 may be an electronic, magnetic, optical, or other physical storage device that includes or stores executable instructions. The non-transitory computer readable medium 110 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like. The term “non-transitory” does not encompass transitory propagating signals.

[0020] As shown in FIG. 2A, the system 200 may include the processor 102 and the computer readable medium 110 depicted in FIG. 1. The system 200 may also include a data store 202, a microphone 204, output device (or multiple output devices) 206, and a communication interface 208. Electrical signals may be communicated between some or all of the components 102, 110, 202-208 of the system 200 via a link 210, which may be a communication bus, a wire, and/or the like.

[0021] The processor 102 may execute or otherwise implement a telecommunications application to facilitate a teleconference or a videoconference meeting to which a user 220 may be a participant. The processor 102 may also or alternatively implement another type of application that may use and/or store the user’s speech. In any regard, the microphone 204 may capture audio (or equivalently, sound, audio signals, etc.), and in some examples, may communicate the captured audio 222 over a network 230 via the communication interface 208. The network 230 may be an IP network, a telephone network, and/or a cellular network. In addition, the captured audio 222 may be communicated across the network 230 to a remote system 240 such that the captured audio 222 may be outputted at the remote system 240. The captured audio 222 may be converted and/or stored in a data file and the communication interface 208 may communicate the data file over the network 230.

[0022] In operation, the microphone 204 may capture the audio 222 and may communicate the captured audio 222 to the data store 202 and/or the processor 102. In addition, the microphone 204 or another component may convert the captured audio 222 or may store the captured audio 222 in a data file. For instance, the captured audio 222 may be stored or encapsulated in IP packets. In some examples, the microphone 204 may capture the audio signal 222 while the microphone 204 is in a muted state. That is, while in the muted state, the microphone 204 may continue to capture the audio signals 222 and the processor 102 may continue to process the captured audio signals 222, but may not automatically send the captured audio signals 222 to the communication interface 208. While in an unmuted state, the microphone 204 may capture the audio signals 222 and the processor 102 may process the captured audio 222 and may send the captured audio 222 to the communication interface 208 for the captured audio 222 to be communicated over the network 230.
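
As a minimal sketch of this routing behavior (illustrative only; data_store and communication_interface are hypothetical placeholders, not names used in this disclosure), a captured frame might be handled differently depending on the mute state:

    def handle_captured_audio(audio_frame, muted, data_store, communication_interface):
        if muted:
            # While muted, keep the captured audio locally so it can still be
            # analyzed and, where appropriate, used later.
            data_store.append(audio_frame)
        else:
            # While unmuted, forward the captured audio for transmission over the network.
            communication_interface.send(audio_frame)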

[0023] The processor 102 may fetch, decode, and execute the instructions 112 to access an audio signal 222 captured by the microphone 204 of a user’s 220 speech while the microphone 204 is in a muted state. As discussed herein, while the microphone 204 is in the muted state, the microphone 204 may capture audio signals 222 and may store the captured audio signals 222 in the data store 202. As such, for instance, the processor 102 may access the captured audio signal 222 from the data store 202.

[0024] The processor 102 may fetch, decode, and execute the instructions 114 to analyze a spectral or frequency content of the accessed audio signal 222 to determine a direction at which the user 220 was facing while the user spoke. That is, for instance, the processor 102 may perform a spectral and/or frequency content analysis of the accessed audio signal 222 to determine whether the user 220 was facing the microphone while the user 220 spoke. For example, when the user 220 is facing away from the microphone 204, the captured audio 222 may have lower intensities in the high frequency range due to high frequency roll-off. By training a classifier using training data corresponding to speech samples from different directions, e.g., corresponding to the user’s directions during the user’s speech, and different users, the user’s 220 speech direction may be classified as being towards or away from the microphone 204, e.g., to the side of the microphone 204. That is, the machine learning (ML) model may be trained using speech samples of users facing a microphone and speech samples of users not facing the microphone, and the ML model may capture differences in the spectral and/or frequency content of the speech samples to be able to distinguish whether the captured audio signal 222 includes spectral and/or frequency content consistent with speech that is of a user facing the microphone 204. This may be particularly useful when the ML model is deployed during inferencing at the start of a conference call on a voice over IP (VoIP) system. The ML model may also be used to switch between mute and unmute during the conference call.
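
As an illustrative sketch of this kind of spectral analysis (not taken from this disclosure; the audio is assumed to be a mono NumPy array, and the 4 kHz split frequency and 0.1 threshold are assumed values), the ratio of high-frequency energy to total energy may serve as a crude indicator of whether the talker was facing the microphone:

    import numpy as np

    def high_frequency_ratio(signal, sample_rate=16000, split_hz=4000.0):
        # Power spectrum of the captured signal.
        spectrum = np.abs(np.fft.rfft(signal)) ** 2
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
        total = spectrum.sum() + 1e-12          # guard against an all-zero frame
        high = spectrum[freqs >= split_hz].sum()
        return high / total

    def likely_facing_microphone(signal, sample_rate=16000, threshold=0.1):
        # Facing away tends to attenuate the high-frequency band (roll-off),
        # so a low ratio suggests the user was not facing the microphone.
        return high_frequency_ratio(signal, sample_rate) >= threshold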

[0025] The ML model may also use input from a voice activity detector (VAD), which may detect the presence or absence of human speech in audio signals. The ML model may employ manually-designed features, over each frame, such as spectral roll-off above a threshold frequency, the mean level in the measured spectrum, the difference spectrum over frames, and/or the like. Alternatively, a deep learning model employing a deep neural network (DNN), such as a convolutional neural network (CNN), a long short-term memory (LSTM) network in cascade with a fully-connected neural network (FCNN), or the like, may be used to automatically extract deep features to train the machine learning model to classify between forward-facing and side-facing (with head-motion) profiles.
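
A sketch of the manually designed per-frame features named above, computed over a short-time spectrum (the signal is assumed to be a NumPy array; the frame length, hop size, and 85% roll-off fraction are assumed values used only for illustration):

    import numpy as np

    def frame_features(signal, sample_rate=16000, frame_len=512, hop=256, rolloff=0.85):
        features = []
        prev_spectrum = None
        for start in range(0, len(signal) - frame_len + 1, hop):
            frame = signal[start:start + frame_len]
            spectrum = np.abs(np.fft.rfft(frame * np.hanning(frame_len)))
            freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)

            # Spectral roll-off: frequency below which `rolloff` of the frame energy lies.
            cumulative = np.cumsum(spectrum ** 2)
            rolloff_hz = freqs[np.searchsorted(cumulative, rolloff * cumulative[-1])]

            # Mean level in the measured spectrum.
            mean_level = float(np.mean(spectrum))

            # Difference spectrum relative to the previous frame.
            diff = 0.0 if prev_spectrum is None else float(np.mean(np.abs(spectrum - prev_spectrum)))
            prev_spectrum = spectrum

            features.append([rolloff_hz, mean_level, diff])
        return np.array(features)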

[0026] The processor 102 may fetch, decode, and execute the instructions 116 to, based on a determination that the user 220 was facing the microphone 204 while the user 220 spoke, unmute the microphone 204. The processor 102 may unmute the microphone 204 based on a determination that the user 220 was facing the microphone 204 while the user spoke as that is likely an indication that the user 220 intended for the user’s speech to be captured. In some instances, such as at the beginning of a conference call, the microphone 204 may default to the muted state and the user 220 may begin speaking without first changing the microphone to the unmuted state. As a result, the user 220 may need to repeat what the user 220 said, which the user 220 may find wasteful. Through implementation of the instructions 112-116, the user’s 220 speech captured while the microphone 204 was muted may still be used when the speech is determined to likely have been made while the user 220 faced the microphone 204, which may enable the user 220 to continue speaking without having to repeat the earlier speech.

[0027] In some examples, the processor 102 may be remote from the microphone 204. In these examples, the processor 102 may access the captured audio signal 222 via the network 230 from a remotely located electronic device that may be connected to the microphone 204. In addition, the processor 102 may output an instruction to the remotely located electronic device via the network 230 to unmute the microphone 204. In response to receipt of the instruction, the remotely located electronic device may unmute the microphone 204.

[0028] The output device(s) 206 shown in the system 200 may include, for instance, a speaker, a display, and the like. The output device(s) 206 may output audio received, for instance, from the remote system 240. The output device(s) 206 may also output images and/or video received from the remote system 240.

[0029] Turning now to FIG. 2B, there is shown an example process block diagram 250 that the processor 102 may execute to determine a direction of user speech. As shown, the processor 102 may operate during a training phase 252 and an inference phase 254. Particularly, audio 222 captured by the microphone 204 may be converted from an analog signal to a digital signal and the digital signal may be filtered 260. During the training phase 252, e.g., a machine learning model training phase 252, feature extraction 262 may be applied on the converted and filtered signal. The extracted features of the converted and filtered signal may be used to generate a speaker model 264. The speaker model 264 may capture differences in the spectral and/or frequency content of converted and filtered signals to be able to distinguish whether the captured audio signal 222 includes spectral and/or frequency content consistent with speech that is of a user facing the microphone 204 or is consistent with speech that is of a user not facing the microphone 204. Thus, for instance, multiple converted and filtered signals may be used to generate the speaker model 264 during the training phase 252.
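
A minimal training-phase sketch under stated assumptions: scikit-learn's MLPClassifier stands in for the fully-connected network mentioned in this disclosure, and randomly generated arrays stand in for real labelled per-frame feature vectors (1 = facing the microphone, 0 = not facing). The final lines preview how a deployed model could be applied during the inference phase described in the next paragraph:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(200, 3))        # placeholder per-frame feature vectors
    y_train = rng.integers(0, 2, size=200)     # placeholder labels: 1 = facing, 0 = not facing

    # Training phase: fit the speaker/directivity model on the extracted features.
    speaker_model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
    speaker_model.fit(X_train, y_train)

    # Inference phase (preview): apply the deployed model to features extracted
    # from newly captured audio and take a simple per-frame majority vote.
    X_new = rng.normal(size=(10, 3))
    user_facing = speaker_model.predict(X_new).mean() > 0.5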

[0030] During the inference phase 254, features of the converted and filtered signals may be extracted 266. In addition, a deployed directivity model 268 may be applied on the extracted features. The deployed directivity model 268 may be generated using the speaker model 264 and may be used to determine a direction at which the user 220 spoke when the audio 222 was captured. Based on the application of the deployed directivity model 268, a decision 270 may be made as to the direction of the user speech 272, e.g., whether the user was facing the microphone 204 when the audio 222 was captured. In addition, the direction of the user speech 272 may be outputted, e.g., may be outputted to control operation of the microphone 204. As discussed herein, the direction of the user speech 272 may be used to determine whether a muted microphone 204 is to be unmuted.

[0031] Reference is now made to FIGS. 1-3. FIG. 3 shows a block diagram of an example apparatus 300 that may automatically control unmuting of a microphone 204 based on whether the user 220 was likely facing the microphone 204 when the user 220 spoke. It should be understood that the example apparatus 300 depicted in FIG. 3 may include additional components and that some of the components described herein may be removed and/or modified without departing from the scope of the example apparatus 300 disclosed herein.

[0032] The apparatus 300 may be similar to the apparatus 100 depicted in FIG. 1 and may thus include a processor 302, which may be similar to the processor 102, and a non-transitory computer readable medium 310, which may be similar to the non-transitory computer readable medium 110. The computer readable medium 310 may have stored thereon machine readable instructions 312-322 (which may also be termed computer readable instructions) that the processor 302 may execute.

[0033] The processor 302 may fetch, decode, and execute the instructions 312 to access an audio signal 222 captured by the microphone 204 of a user’s 220 speech. As discussed herein, the microphone 204 may capture the audio signal 222 while the microphone 204 is in the muted state. In addition, the microphone 204 may capture audio signals 222 and may store the captured audio signals 222 in the data store 202. As such, for instance, the processor 302 may access the captured audio signal 222 from the data store 202. In other examples in which the processor 302 is remote from the microphone 204, the processor 302 may access the audio signal 222 via the network 230.

[0034] The processor 302 may fetch, decode, and execute the instructions 314 to determine whether the user 220 was facing the microphone 204 when the user 220 spoke, e.g., generated the captured audio 222. As discussed herein, the processor 302 may perform a spectral and/or frequency content analysis of the accessed audio signal 222 to determine whether the user 220 was facing the microphone while the user 220 spoke. In addition or alternatively, the processor 302 may apply a machine learning model as discussed herein on the captured audio signal 222 to determine whether the user 220 was likely facing the microphone 204 when the user 220 spoke.

[0035] In some examples, the processor 302 may determine whether the microphone 204 was in a muted state when the microphone 204 captured the audio signal 222 of the user’s 220 voice. In these examples, the processor 302 may determine whether the user 220 was facing the microphone 204 when the user 220 spoke based on a determination that the microphone 204 was in the muted state. In addition, the processor 302 may output the captured audio signal 222 without analyzing the spectral or frequency content of the captured audio signal 222 based on a determination that the microphone 204 was not in the muted state, e.g., was in the unmuted state, when the microphone 204 captured the audio signal 222 of the user’s 220 voice.

[0036] The processor 302 may fetch, decode, and execute the instructions 316 to, based on a determination that the user 220 was facing the microphone 204 while the user 220 spoke, unmute the microphone 204. The processor 302 may unmute the microphone 204 based on a determination that the user 220 was facing the microphone 204 while the user spoke as that is likely an indication that the user 220 intended for the user’s speech to be captured.

[0037] The processor 302 may fetch, decode, and execute the instructions 318 to, based on a determination that the user 220 was facing the microphone 204 while the user 220 spoke, output the captured audio signal 222. For instance, the processor 302 may output the captured audio signal 222 to the communication interface 208 such that the communication interface 208 may output the captured audio signal 222 to the remote system 240 via the network 230. In addition or alternatively, the processor 302 may output the captured audio signal 222 to an application or device for the captured audio signal 222 to be stored, translated, or the like.

[0038] The processor 302 may fetch, decode, and execute the instructions 320 to, based on a determination that the user 220 was not facing the microphone 204 while the user 220 spoke, maintain the microphone 204 in the muted state and/or discard the captured audio signal 222. That is, for instance, in addition to maintaining the microphone 204 in the muted state, the processor 302 may not output the captured audio signal 222 based on a determination that the user 220 was not facing the microphone 204 while the user 220 spoke.

[0039] The processor 302 may fetch, decode, and execute the instructions 322 to access a second audio signal 224 captured by a second microphone 226 of the user’s 220 speech while the second microphone 226 is in a muted state, the second microphone 226 being spaced from the microphone 204. For instance, the second microphone 226 may be positioned at least a few inches from the microphone 204 such that sound waves may reach the second microphone 226 at a different time than the microphone 204 in instances in which the user 220 is facing to a side of one or both of the microphone 204 and the second microphone 226. By way of a particular example, the second microphone 226 may be positioned on one side of a laptop computing device and the microphone 204 may be positioned on an opposite side of the laptop.

[0040] The processor 302 may fetch, decode, and execute the instructions 314 to analyze characteristics of the audio signal 222 captured by the microphone 204 and the second audio signal 224 captured by the second microphone 226. For instance, the processor 302 may determine the timing at which the microphone 204 captured the audio signal 222 and the timing at which the second microphone 226 captured the second audio signal 224. In particular, the processor 302 may implement a time difference of arrival technique to detect the direction of the captured audio 222, 224.

[0041] The processor 302 may fetch, decode, and execute the instructions 314 to also determine whether the user 220 was facing the microphone 204 and the second microphone 226 while the user 220 spoke based on the analyzed characteristics. For instance, the processor 302 may determine that the user 220 was facing the microphone 204 and the second microphone 226 when the user 220 spoke based on a determination that the microphone 204 captured the audio signal 222 within a predefined period of time that the second microphone 226 captured the second audio 224. The predefined period of time may be based on testing and/or training using various user speech. In addition, the processor 302 may determine that the user 220 was not facing the microphone 204 and the second microphone 226 based on a determination that the microphone 204 captured the audio signal 222 outside of the predefined period of time that the second microphone 226 captured the second audio 224.
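
As a sketch of the timing comparison described in the two preceding paragraphs (illustrative only; the signals are assumed to be NumPy arrays sampled at the same rate, and the 0.5 ms window is an assumed value rather than one taken from this disclosure), the arrival-time offset between the two microphones may be estimated from the cross-correlation peak and compared against the predefined period:

    import numpy as np

    def arrival_time_difference(mic1_signal, mic2_signal, sample_rate=16000):
        # Full cross-correlation; the location of the peak gives the sample offset
        # between the two captures (zero lag sits at index len(mic2_signal) - 1).
        correlation = np.correlate(mic1_signal, mic2_signal, mode="full")
        offset_samples = int(np.argmax(correlation)) - (len(mic2_signal) - 1)
        return offset_samples / sample_rate    # offset in seconds

    def facing_both_microphones(mic1_signal, mic2_signal, sample_rate=16000,
                                window_s=0.0005):
        # Treat the user as facing both microphones when the captures line up
        # within the predefined time window.
        return abs(arrival_time_difference(mic1_signal, mic2_signal, sample_rate)) <= window_s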

[0042] The processor 302 may fetch, decode, and execute the instructions 314 to further determine whether to unmute the microphone 204 and the second microphone 226 based on both the determination that the user 220 was facing the microphone 204 through analysis of the spectral or frequency content of the accessed audio signal 222 and the determination that the user 220 was facing the microphone 204 and the second microphone 226 based on the analyzed characteristics.

[0043] Various manners in which the apparatuses 100, 300 may be implemented are discussed in greater detail with respect to the methods 400 and 500 depicted in FIGS. 4 and 5. Particularly, FIGS. 4 and 5, respectively, depict example methods 400 and 500 for automatically unmuting a microphone 204 based on a determination as to whether a user 220 was facing the microphone 204 when the user 220 spoke. It should be apparent to those of ordinary skill in the art that the example methods 400 and 500 may represent generalized illustrations and that other operations may be added or existing operations may be removed, modified, or rearranged without departing from the scopes of the methods 400 and 500.

[0044] The descriptions of the methods 400 and 500 are made with reference to the apparatuses 100, 300 illustrated in FIGS. 1-3 for purposes of illustration. It should be understood that apparatuses having other configurations may be implemented to perform the methods 400 and/or 500 without departing from the scopes of the method 400 and/or the method 500.

[0045] At block 402, the processor 102, 302 may access an audio signal 222 captured by a microphone 204 of a user’s 220 voice. At block 404, the processor 102, 302 may determine whether the microphone 204 was in a muted state when the microphone 204 captured the audio signal 222 of the user’s 220 voice. Based on a determination that the microphone 204 was not in the muted state when the microphone 204 captured the audio signal 222 of the user’s 220 voice, at block 406, the processor 102, 302 may output the captured audio signal 222. The processor 102, 302 may output the captured audio signal 222 in any of the manners described herein.

[0046] However, based on a determination that the microphone 204 was in the muted state when the microphone 204 captured the audio signal 222 of the user’s voice, at block 408, the processor 102, 302 may apply a machine learning model on the captured audio signal 222 to determine whether the user 220 was likely facing the microphone 204 when the user 220 spoke. Based on a determination that the user 220 was likely facing the microphone 204 when the user 220 spoke, at block 412, the processor 102, 302 may unmute the microphone 204. In addition, the processor 102, 302 may output the captured audio signal at block 406. However, based on a determination that the user 220 was likely not facing the microphone 204 when the user 220 spoke, at block 414, the processor 102, 302 may discard the captured audio signal 222.
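
The flow of blocks 402-414 may be summarized in a short control-flow sketch (illustrative only; the microphone object, directivity model, and output/discard callables are hypothetical stand-ins for platform-specific functionality):

    def handle_audio_signal(audio_signal, microphone, directivity_model,
                            output_audio, discard_audio):
        # Block 404: if the microphone was not muted, simply output the audio (block 406).
        if not microphone.muted:
            output_audio(audio_signal)
            return
        # Block 408: apply the machine learning model to decide whether the user
        # was likely facing the microphone when the user spoke.
        if directivity_model(audio_signal):
            microphone.muted = False        # block 412: unmute the microphone
            output_audio(audio_signal)      # block 406: output the captured audio
        else:
            discard_audio(audio_signal)     # block 414: discard the captured audio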

[0047] Turning now to FIG. 5, at block 502, the processor 102, 302 may access audio signals captured by a microphone 204 and a second microphone 226. That is, the processor 102, 302 may access an audio signal, e.g., an audio file containing the audio signal, captured by the microphone 204 and a second audio signal 224 captured by the second microphone 226. In some examples, the audio signals 222, 224 may be captured while the microphone 204 and the second microphone 226 are each in a muted state. In addition, the second microphone 226 may be spaced from the microphone 204 as discussed herein.

[0048] At block 504, the processor 102, 302 may analyze characteristics of the audio 222 captured by the microphone 204 and the second audio 224 captured by the second microphone 226. For instance, the processor 102, 302 may analyze the captured audio 222, 224 to determine the timings at which the audio signals 222, 224 were captured.

[0049] At block 506, the processor 102, 302 may determine whether the user 220 was likely facing the microphone 204 and the second microphone 226 while the user 220 spoke based on the analyzed characteristics. For instance, the processor 102, 302 may determine that the user was likely facing the microphone 204 and the second microphone 226 based on the timings being within a predefined time period.

[0050] At block 508, the processor 102, 302 may determine whether the microphone 204 and the second microphone 226 are to be placed into the unmuted state based on both the determination that the user 220 was facing the microphone 204 through application of the machine learning model and the determination that the user 220 was facing the microphone 204 and the second microphone 226 based on the analyzed characteristics. That is, the processor 102, 302 may determine that the user 220 was facing the microphone 204 and the second microphone 226 when the user 220 has both been determined to likely have been facing the microphone 204 when the user spoke through application of the machine learning model and through analysis of the audio signals 222 and 224. However, the processor 102, 302 may determine that the user 220 was not facing the microphone 204 or the second microphone 226 when the user 220 has not been determined to likely have been facing the microphone 204 when the user spoke through application of the machine learning model or through analysis of the audio signals 222 and 224.

[0051] Based on a determination that the user 220 was likely facing the microphone 204 and the second microphone 226 when the user 220 spoke, at block 510, the processor 102, 302 may unmute the microphone 204 and the second microphone 226. However, based on a determination that the user 220 was likely not facing the microphone 204 and the second microphone 226 when the user 220 spoke, at block 512, the processor 102, 302 may discard the captured audio signal 222 and the second captured audio signal 224.

[0052] Some or all of the operations set forth in the methods 400 and/or 500 may be included as utilities, programs, or subprograms, in any desired computer accessible medium. In addition, some or all of the operations set forth in the methods 400 and/or 500 may be embodied by computer programs, which may exist in a variety of forms both active and inactive. For example, they may exist as machine readable instructions, including source code, object code, executable code or other formats. Any of the above may be embodied on a non-transitory computer readable storage medium. Examples of non-transitory computer readable storage media include computer system RAM, ROM, EPROM, EEPROM, and magnetic or optical disks or tapes. It is therefore to be understood that any electronic device capable of executing the above-described functions may perform those functions enumerated above.

[0053] Turning now to FIG. 6, there is shown a block diagram of an example non-transitory computer readable medium 600 that may have stored thereon machine readable instructions that when executed by a processor, may cause the processor to prompt a user 220 to unmute a microphone 204 based on a determination that the user 220 was likely facing the microphone 204 when the user spoke. It should be understood that the non-transitory computer readable medium 600 depicted in FIG. 6 may include additional instructions and that some of the instructions described herein may be removed and/or modified without departing from the scope of the non-transitory computer readable medium 600 disclosed herein. The description of the non-transitory computer readable medium 600 is made with reference to the apparatuses 100, 300 illustrated in FIGS. 1-3 for purposes of illustration.

[0054] The non-transitory computer readable medium 600 may have stored thereon machine readable instructions 602-608 that a processor, such as the processor 102 depicted in FIG. 1, may execute. The non-transitory computer readable medium 600 may be an electronic, magnetic, optical, or other physical storage device that contains or stores executable instructions. The non-transitory computer readable medium 600 may be, for example, Random Access Memory (RAM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a storage device, an optical disc, and the like. The term “non-transitory” does not encompass transitory propagating signals.

[0055] The processor may fetch, decode, and execute the instructions 602 to access an audio file of a user’s speech captured by a microphone 204. The processor may fetch, decode, and execute the instructions 604 to determine whether the microphone 204 was in a muted state when the microphone 204 captured the user’s speech. The processor may fetch, decode, and execute the instructions 606 to, based on a determination that the microphone 204 was in the muted state when the microphone 204 captured the user’s speech, apply a machine learning model on the captured user’s speech to determine whether the user was likely facing the microphone 204 when the user spoke, the machine learning model being generated using a classifier that was trained using training data corresponding to user’s directions during the user’s speech. In addition, the processor may fetch, decode, and execute the instructions 608 to, based on a determination that the user 220 was likely facing the microphone 204 when the user 220 spoke, output an indication for the user 220 to place the microphone 204 into an unmuted state.

[0056] Although not shown in FIG. 6, the non-transitory computer readable medium may also include instructions that may cause the processor to output the captured audio file through a communication interface based on a determination that the user was likely facing the microphone when the user spoke. In addition, or alternatively, the non-transitory computer readable medium may also include instructions that may cause the processor to access a second audio captured by a second microphone 226 of the user’s speech while the second microphone 226 is in a muted state, the second microphone 226 being spaced from the microphone 204, analyze characteristics of the audio captured by the microphone 204 and the second audio captured by the second microphone 226, determine whether the user 220 was facing the microphone 204 and the second microphone 226 while the user 220 spoke based on the analyzed characteristics, and determine whether the microphone 204 is to be placed into the unmuted state or to remain in the muted state based on both the determination that the user 220 was facing the microphone 204 through application of the machine learning model and the determination that the user 220 was facing the microphone 204 and the second microphone 226 based on the analyzed characteristics.

[0057] Although described specifically throughout the entirety of the instant disclosure, representative examples of the present disclosure have utility over a wide range of applications, and the above discussion is not intended and should not be construed to be limiting but is offered as an illustrative discussion of aspects of the disclosure.

[0058] What has been described and illustrated herein is an example of the disclosure along with some of its variations. The terms, descriptions and figures used herein are set forth by way of illustration only and are not meant as limitations. Many variations are possible within the scope of the disclosure, which is intended to be defined by the following claims - and their equivalents - in which all terms are meant in their broadest reasonable sense unless otherwise indicated.