


Title:
AUDIO DEVICE CALIBRATION
Document Type and Number:
WIPO Patent Application WO/2022/164423
Kind Code:
A1
Abstract:
An example non-transitory machine-readable storage medium having stored thereon machine-readable instructions, which when executed by a processor cause the processor to: obtain a user hearing profile for a user of an audio device, the user hearing profile representing the user's preferred sensitivities to audio signals of different frequencies; obtain an audio device profile for the audio device, the audio device profile representing characteristic audio signal responses of the audio device when processing predefined input signals of different frequencies; and calibrate a parameter of an input signal for the audio device based on the audio device profile such that an audio signal response generated by the audio device for the input signal matches the user hearing profile sensitivity.

Inventors:
SUBRAMANIAM RAVI SHANKAR (US)
Application Number:
PCT/US2021/015118
Publication Date:
August 04, 2022
Filing Date:
January 26, 2021
Assignee:
HEWLETT PACKARD DEVELOPMENT CO (US)
International Classes:
H04R25/00; A61B5/12
Domestic Patent References:
WO2019216767A1 (2019-11-14)
Foreign References:
EP3669780A1 (2020-06-24)
US20150281853A1 (2015-10-01)
CN108496285B (2020-06-05)
US20180035216A1 (2018-02-01)
EP3131314B1 (2018-09-26)
US9782131B2 (2017-10-10)
Attorney, Agent or Firm:
CARTER, Daniel J. et al. (US)
Claims:
CLAIMS

1. A non-transitory machine-readable storage medium having stored thereon machine-readable instructions, which, when executed by a processor, cause the processor to: obtain a user hearing profile for a user of an audio device, the user hearing profile representing the user’s preferred sensitivities to audio signals of different frequencies; obtain an audio device profile for the audio device, the audio device profile representing characteristic audio signal responses of the audio device when processing predefined input signals of different frequencies; and calibrate a parameter of an input signal for the audio device based on the audio device profile such that an audio signal response generated by the audio device for the input signal matches the user hearing profile sensitivity.

2. The non-transitory machine-readable storage medium of claim 1, wherein, to obtain the audio device profile, the instructions cause the processor to request the audio device profile as stored on the audio device.

3. The non-transitory machine-readable storage medium of claim 1, wherein, to obtain the audio device profile, the instructions cause the processor to: emit a test signal from the audio device; obtain, from a microphone, a detected audio response representing the test signal emitted from the audio device; and determine the audio device profile based on the detected audio response.

4. The non-transitory machine-readable storage medium of claim 1, wherein the instructions, when executed, further cause the processor to: obtain, from a microphone, a detected ambient noise signal; and adjust the audio signal parameter such that the audio device profile combined with the detected ambient noise signal matches the user hearing profile.

5. The non-transitory machine-readable storage medium of claim 1, wherein the processor is to obtain the user hearing profile based on a user identifier associated with the user.

6. A method comprising: obtaining a user hearing profile for a user of an audio device, the user hearing profile representing, for each frequency in a set of frequencies, the user’s preferred sensitivity to the frequency; obtaining an audio device profile for the audio device, the audio device profile representing, for each frequency in the set of frequencies, a characteristic audio signal response of the audio device when processing the frequency; determining, for each frequency in the set of frequencies, an adjustment factor to apply to the frequency such that an audio signal response generated by the audio device processing the frequency with the adjustment factor matches the user’s preferred sensitivity for the frequency; and adjusting an audio input signal by applying the adjustment factor to each frequency in the audio input signal such that the adjusted audio input signal suits the user hearing profile.

7. The method of claim 6, wherein obtaining the user hearing profile comprises: obtaining a user identifier for the user; and determining whether a stored user hearing profile exists for the user identifier; when a stored user hearing profile exists, retrieving the stored user hearing profile associated with the user identifier; when a stored user hearing profile does not exist, initiating a hearing test for the user to determine the user hearing profile.

8. The method of claim 6, wherein obtaining the audio device profile comprises: detecting an active audio device; determining whether a stored audio device profile exists for the active audio device; when a stored audio device profile exists for the active audio device, retrieving the stored audio device profile associated with the active audio device; and when a stored audio device profile does not exist, initiating an audio device profile test to determine the audio device profile.

9. The method of claim 8, wherein determining whether a stored audio device profile exists comprises requesting an audio device profile from the active audio device.

10. The method of claim 8, wherein initiating the audio device profile test comprises: for each predefined frequency in the set of frequencies: providing a test audio signal of the predefined frequency to the active audio device for processing and emission; capturing, via a microphone, a test audio signal response generated by the active audio device and representing the test audio signal; and defining the captured test audio signal response as the characteristic audio signal response associated with the predefined frequency.

11. The method of claim 6, further comprising, in response to receiving the audio input signal, applying the adjustment factor to the audio input signal in real-time.

12. The method of claim 11, wherein applying the adjustment factor further comprises: obtaining a detected ambient noise signal from a microphone, the ambient noise signal defining a baseline; determining a modified adjustment factor, which when applied to the audio input signal, causes the audio signal response generated by the audio device processing the audio input signal adjusted by the modified adjustment factor to match the user’s preferred sensitivity above the baseline defined by the ambient noise signal; and applying the modified adjustment factor to the audio input signal.

13. A computing device comprising: a memory storing a repository storing user hearing profiles and audio device profiles; a processor interconnected with an audio device and the memory, the processor to: obtain, from the repository, a user hearing profile for a user of the computing device, the user hearing profile representing the user’s preferred sensitivities to audio signals of different frequencies; obtain, from the repository, an audio device profile for the audio device, the audio device profile representing characteristic audio signal responses of the audio device when processing predefined input signals of different frequencies; in response to receiving an audio input signal for emission at the audio device, calibrate the audio input signal by applying an adjustment factor to each frequency in the audio input signal, wherein the adjustment factor is computed such that, when processing the audio input signal with the applied adjustment factor, the audio device generates an audio signal response matching the user’s preferred sensitivities; and provide the calibrated audio input signal to the audio device for emission.

14. The computing device of claim 13, further comprising a microphone and wherein the processor is further to: emit a test signal from the audio device; obtain from the microphone, a detected audio response representing the test signal emitted from the audio device; determine the audio device profile based on the detected audio response; and store the audio device profile in the repository.

15. The computing device of claim 13, further comprising a microphone and wherein the processor is further to: obtain from the microphone, a detected ambient noise signal; and calibrate the audio input signal by applying a modified adjustment factor to each frequency in the audio input signal, wherein the modified adjustment factor is computed such that, when processing the audio input signal with the applied adjustment factor, the audio device generates an audio signal response matching the user’s preferred sensitivities above the detected ambient noise signal; and provide the audio input signal calibrated with the modified adjustment factor to the audio device for emission.

Description:
AUDIO DEVICE CALIBRATION

BACKGROUND

[0001] Each user, when listening to an audio signal, may perceive audio signals differently and may have different preferences for the loudness and power of an audio signal. Audio devices may therefore be calibrated according to a user hearing profile.

BRIEF DESCRIPTION OF THE DRAWINGS

[0002] FIG. 1 is a block diagram of an example non-transitory machine-readable storage medium storing machine-readable instructions for audio device calibration based on a user hearing profile and an audio device profile.

[0003] FIG. 2 is a schematic diagram of an example system for audio device calibration based on a user hearing profile and an audio device profile.

[0004] FIG. 3 is a flowchart of an example method of audio device calibration based on a user hearing profile and an audio device profile.

[0005] FIG. 4 is a flowchart of an example method of obtaining a user hearing profile at block 302 of the method of FIG. 3.

[0006] FIG. 5 is a flowchart of an example method of obtaining an audio device profile at block 304 of the method of FIG. 3.

[0007] FIG. 6 is a flowchart of an example method of audio device calibration based on a user hearing profile, an audio device profile and a detected ambient noise signal.

DETAILED DESCRIPTION

[0008] In order to calibrate audio devices to a user hearing profile, a user hearing test may be conducted to produce a user hearing profile, including, for example, a set of digital signal processing parameters for a sound personalization algorithm. The parameters may be communicated to and utilized by different audio output devices for personalized audio output for the user. However, in addition to different users perceiving audio signals differently, different audio devices processing the audio signals may produce different audio responses given the same audio signal parameters. Since each audio device may produce audio signal responses slightly differently, the calibration of an audio device according to a user hearing profile may be more effective for one audio device than for another.

[0009] According to the presently described example system, calibration of the audio device may be based on both a user hearing profile of a user and an audio device profile of an audio device. Specifically, the system may account for both the user hearing profile of the user and the specific characteristic audio device profile of the audio device to determine the audio signal parameters to be used with the specific audio device. In particular, the audio device profile may be subtracted from the user hearing profile to determine the audio signal parameters. That is, the computing device providing the input signal may control parameters of the input signal, for example by amplifying the input signal, such that the modified input signal provided to the audio device results in an audio response matching the user hearing profile of the user.

[0010] FIG. 1 is a block diagram of an example non-transitory machine-readable storage medium 100 storing machine-readable instructions. The storage medium 100 includes user hearing profile instructions 102, audio device profile instructions 104, and audio signal parameter calibration instructions 106.

[0011] The user hearing profile instructions 102, when executed, cause a processor of a computing device to obtain a user hearing profile for a user of an audio device. The user hearing profile represents the user’s preferences or preferred sensitivities to audio signals of different frequencies. For example, the user hearing profile may define a preferred sound pressure or energy for each frequency in a set of frequencies. The set of frequencies may be representative of a predefined range of frequencies (e.g., low, medium, and high frequency ranges, or other more granularly defined ranges). Thus, the sound pressure defined in the user hearing profile for a given frequency may define the user’s sensitivity to frequencies within the predefined range associated with the given frequency. In other examples, the user hearing profile may also define a degree of separation between sound sources and other psychoacoustic aspects, such as imaging, detail, dynamics, soundstage, speed, timbre, and the like.
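As an illustrative aside (not part of the original disclosure), the two profiles described above can be modeled as mappings from frequency in Hz to a level in decibels, with the audio signal parameters derived by subtracting the audio device profile from the user hearing profile, as suggested in paragraph [0009]. The names and types below are hypothetical.

```python
from typing import Dict

# Illustrative profile representation: frequency (Hz) -> level (dB).
UserHearingProfile = Dict[int, float]
AudioDeviceProfile = Dict[int, float]

def derive_signal_parameters(user: UserHearingProfile,
                             device: AudioDeviceProfile) -> Dict[int, float]:
    """Per-frequency gain (dB) obtained by subtracting the device profile
    from the user hearing profile (paragraph [0009])."""
    return {freq: user[freq] - device.get(freq, 0.0) for freq in user}

# Example: the user prefers 60 dB at 110 Hz and the device characteristically
# produces 50 dB, so the 110 Hz component is boosted by 10 dB.
gains = derive_signal_parameters({110: 60.0}, {110: 50.0})
assert gains[110] == 10.0
```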
[0012] The processor may obtain the user hearing profile by retrieving the user hearing profile stored in a memory of the computing device based on a user identifier associated with the user. For example, the user may input their user identifier to indicate their identity to the computing device. In other examples, the processor may execute a hearing test taken by the user to obtain the user hearing profile.

[0013] The audio device profile instructions 104, when executed, cause the processor to obtain an audio device profile for the audio device. The audio device profile represents characteristic audio signal responses of the audio device when processing predefined input signals of different frequencies. In particular, each audio device may process the same input signal of the same frequency in a slightly different manner based on the construction and tuning of the mechanical and electronic components of the audio device. Thus, each audio device may generate a slightly different audio signal response given the same input signal. The audio device profile therefore tracks the characteristic audio signal response for each frequency in a set of frequencies. The audio signal response may be defined for a given frequency by the sound pressure or energy produced based on an input signal having the given frequency.

[0014] The processor may obtain the audio device profile by requesting and receiving the audio device profile from the audio device itself. That is, the audio device may store an audio device profile and may have communications capabilities (e.g., a Bluetooth device, wireless headphones, or other smart audio devices). Accordingly, the audio device may provide the audio device profile to the processor upon request.

[0015] In other examples, the processor may determine the audio device profile by initiating an audio device profile test. In particular, the processor may emit a test signal from the audio device and obtain, at a microphone associated with the computing device, a detected audio response representing the test signal emitted from the audio device. The detected audio response therefore represents the characteristic audio signal response for the frequency associated with the test signal. The characteristic audio signal response for the frequency may then be included in the audio device profile. The processor may repeat the emission of test signals and detection of audio responses until a threshold range of frequencies has been defined in the audio device profile.

[0016] The audio signal parameter calibration instructions 106, when executed, cause the processor to calibrate a parameter of an input signal for the audio device based on the audio device profile. Specifically, the parameter is calibrated such that an audio signal response generated by the audio device for the input signal (i.e., the audio signal response generated when the audio device processes the input signal) matches the user hearing profile sensitivity for the input signal. For example, the processor may adjust the amplitude of the input signal provided to the audio device.

[0017] For example, if the input signal has a frequency of 110 Hz, and the user’s hearing profile defines a sensitivity of about 60 decibels at 110 Hz, while the audio device profile defines an audio signal response of about 50 decibels at 110 Hz, the amplitude of the input signal may be calibrated to amplify the resulting audio signal response to match the user’s preferred sensitivity of 60 decibels. In other examples, if the user’s hearing profile defines a lack of sensitivity at a given frequency (i.e., the user does not perceive audio signals at the given frequency), the frequency and amplitude of the input signal may be shifted to be within the user’s hearing range, as defined by the user’s hearing profile. Thus, the resulting audio signal response generated by the audio device in response to the calibrated input signal matches the user hearing profile sensitivity for the input signal.
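A minimal sketch of the amplitude calibration described in paragraphs [0016] and [0017], assuming the input signal is a sampled time-domain waveform scaled by the linear equivalent of the decibel difference; the function name and sample rate are illustrative rather than taken from the disclosure.

```python
import numpy as np

def calibrate_amplitude(signal: np.ndarray, gain_db: float) -> np.ndarray:
    """Scale a time-domain input signal by the linear factor equivalent to
    gain_db decibels (20 dB per factor of 10 in amplitude)."""
    return signal * (10.0 ** (gain_db / 20.0))

# Paragraph [0017] example: preferred sensitivity of 60 dB at 110 Hz versus a
# characteristic device response of 50 dB, i.e. a 10 dB boost (about 3.16x).
sample_rate = 48000
t = np.arange(sample_rate) / sample_rate
tone_110hz = np.sin(2 * np.pi * 110.0 * t)
calibrated = calibrate_amplitude(tone_110hz, 60.0 - 50.0)
```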
[0018] In some examples, the calibration of the input signal may be extended to further calibrate the perceived audio signal response in view of ambient or background noise. For example, the processor may obtain, from a microphone of the computing device, a detected ambient noise signal and further adjust or calibrate the audio signal parameter such that the audio signal response combined with the detected ambient noise signal matches the user hearing profile sensitivity for the input signal. For example, in the above example, the microphone may detect an ambient noise signal defining a baseline. Accordingly, the processor may further adjust the amplitude of the input signal such that the resulting audio signal response matches the user’s preferred sensitivity of 60 decibels above the baseline defined by the detected ambient noise signal. In some examples, the ambient noise adjustment may be performed as a continuous process (e.g., in a control loop) to continuously adjust based on changing ambient conditions.

[0019] FIG. 2 depicts an example system 200 for calibrating audio devices based on user hearing profiles and audio device profiles. The system 200 includes a computing device 202 such as a desktop computer, a laptop computer, a mobile device, or other suitable computing device. The computing device 202 includes a processor 204, a speaker 208, a microphone 210, and a non-transitory machine-readable storage medium 100.

[0020] The processor 204 may include a central processing unit (CPU), a microcontroller, a microprocessor, a processing core, a field-programmable gate array (FPGA), or similar device capable of executing machine-readable instructions. The processor 204 may cooperate with a memory to execute instructions. The memory may include a non-transitory machine-readable storage medium, such as the storage medium 100, that may be an electronic, magnetic, optical, or other physical storage device that stores executable instructions. The machine-readable storage medium may include, for example, random access memory (RAM), read-only memory (ROM), electrically-erasable programmable read-only memory (EEPROM), flash memory, a storage drive, an optical disc, and the like. The memory may further store a repository 206 storing user hearing profile definitions for use in the audio device calibration operation described herein.

[0021] The speaker 208 is to emit audio signals and the microphone 210 is to capture audio signals from the vicinity of the computing device 202.

[0022] The system 200 may also include an external speaker 212 and a headset 214 interconnected with the computing device 202, and more particularly the processor 204, for example via a connection through a port or hub of the computing device 202. In some examples, only one of the external speaker 212 and the headset 214 may be connected to the computing device 202 at a given time. In particular, the external speaker 212 and/or the headset 214 are audio devices which may receive audio input signals from the processor 204 and process the audio input signals to generate audio responses representing the audio input signals.

[0023] Referring now to FIG. 3, a flowchart of an example method 300 is depicted. The method 300 will be described in conjunction with its performance in the system 200, and in particular by the processor 204, for example via execution of the instructions 102, 104, and 106. In other examples, the method 300 may be performed by other suitable systems or devices.
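One way to picture the repository 206 (which stores user hearing profiles and, per later paragraphs and claim 13, audio device profiles) is the hypothetical in-memory sketch below; the class and method names are assumptions for illustration only.

```python
from typing import Dict, Optional

Profile = Dict[int, float]  # frequency (Hz) -> level (dB)

class ProfileRepository:
    """Hypothetical in-memory stand-in for the repository 206."""

    def __init__(self) -> None:
        self._user_profiles: Dict[str, Profile] = {}
        self._device_profiles: Dict[str, Profile] = {}

    def get_user_profile(self, user_id: str) -> Optional[Profile]:
        return self._user_profiles.get(user_id)

    def store_user_profile(self, user_id: str, profile: Profile) -> None:
        self._user_profiles[user_id] = profile

    def get_device_profile(self, device_id: str) -> Optional[Profile]:
        return self._device_profiles.get(device_id)

    def store_device_profile(self, device_id: str, profile: Profile) -> None:
        self._device_profiles[device_id] = profile
```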
[0024] At block 302, the processor 204 obtains a user hearing profile of a user of the computing device 202. This may be performed, for example upon initialization of the computing device 202, so that any audio cues or indicators generated may be calibrated to the user hearing profile. In other examples, block 302 may be triggered upon initiation of an audio-specific application or program (e.g., a music player or the like).

[0025] Specifically, the user hearing profile represents the user’s preferences or preferred sensitivities to audio signals of different frequencies. For example, the user hearing profile may specify a preferred sound pressure or energy for each frequency or range of frequencies in a predefined set of frequencies or range of frequencies. For example, referring to FIG. 4, an example method 400 of obtaining the user hearing profile is depicted.

[0026] At block 402, the processor 204 obtains a user identifier for the user of the computing device 202. For example, the processor 204 may obtain the user identifier by automatically detecting an active account identifier used to log in to the computing device 202. In other examples, the processor 204 may obtain the user identifier by receiving input from the user. For example, the user may provide input at a graphical user interface of an application for the calibration of audio devices using user hearing profiles and audio device profiles.

[0027] At block 404, the processor 204 determines whether a stored user hearing profile exists for the user identifier. For example, the processor 204 may determine whether the repository 206 has a stored user hearing profile associated with the user identifier received at block 402. In other examples, the processor 204 may request the user hearing profile from a repository external to the computing device 202, such as from a central server (e.g., a cloud-based service) or the like. If the determination at block 404 is affirmative, the processor 204 proceeds to block 406. If the determination is negative, the processor 204 proceeds to block 408.

[0028] At block 406, having determined that a stored user hearing profile exists, the processor 204 retrieves the stored user hearing profile associated with the user identifier. For example, the processor 204 may retrieve the user hearing profile for the user from the repository 206. In other examples, the processor 204 may request and receive the user hearing profile from an external repository. The processor 204 may then proceed to block 304 of the method 300.

[0029] At block 408, having determined that a user hearing profile is not stored for the user identifier, the processor 204 may initiate a hearing test for the user. For example, the processor 204 may control a display of the computing device to display prompts and instructions for the user to carry out the hearing test.
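Putting blocks 402 through 410 together, the profile lookup might be sketched as follows; `run_hearing_test` stands in for the hearing test of block 408 (sketched after paragraph [0033] below), and the repository interface matches the hypothetical sketch above.

```python
# Illustrative flow for method 400: retrieve a stored profile for the user
# identifier, or fall back to running a hearing test and storing the result.
def obtain_user_hearing_profile(user_id, repository, run_hearing_test):
    profile = repository.get_user_profile(user_id)   # blocks 404-406: look up stored profile
    if profile is not None:
        return profile
    profile = run_hearing_test()                     # block 408: run a hearing test
    repository.store_user_profile(user_id, profile)  # block 410: store for future use
    return profile
```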
[0030] The hearing test may be designed to identify the user’s preferences or preferred sensitivities to audio signals of different frequencies. Thus, the processor 204 may cause one of the audio devices (e.g., the speaker 208) to emit an audio signal of a predefined frequency (or a range of frequencies emitted in a single audio sample) and receive an indication from the user as to the user’s preferred sound pressure or energy of the audio signal at that frequency (e.g., by dragging a slider until the audio signal reaches a comfortable level, playing the audio signal sequentially at different sound pressures and receiving a selection, or the like). The processor 204 may repeat the emission of an audio signal and receipt of a preferred sound pressure or energy for each frequency in a predefined range or set of frequencies.

[0031] The hearing test may further be combined with other measurements, including physiological and brain function measured in response to different frequencies or audio samples. Such measurements may correlate user-indicated preferences with objective measurements of activity.

[0032] Based on the indicated preferences, the processor 204 may then define a user hearing profile, associating each tested frequency with the user-indicated preferred sensitivity to that frequency. In some examples, the tested frequencies may represent a range of frequencies, and hence the user hearing profile may associate the range of frequencies with the user-indicated preferred sensitivity to the tested frequency. That is, for example, if the tested frequency is 100 Hz and the user-indicated preferred sensitivity to the tested frequency is 60 decibels, the processor 204 may define frequencies within a range of 90 Hz to 110 Hz to correspond to the user-indicated preferred sensitivity of 60 decibels, thereby reducing the number of frequencies to test during the hearing test to obtain a user hearing profile for a full spectrum of frequencies. The processor 204 may additionally obtain an audio device profile of the audio device used to perform the hearing test and subtract the audio device profile from the indicated preferences to obtain a device-independent user hearing profile. As will be appreciated, the user hearing profile may represent both the user’s hearing characteristics and physical limitations as well as the user’s preferences.

[0033] At block 410, after completing the hearing test, the processor 204 stores the user hearing profile in association with the user identifier in the repository 206 for future use. In some examples, in addition to storing the user hearing profile locally in the repository 206, the processor 204 may additionally transmit the user hearing profile to a central server (e.g., a cloud-based service) to be stored. The processor 204 may then return to block 304 of the method 300.
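The hearing test of paragraphs [0030] to [0032] might be summarized by the following sketch, which records the user's preferred level per test tone, subtracts the test device's response to obtain a device-independent profile, and lets each tested frequency stand in for a surrounding band; the helper `play_tone_and_get_preferred_level` and the 10 Hz half-band are assumptions for illustration.

```python
def run_hearing_test(test_frequencies, play_tone_and_get_preferred_level,
                     test_device_profile, band_halfwidth_hz=10):
    """Illustrative hearing test loop (paragraphs [0030]-[0032])."""
    profile = {}
    for freq in test_frequencies:
        # Emit a tone at `freq` and let the user indicate a comfortable level (dB).
        preferred_db = play_tone_and_get_preferred_level(freq)
        # Subtract the test device's characteristic response so the stored
        # profile is device-independent (paragraph [0032]).
        preferred_db -= test_device_profile.get(freq, 0.0)
        # Let the tested frequency stand in for a surrounding band,
        # e.g. 100 Hz covers 90 Hz to 110 Hz.
        for f in range(freq - band_halfwidth_hz, freq + band_halfwidth_hz + 1):
            profile[f] = preferred_db
    return profile
```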
[0034] At block 304, the processor 204 obtains an audio device profile for the audio device. For example, the processor 204 may perform this step after or concurrently with obtaining the user hearing profile upon initialization of the computing device 202, so that any audio cues or indicators generated may be calibrated according to the audio device profile. In other examples, block 304 may be triggered after or concurrently with obtaining the user hearing profile upon initiation of an audio-specific application or program.

[0035] Specifically, the audio device profile represents characteristic audio signal responses of the audio device when processing predefined input signals of different frequencies. That is, the audio device profile may define, for a given frequency in a set of frequencies, the characteristic audio signal response generated by the audio device when processing the given frequency. For example, referring to FIG. 5, an example method 500 of obtaining the audio device profile of an audio device is depicted.

[0036] At block 502, the processor 204 detects an active audio device. For example, if either the speaker 212 or the headset 214 is connected to the computing device 202, audio signals may be emitted from at least two sources, namely the connected speaker 212 or headset 214, and the built-in speaker 208. Since each audio device generates different audio responses, the processor 204 determines which is the active audio device (i.e., the audio device which will be used to emit the audio signals). The processor 204 may identify the active audio device, for example based on an identification of the active audio device by the native operating system.

[0037] At block 504, the processor 204 may request an audio device profile from the active audio device. For example, the headset 214 may be a Bluetooth headset having the capability to store an audio device profile thereon.

[0038] At block 506, the processor 204 determines whether an audio device profile is received. If an audio device profile is received, the processor 204 has obtained the audio device profile of the active audio device and proceeds to block 306 of the method 300. If an audio device profile is not received, the processor 204 proceeds to block 508. For example, if the Bluetooth headset 214 is not calibrated with an audio device profile, the processor 204 may receive a response that no audio device profile is available.

[0039] Similarly, if the active audio device lacks processing capabilities, such as the analog speaker 212, the processor 204 may receive no response to the request for an audio device profile, and hence may determine that no audio device profile is available and/or will be received. In some examples, rather than requesting an audio device profile from an analog audio device, the processor 204 may determine at block 502 that the active audio device is an analog device (e.g., based on identifiers and parameters of the active audio device), and proceed directly to block 508.

[0040] At block 508, having determined that a stored audio device profile will not be provided by the active audio device itself, the processor 204 determines whether a stored audio device profile exists for the active audio device. For example, the processor 204 may determine whether the repository 206 has a stored audio device profile associated with an identifier of the audio device. In other examples, the processor 204 may request the audio device profile from a repository external to the computing device 202, such as from a central server (e.g., a cloud-based service) or the like. If the determination at block 508 is affirmative, the processor 204 proceeds to block 510. If the determination is negative, the processor 204 proceeds to block 512.

[0041] At block 510, having determined that a stored audio device profile exists, the processor 204 retrieves the stored audio device profile. For example, the processor 204 may retrieve the audio device profile from the repository 206 or from an external repository. The processor 204 may then proceed to block 306 of the method 300.
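Blocks 502 through 514 might be pictured as the following fallback chain; `device.request_profile`, `device.id`, and `run_device_profile_test` are illustrative stand-ins rather than interfaces defined in the disclosure (the device profile test itself is sketched after paragraph [0045] below).

```python
# Illustrative flow for method 500: ask the active device for its profile,
# fall back to the repository, and as a last resort run the profile test.
def obtain_audio_device_profile(device, repository, run_device_profile_test):
    profile = device.request_profile()                   # blocks 504-506: ask the device itself
    if profile is not None:
        return profile
    profile = repository.get_device_profile(device.id)   # blocks 508-510: check the repository
    if profile is not None:
        return profile
    profile = run_device_profile_test(device)            # block 512: measure the device
    repository.store_device_profile(device.id, profile)  # block 514: store for future use
    return profile
```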
[0042] At block 512, having determined that an audio device profile is not stored for the active audio device, the processor 204 may initiate an audio device profile test for the audio device. The processor 204 may control a display of the computing device to indicate that such a test is being performed. In particular, the audio device profile test may be performed with no user interaction, hence the processor 204 may cause the display to indicate to the user not to interfere with the active audio device while the test is being performed.

[0043] The audio device profile test may be designed to identify the characteristic audio signal responses of the audio device when processing audio signals of different frequencies. Thus, the processor 204 may provide a test audio signal of a predefined frequency to the active audio device for processing and emission, and capture, via the microphone 210, the corresponding test audio signal response generated by the active audio device. The processor 204 may repeat the input of audio signals and capture of audio signal responses for each frequency in a predefined range or set of frequencies.

[0044] Based on the audio signal responses detected by the microphone 210, the processor 204 defines an audio device profile, associating each tested frequency with the characteristic audio signal response detected for that frequency. That is, the processor 204 defines the captured test audio signal response as the characteristic audio signal response associated with the tested frequency. As with the user hearing profile, in some examples, detected audio signal responses may be assumed to apply to a range of frequencies. In other examples, since the audio device profile test does not involve user interaction and may be performed autonomously by the computing device 202, the audio device profile may be more granular (i.e., defining narrower ranges of frequencies to which a single detected audio signal response applies).

[0045] At block 514, after completing the audio device profile test, the processor 204 stores the audio device profile in association with an audio device identifier in the repository 206 for future use. In some examples, in addition to storing the audio device profile locally in the repository 206, the processor 204 may additionally transmit the audio device profile to an external repository or central server. The processor 204 may then return to block 306 of the method 300.
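A sketch of the audio device profile test of block 512 (paragraphs [0043] and [0044]), under the assumption of simple tone-by-tone measurement; `emit_tone` and `measure_response_db` are hypothetical interfaces to the active audio device and the microphone 210.

```python
# Illustrative device profile test: emit a test tone at each predefined
# frequency and record the level captured by the microphone as the device's
# characteristic response.
def run_device_profile_test(device, microphone, test_frequencies):
    profile = {}
    for freq in test_frequencies:
        device.emit_tone(freq)                                 # provide the test signal
        profile[freq] = microphone.measure_response_db(freq)   # captured response (dB)
    return profile
```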
[0046] At block 306, the processor 204 determines, for each frequency in a predefined set of frequencies, an adjustment factor to apply to the given frequency for the user-device pairing (i.e., the specific pairing of the user hearing profile obtained at block 302 and the audio device profile obtained at block 304). Specifically, the adjustment factor computed for a given frequency, when applied to an audio input signal, causes the audio device to generate an audio signal response that matches the user’s preferred sensitivity for the given frequency. That is, when processing the frequency with the applied adjustment factor, the audio device generates an audio signal response matching the user’s preferred sensitivity, rather than the characteristic audio response.

[0047] For example, the adjustment factor may be computed based on a difference between the preferred sensitivity of the user and the characteristic audio response. For example, considering a frequency of 110 Hz, the user’s hearing profile may define a sensitivity of about 60 decibels at 110 Hz, while the audio device profile may define a characteristic audio signal response of about 50 decibels at 110 Hz. Accordingly, the adjustment factor may be computed as 10 decibels. Similarly, if, at a frequency of 150 Hz, the user’s hearing profile defines a sensitivity of about 40 decibels, while the audio device profile defines a characteristic audio signal response of about 50 decibels, the adjustment factor may be computed as -10 decibels.

[0048] The processor 204 may repeat the determination of an adjustment factor for each frequency in the predefined set of frequencies to obtain an adjustment profile for the user-device pairing.

[0049] At block 308, in response to receiving an audio input signal to be emitted by the audio device (e.g., a system notification, from an audio-specific application such as a music or media player, or the like), the processor 204 calibrates the audio input signal to suit the user hearing profile. In particular, the processor 204 adjusts the audio input signal using the adjustment factors determined at block 306. That is, for each frequency in the audio input signal, the processor 204 applies the corresponding adjustment factor and provides the calibrated or adjusted audio input signal (i.e., the audio input signal with the applied adjustment factor) to the active audio device for emission. After processing the adjusted audio input signal, the active audio device therefore generates an audio signal response corresponding to the user hearing profile.

[0050] Since each user has a different user hearing profile and each audio device has a different audio device profile, the method 300 may be repeated for each user-device pairing. That is, the audio device profiles of the built-in speaker 208, the external speaker 212 and the headset 214 are all different (i.e., each of the built-in speaker 208, the external speaker 212 and the headset 214 may produce different characteristic audio responses when processing the same frequency input signal), and hence the adjustment factors or the adjustment profile to be applied may be different, even when the audio devices are used by the same user with the same user hearing profile.

[0051] In some examples, rather than computing an adjustment profile to be applied to an audio input signal, the processor 204 may perform blocks 306 and 308 in real-time when processing the audio input signal. That is, the processor 204 may determine the appropriate adjustment factor to apply to the audio input signal as the audio input signal is being provided to the active audio device for emission.

[0052] In such examples, the processor may additionally extend the calibration of the audio input signal to account for ambient or background noise. For example, referring to FIG. 6, an example method 600 of calibrating an audio signal in view of ambient noise is depicted.

[0053] At block 602, the processor 204 obtains, from the microphone 210, a detected ambient noise signal.

[0054] At block 604, the processor 204 determines whether the active audio device includes noise-cancelling capabilities. If the determination is affirmative, the processor 204 may proceed to block 306 to determine the adjustment factor in the regular manner (i.e., in view of the user hearing profile and the audio device profile). If the determination is negative, the processor 204 proceeds to block 606.

[0055] At block 606, the processor 204 determines a modified adjustment factor in view of the user hearing profile, the audio device profile, and the detected ambient noise signal obtained at block 602. Specifically, the ambient noise signal may represent a baseline, and hence the processor 204 may compute the modified adjustment factor to cause the audio device to generate an audio signal response matching the user’s preferred sensitivity above the baseline defined by the ambient noise signal. That is, the ambient noise signal subtracts from the audio signal response, and hence the audio input signal is additionally adjusted by a factor corresponding to the ambient noise signal in order to generate an audio signal response matching the user’s preferred sensitivity.

[0056] For example, if the input signal has a frequency of 110 Hz, and the user’s hearing profile defines a sensitivity of about 60 decibels at 110 Hz, while the audio device profile defines an audio signal response of about 50 decibels at 110 Hz and the ambient noise signal includes a signal of about 10 decibels at 110 Hz, the effective audio signal response of the audio device is about 40 decibels. That is, the ambient noise signal works to partially cancel out the audio signal response of the audio device profile. Accordingly, the processor 204 may compute a modified adjustment factor of about 20 decibels (i.e., rather than 10 decibels without the ambient noise signal) and apply the modified adjustment factor to increase the amplitude of the input signal to match the user’s preferred sensitivity of 60 decibels above the ambient noise baseline of 10 decibels.
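The modified adjustment factor of block 606 can be illustrated with the numbers from paragraph [0056]; this is a worked example only, not the claimed implementation.

```python
def modified_adjustment_db(preferred_db, device_response_db, ambient_db):
    """Illustration of block 606: the ambient noise raises the baseline,
    so the adjustment grows by the ambient level at that frequency."""
    return (preferred_db - device_response_db) + ambient_db

# Paragraph [0056]: 60 dB preferred sensitivity, 50 dB device response, and
# 10 dB of ambient noise at 110 Hz give a modified adjustment of 20 dB
# (instead of 10 dB without the ambient noise signal).
assert modified_adjustment_db(60.0, 50.0, 10.0) == 20.0
```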
[0057] In other examples, rather than amplifying the input signal, audio characteristics of the ambient noise signal may be used to mitigate noise at certain frequencies of the ambient noise signal to effectively reduce the baseline defined by the ambient noise signal.

[0058] At block 608, the processor 204 applies the modified adjustment factor to calibrate the audio input signal to suit the user hearing profile. The method 600 may be applied in real-time when processing an audio input signal for emission at the active audio device to calibrate for changing ambient noise levels.

[0059] The scope of the claims should not be limited by the above examples, but should be given the broadest interpretation consistent with the description as a whole.