


Title:
USER INTERFACE FOR DYNAMICALLY ADJUSTING SETTINGS OF HEARING INSTRUMENTS
Document Type and Number:
WIPO Patent Application WO/2021/026126
Kind Code:
A1
Abstract:
A user interface for determining a setting of a hearing instrument is disclosed. The user interface allows a user to configure a hearing instrument using control indicators and markers that may be manipulated. The hearing instruments may reference a mapping that links marker values to hearing thresholds with respect to particular frequency bands. Processors of the hearing instruments or of computing devices paired to a hearing instrument may use the hearing thresholds of a user identified across multiple frequency bands to determine a setting for configuring the hearing instruments.

Inventors:
RECKER KARRIE (US)
Application Number:
PCT/US2020/044847
Publication Date:
February 11, 2021
Filing Date:
August 04, 2020
Assignee:
STARKEY LABS INC (US)
International Classes:
H04R25/00
Domestic Patent References:
WO2010091480A1, 2010-08-19
Foreign References:
JP2012182647A, 2012-09-20
US5835611A, 1998-11-10
US20170046120A1, 2017-02-16
US201962882878P, 2019-08-05
Attorney, Agent or Firm:
ABEYTA, Derek M. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising: providing a user interface by a device configured to interface with a hearing instrument, the user interface comprising a plurality of control indicators that each correspond to a frequency band, the control indicators each comprising markers that are individually positioned along the control indicators to indicate marker values; determining an initial marker value for a first control indicator based at least in part on an initial position of a first marker; determining that a change in state has occurred with respect to the initial marker value; determining a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker; accessing a mapping that identifies one or more relationships between marker values and hearing thresholds; identifying, from the mapping, a hearing threshold that corresponds to the first adjusted marker value; determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold; and storing the one or more settings for the hearing instrument to a memory device.

2. The method of claim 1, further comprising: adjusting a number of control indicators based on input received from a user.

3. The method of any of claims 1-2, wherein the control indicators comprise interactive graphical units presented to a user via the user interface, wherein the markers are configured to be slid along the control indicators.

4. The method of any of claims 1-3, wherein the mapping links marker values to hearing thresholds with respect to the frequency band that corresponds to the first control indicator.

5. The method of any of claims 1-4, wherein the mapping links marker values to hearing thresholds with respect to each of the frequency bands that correspond to the plurality of control indicators.

6. The method of any of claims 1-5, wherein identifying the hearing threshold further comprises estimating, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.

7. The method of any of claims 1-6, further comprising: in response to receiving an indication to further adjust the first marker, determining a second adjusted marker value; and updating the one or more settings based at least in part on the second adjusted marker value.

8. The method of any of claims 1-7, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to a particular frequency band.

9. The method of any of claims 1-8, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within a single frequency region or band.

10. The method of any of claims 1-9, further comprising: providing a second user interface identifying at least one additional control indicator and at least one additional marker; detecting an adjustment to the at least one additional marker; and updating the one or more settings in response to detecting the adjustment to the at least one additional marker.

11. The method of any of claims 1-10, further comprising: determining a hearing threshold step-size that corresponds to a particular marker value adjustment; and detecting an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.

12. The method of any of claims 1-11, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.

13. The method of any of claims 1-12, further comprising: applying a transfer function to the one or more settings to determine a second set of one or more settings.

14. A device comprising means for performing the methods of any of claims 1-13, the device comprising: a memory configured to store the mapping; and one or more processors coupled to the memory, and configured to perform the methods of any of claims 1-13.

15. A non-transitory computer-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to perform the methods of any of claims 1-13.

Description:
USER INTERFACE FOR DYNAMICALLY ADJUSTING SETTINGS OF HEARING INSTRUMENTS

[0001] This application claims the benefit of U.S. Provisional Application No. 62/882,878, filed August 5, 2019, the entire contents of which are hereby incorporated by reference as though set forth fully herein.

TECHNICAL FIELD

[0002] This disclosure relates to hearing instruments.

BACKGROUND

[0003] Hearing instruments are devices designed to be worn on, in, or near one or more of a user’s ears. Common types of hearing instruments include hearing assistance devices (e.g., “hearing aids”), earbuds, headphones, hearables, cochlear implants, and so on. In some examples, a hearing instrument may be implanted or osseointegrated into a user. Some hearing instruments include additional features beyond just environmental sound-amplification. For example, some modern hearing instruments include advanced audio processing for improved device functionality, controlling and programming the devices, and beamforming, and some can even communicate wirelessly with external devices including other hearing instruments (e.g., for streaming media).

SUMMARY

[0004] Over-the-counter (OTC) and direct-to-consumer (DTC) hearing aid users face many technical challenges with existing self-fitting strategies. Strategies that are intuitive to these users are desirable. In this disclosure, techniques that allow for on-the-fly configuration of hearing instruments using information received from a user are introduced. The disclosed techniques allow hearing instruments, as well as other companion devices, to identify hearing thresholds of the user at various frequencies. In turn, the hearing instruments and other devices can determine fitting information for the hearing instruments without having to display to the user information regarding each parameter that may be adjusted to achieve a hearing instrument setting capable of accommodating the hearing thresholds of the user. The disclosed techniques allow for the conservation of memory and power resources, for example, by limiting the amount of information necessary to be displayed to the user and the amount of input necessary to be received from the user in order for a hearing instrument to determine such settings.

[0005] This disclosure describes techniques for providing a user interface (UI) that allows a user to dynamically adjust settings of a hearing instrument. For example, settings for hearing instruments may be adjusted to correspond to hearing thresholds for a user at various frequency bands. A hearing threshold of a user corresponds to the minimum setting of a hearing instrument at which a user can perceive sound with respect to a frequency band, region, or discrete frequencies. For example, a hearing threshold of a user may correspond to the minimum setting of a hearing instrument at which a user can perceive sound with respect to a frequency band and/or an adjacent region thereto. In other words, a hearing threshold of a user may correspond to the minimum setting at which a user can perceive sound at least with respect to a particular frequency band (e.g., with latitude on either side of the band to encompass outer frequency regions that encompass or surround the band).

[0006] A hearing threshold may be expressed in terms of decibels (dB). A user may use the UI to identify hearing thresholds at one or more frequency bands, where a frequency band includes a region of frequencies, typically defined by a lower boundary (e.g., 1000 Hertz (Hz)) and an upper boundary (e.g., 2000 Hz).

[0007] A hearing instrument may be set to one or more profile settings for different frequencies in order for the user to perceive sound at each frequency. For example, a user having different hearing thresholds at different frequencies may use the UI to determine a setting of the hearing instrument tailored to correct for the hearing thresholds of the user.

[0008] In some examples, a profile setting for a hearing instrument may include a combination of adjustable audio settings. In some examples, a hearing instrument may have one or more profile settings, in accordance with techniques described herein. For example, a hearing instrument may have a first profile setting directed to a first set of frequencies (e.g., frequencies within and adjacent to a first frequency band) and/or a second profile setting directed to a second set of frequencies (e.g., frequencies within and adjacent to a second frequency band).

[0009] In some examples, a single profile setting may correspond to a particular environment. A person skilled in the art will understand that a profile setting may be determined for particular environments, discrete frequencies, frequency regions (e.g., frequencies within and adjacent to a frequency band), different hearing instruments (e.g., left hearing instrument and/or right hearing instrument), or any combination thereof.

[0010] The one or more profile settings may include a combination of adjustments to gain, compression, and frequency response for sounds across an entire frequency spectrum, where the entire frequency spectrum may be separated into identifiable frequency bands and where each profile setting may target individual frequency bands. In some examples, a profile setting may include the activation of certain audio features, such as frequency translation, noise reduction, and/or directional microphones.
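A profile setting of this kind could be sketched as a small structure combining the per-band parameters and feature activations named above. The field names and values are invented for illustration; the disclosure does not specify a data layout:

```python
# Hypothetical sketch of a profile setting: per-band audio parameters plus
# activated features, keyed by (lower_hz, upper_hz) band boundaries.
from dataclasses import dataclass, field

@dataclass
class BandProfile:
    gain_db: float             # gain applied within the band
    compression_ratio: float   # e.g. 2.0 means 2:1 compression
    features: set = field(default_factory=set)  # e.g. {"noise_reduction"}

profile_setting = {
    (250, 1000): BandProfile(gain_db=10.0, compression_ratio=1.5),
    (1000, 4000): BandProfile(gain_db=20.0, compression_ratio=2.0,
                              features={"noise_reduction"}),
}
```

Each band thus carries its own gain, compression, and feature set, matching the idea that each profile setting may target individual frequency bands.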

[0011] Gain generally refers to the volume of sound within a frequency band. For example, gain may refer to the difference between the input dB Sound Pressure Level (SPL) and the output dB SPL of a signal.
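As a minimal illustration of this definition (the SPL values are invented for the example), gain is simply the output level minus the input level:

```python
# Gain as defined above: the difference between the output dB SPL and the
# input dB SPL of a signal.
def gain_db(input_db_spl: float, output_db_spl: float) -> float:
    return output_db_spl - input_db_spl

# A 65 dB SPL input amplified to an 85 dB SPL output implies 20 dB of gain.
```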

[0012] Compression generally refers to how a hearing instrument redistributes loudness within a frequency band. For instance, a user may not be able to perceive sounds in a particular frequency that are quieter than a particular threshold. Thus, a hearing instrument can increase the volume of those quieter sounds to approximate that threshold. At the same time, however, the hearing instrument cannot simply allow sounds that are naturally at the particular threshold to remain at the particular threshold because that would eliminate the distinction in volume between sounds naturally at the threshold and sounds that are naturally below the threshold. To keep the distinction, the hearing instrument may increase the volume of sounds that naturally occur at the threshold to something louder. However, the hearing instrument may not increase all sounds at a frequency by the same amount because that would result in uncomfortably loud sounds in that frequency (e.g., sounds that are already loud would become too loud). So, the hearing instrument can compress the volume range for the frequency into a range of volumes that the user can perceive comfortably.

[0013] In some examples, compression refers to the variation in effective gain applied to a signal as a function of the magnitude of the signal. The effective gain may be greater for small, rather than for large, signals. Further, a compression ratio may be adjusted as part of the profile settings, compression ratio generally referring to the ratio of (1) the magnitude of the gain (or amplification) at a reference signal level to (2) the magnitude of the signal at a higher stated signal level. Fitting formulas may use identified hearing thresholds to make adjustments to compression and/or a compression ratio as part of a profile setting.
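The level-dependent gain described in the two preceding paragraphs can be sketched with a simple input/output rule: below a knee point the full gain applies, and above it the output grows by only 1/ratio dB per dB of input, so louder signals receive less effective gain. This is a generic compression sketch, not the disclosed fitting formula; the knee point, ratio, and gain values are illustrative assumptions:

```python
# Minimal compression sketch: effective gain shrinks as input level rises.
# Knee point, ratio, and gain values are hypothetical.
def compressed_output_db(input_db: float, gain_db: float,
                         knee_db: float = 50.0, ratio: float = 2.0) -> float:
    if input_db <= knee_db:
        return input_db + gain_db                      # linear region
    # compressive region: 1 dB in -> 1/ratio dB out above the knee
    return knee_db + gain_db + (input_db - knee_db) / ratio

def effective_gain_db(input_db: float, **kw) -> float:
    return compressed_output_db(input_db, **kw) - input_db
```

With these example values, a quiet 40 dB input receives the full 20 dB of gain, while a loud 80 dB input receives only 5 dB, illustrating how the volume range is compressed into a range the user can perceive comfortably.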

[0014] Frequency response generally refers to the relationship between amplitude or gain of a signal and frequency. In some examples, the frequency response may refer to a frequency-response curve. A frequency-response curve may plot frequency against amplitude or gain and control the frequency response of a signal. Compression parameters or settings for a frequency band may affect the frequency response for the frequency band. In some examples, frequency response refers to the output level of X for a frequency when given an input level of Y for the frequency.
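A frequency-response curve of the kind described above can be sketched as gain sampled at a few frequencies with interpolation in between; the sample points are invented for illustration:

```python
# Sketch of a frequency-response curve: gain (dB) sampled at a few
# frequencies, with linear interpolation between sample points.
# All sample values are illustrative.
curve = [(250, 5.0), (1000, 15.0), (4000, 25.0)]  # (Hz, gain dB)

def gain_at(freq_hz: float) -> float:
    if freq_hz <= curve[0][0]:
        return curve[0][1]
    for (f0, g0), (f1, g1) in zip(curve, curve[1:]):
        if f0 <= freq_hz <= f1:
            t = (freq_hz - f0) / (f1 - f0)
            return g0 + t * (g1 - g0)
    return curve[-1][1]

def output_level_db(input_db: float, freq_hz: float) -> float:
    """The output level X for a given input level Y at a frequency."""
    return input_db + gain_at(freq_hz)
```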

[0015] In some examples, the disclosed techniques may allow a hearing instrument to update multiple settings for one or more frequencies based on a single input from a user. This may include adjustments to the gain, compression, frequency response and other hearing instrument parameters and features (e.g., frequency translation/compression, noise reduction, directional microphones, etc.). Although a user may provide input directed toward specific frequency bands, the ultimate adjustments may manifest as adjustments to hearing instrument settings that affect multiple discrete frequencies or frequency regions within that frequency band, as well as frequency regions adjacent to the frequency band. That is, the UI may only present limited frequency band information as a visual aid for the user, where those frequency bands do not necessarily correspond to the entire frequency spectrum for which parameters and features would be adjusted. The adjusted parameters may be captured as one profile setting or multiple profile settings.

[0016] In some examples, a processor of a hearing instrument may apply the profile settings directly to audio signals as the signals enter the hearing instrument. In other examples, a secondary device (e.g., a smartphone, smart television, radio, mobile device, etc.) may apply the profile settings to condition an outgoing audio signal, where the secondary device may then transmit the conditioned audio signal to the hearing instruments. In the latter example, the user may make multiple adjustments to determine a profile setting, at which point any parameters used to determine the profile setting could be transmitted to and programmed into the hearing instruments. In some examples, processor(s) may apply transfer functions to the profile settings to account for differences between live signals and streamed signals. The hearing instrument may use less power and memory resources than the hearing instrument would otherwise use while, for example, continuously modifying the hearing instrument parameters as audio signals enter the hearing instruments.
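A transfer function of the kind mentioned above might, for example, offset the per-band gains of a live-signal profile to derive settings suited to streamed signals. The -3 dB offset below is an invented illustration, not a value or formula from the disclosure:

```python
# Hypothetical transfer function: derive streamed-signal settings from the
# live-signal profile by offsetting per-band gain. The -3 dB offset is an
# invented example value.
def streamed_settings(live_gains_db: dict, offset_db: float = -3.0) -> dict:
    """Apply a simple transfer function to per-band gains (band -> dB)."""
    return {band: g + offset_db for band, g in live_gains_db.items()}
```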

[0017] The disclosed techniques provide a mapping of individual hearing thresholds that correspond to control indicator marker values presented via a UI that a user may adjust. The control indicators may each be allocated to various frequency bands such that the mapping is specific to those frequency bands. By adjusting a marker of a control indicator on a UI, a user may simultaneously affect multiple sound parameters of incoming sound media, such as gain, compression, and frequency response, without having to manually adjust each of those parameters individually. In this way, the user may be able to self-program the hearing instruments to satisfactorily compensate for his/her hearing loss.
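The marker-value-to-threshold mapping and the single-adjustment behavior described above can be sketched as follows. The marker values, threshold values, and the rules deriving settings from a threshold are all hypothetical illustrations, not the disclosed mapping or fitting formula:

```python
# Sketch of the mapping: each control indicator is allocated to a frequency
# band, and the mapping links marker positions to hearing thresholds (dB)
# for that band. All numbers are illustrative.
mapping = {
    # band (Hz)    marker value -> hearing threshold (dB)
    (500, 1000):  {0: 0, 1: 10, 2: 20, 3: 30},
    (1000, 2000): {0: 0, 1: 15, 2: 30, 3: 45},
}

def threshold_for_marker(band: tuple, marker_value: int) -> int:
    return mapping[band][marker_value]

def settings_from_marker(band: tuple, marker_value: int) -> dict:
    """One marker adjustment drives several parameters at once (hypothetical
    rules: gain tracks the threshold, compression grows with threshold)."""
    t = threshold_for_marker(band, marker_value)
    return {"gain_db": t, "compression_ratio": 1.0 + t / 30.0}
```

For example, sliding the 1000-2000 Hz marker to position 2 looks up a 30 dB threshold and, in this sketch, adjusts both gain and compression in a single step.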

[0018] In one example, a method is disclosed, the method including providing a user interface by a device configured to interface with a hearing instrument, the user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values, determining an initial marker value for a first control indicator based at least in part on an initial position of a first marker, determining that a change in state has occurred with respect to the initial marker value, determining a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker, accessing a mapping that identifies one or more relationships between marker values and hearing thresholds, identifying, from the mapping, a hearing threshold that corresponds to the first adjusted marker value, determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold, and storing the one or more settings for the hearing instrument to a memory device.

[0019] In another example, a device configured to determine hearing instrument settings is disclosed. The device includes a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds, and one or more processors coupled to the memory, and configured to provide a user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values, determine an initial marker value for a first control indicator based at least in part on an initial position of a first marker, determine that a change in state has occurred with respect to the initial marker value, determine a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker, access the mapping that identifies the one or more relationships between marker values and hearing thresholds, identify, from the mapping, a hearing threshold that corresponds to the first adjusted marker value, and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.

[0020] In yet another example, a method is disclosed, the method including providing a user interface by a device configured to interface with a hearing instrument, the user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker positioned along the control indicator to indicate a marker value, determining an initial marker value for the control indicator based at least in part on an initial position of the marker, determining an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker, accessing a mapping that identifies one or more relationships between marker values and hearing thresholds, identifying, from the mapping, a hearing threshold that corresponds to the adjusted marker value, and determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold.

[0021] In another example, a device configured to determine hearing instrument settings is disclosed, the device including a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds, and one or more processors coupled to the memory, and configured to provide a user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker that is positioned along the control indicator to indicate a marker value, determine an initial marker value for the control indicator based at least in part on an initial position of the marker, determine an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker, access the mapping that identifies the one or more relationships between marker values and hearing thresholds, identify, from the mapping, a hearing threshold that corresponds to the adjusted marker value, and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.

[0022] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description, drawings, and claims.

BRIEF DESCRIPTION OF DRAWINGS

[0023] FIG. 1 is a conceptual diagram illustrating an example system that includes one or more hearing instrument(s), in accordance with one or more techniques of this disclosure.

[0024] FIG. 2 is a block diagram illustrating example components of a hearing instrument, in accordance with one or more aspects of this disclosure.

[0025] FIG. 3 is a block diagram illustrating example components of a computing device, in accordance with one or more aspects of this disclosure.

[0026] FIG. 4A is a sample user interface (UI) illustrating example control indicators, in accordance with one or more aspects of this disclosure.

[0027] FIG. 4B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0028] FIG. 4C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0029] FIG. 5A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.

[0030] FIG. 5B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0031] FIG. 5C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0032] FIG. 5D is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0033] FIG. 5E is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0034] FIG. 6A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.

[0035] FIG. 6B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0036] FIG. 6C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0037] FIG. 7A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.

[0038] FIG. 7B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0039] FIG. 7C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0040] FIG. 8A is a sample UI illustrating example control indicators, in accordance with one or more aspects of this disclosure.

[0041] FIG. 8B is an illustrative visual depiction of a hearing threshold map, in accordance with one or more aspects of this disclosure.

[0042] FIG. 8C is an example of a predicted real-ear response, in accordance with one or more aspects of this disclosure.

[0043] FIG. 9 is a sample UI, in accordance with one or more aspects of this disclosure.

[0044] FIG. 10 is a flowchart illustrating an example operation in accordance with one or more example techniques described in this disclosure.

[0045] FIG. 11 is a flowchart illustrating an example operation in accordance with one or more example techniques described in this disclosure.

DETAILED DESCRIPTION

[0046] FIG. 1 is a conceptual diagram illustrating an example system 100 that includes hearing instruments 102A, 102B, in accordance with one or more techniques of this disclosure. This disclosure may refer to hearing instruments 102A and 102B collectively as “hearing instruments 102.” A user 104 may wear hearing instruments 102. In some instances, such as when user 104 has unilateral hearing loss, user 104 may wear a single hearing instrument. In other instances, such as when user 104 has bilateral hearing loss, the user may wear two hearing instruments, with one hearing instrument for each ear of the user.

[0047] Hearing instruments 102 may include one or more of various types of devices that are configured to provide auditory stimuli to user 104 and that are designed for wear and/or implantation at, on, or near an ear of the user. Hearing instruments 102 may be worn, at least partially, in the ear canal or concha. One or more of hearing instruments 102 may include behind-the-ear (BTE) components that are worn behind the ears of user 104. In some examples, hearing instruments 102 include devices that are at least partially implanted into or osseointegrated with the skull of the user. In some examples, one or more of hearing instruments 102 is able to provide auditory stimuli to user 104 via a bone conduction pathway.

[0048] In any of the examples of this disclosure, each of hearing instruments 102 may include a hearing assistance device. Hearing assistance devices include devices that help user 104 perceive sounds in the environment of user 104. Example types of hearing assistance devices may include hearing aid devices, Personal Sound Amplification Products (PSAPs), hearables, healthables, cochlear implant systems (which may include cochlear implant magnets, cochlear implant transducers, and cochlear implant processors), and so on. In some examples, hearing instruments 102 are over-the-counter, direct-to-consumer, or prescription devices. Furthermore, in some examples, hearing instruments 102 include devices that provide auditory stimuli to the user that correspond to artificial sounds or sounds that are not naturally in the user’s environment, such as recorded music, computer-generated sounds, or other types of sounds. For instance, hearing instruments 102 may include so-called “hearables,” earbuds, earphones, or other types of devices. Some types of hearing instruments provide auditory stimuli to the user corresponding to sounds from the user’s environment and also artificial sounds.

[0049] In some examples, one or more of hearing instruments 102 includes a housing or shell that is designed to be worn in the ear for both aesthetic and functional reasons and encloses the electronic components of the hearing instrument. Such hearing instruments may be referred to as in-the-ear (ITE), in-the-canal (ITC), completely-in-the-canal (CIC), or invisible-in-the-canal (IIC) devices. In some examples, one or more of hearing instruments 102 may be BTE devices, which include a housing worn behind the ear that contains all of the electronic components of the hearing instrument, including the receiver (e.g., a speaker). The receiver conducts sound to an earbud inside the ear via an audio tube. In some examples, one or more of hearing instruments 102 may be receiver-in-canal (RIC) hearing-assistance devices, which include a housing worn behind the ear that contains electronic components and a housing worn in the ear canal that contains the receiver.

[0050] Hearing instruments 102 may implement a variety of features that help user 104 hear better. For example, hearing instruments 102 may amplify the intensity of incoming sound, amplify the intensity of certain frequencies of the incoming sound, or translate or compress frequencies of the incoming sound. In another example, hearing instruments 102 may implement a directional processing mode in which hearing instruments 102 selectively amplify sound originating from a particular direction (e.g., to the front of the user) while potentially fully or partially canceling sound originating from other directions. In other words, a directional processing mode may selectively attenuate off-axis unwanted sounds. The directional processing mode may help users understand conversations occurring in crowds or other noisy environments. In some examples, hearing instruments 102 may use beamforming or directional processing cues to implement or augment directional processing modes.

[0051] In some examples, hearing instruments 102 may reduce noise by canceling out or attenuating certain frequencies. Furthermore, in some examples, hearing instruments 102 may help user 104 enjoy audio media, such as music or sound components of visual media, by outputting sound based on audio data wirelessly transmitted to hearing instruments 102.

[0052] Hearing instruments 102 may be configured to communicate with each other. For instance, in any of the examples of this disclosure, hearing instruments 102 may communicate with each other using one or more wireless communication technologies. Example types of wireless communication technology include Near-Field Magnetic Induction (NFMI) technology, a 900 megahertz (MHz) technology, a BLUETOOTH™ technology, a Wi-Fi™ technology, radio frequency (RF) technology, audible sound signals, ultrasonic communication technology, infrared communication technology, an inductive communication technology, or another type of communication that does not rely on wires to transmit signals between devices. In some examples, hearing instruments 102 use a 2.4 GHz frequency band for wireless communication. In some examples of this disclosure, hearing instruments 102 may communicate with each other via non-wireless communication links, such as via one or more cables, direct electrical contacts, and so on.

[0053] As shown in the example of FIG. 1, system 100 may also include a computing system 108. Computing system 108 includes one or more computing devices, each of which may include one or more processors. For instance, in the example of FIG. 1, computing system 108 includes devices 106A through 106N (collectively, “devices 106”). Devices 106 may include various types of devices, such as one or more mobile devices, server devices (e.g., wired or wireless remote servers), tablets, personal computer devices, handheld devices, virtual reality (VR) headsets, wireless access points, smart speaker devices, smart televisions, radio devices, medical alarm devices, smart key fobs, smartwatches and other wearable devices, smartphones, internet-of-things (IoT) devices, such as voice-activated network devices, motion or presence sensor devices, smart displays, screen-enhanced smart speakers, wireless routers, wireless communication hubs, prosthetic devices, mobility devices, special-purpose devices, mesh networks, cloud computers, and/or other types of devices. In some examples, devices 106 may include personal computing devices of user 104, such as mobile phones or smartwatches of user 104. In addition, devices 106 may include computing devices running on-board a vehicle (e.g., car, plane, boat, etc.). For example, hearing instruments 102 may be paired to an audio streaming device that may transmit audio from a vehicle to hearing instruments 102.

[0054] In some examples, multiple devices 106 may be used in conjunction with one another. For example, hearing instrument 102A may be paired to a smart television and a personal mobile phone, while hearing instrument 102B may be paired to a smartwatch. In some examples, each of hearing instruments 102 may communicate data between one another. For example, hearing instrument 102A may relay data received from the personal mobile phone to hearing instrument 102B and hearing instrument 102B may relay data received from the smartwatch to hearing instrument 102A. For the sake of brevity, not all forms of communication will be described herein.

[0055] In some examples, devices 106 may include accessory devices. Accessory devices may include devices that are configured specifically for use with hearing instruments 102. Example types of accessory devices may include charging cases for hearing instruments 102, storage cases for hearing instruments 102, media streamer devices, phone streamer devices, external microphone devices, remote controls for hearing instruments 102, and other types of devices specifically designed for use with hearing instruments 102. Actions described in this disclosure as being performed by computing system 108 may be performed by one or more of the computing devices of computing system 108. One or more of hearing instruments 102 may communicate with computing system 108 using wireless or non-wireless communication links. For instance, hearing instruments 102 may communicate with computing system 108 using any of the example types of communication technologies described elsewhere in this disclosure.

[0056] User 104 may have one or more hearing instruments 102. For example, user 104 may have hearing instruments 102A and 102B to be worn on the right and left ears of user 104. In some examples, user 104 may have multiple hearing instruments 102 for the same ear. For example, hearing instrument 102A may be configured for everyday wear on the right ear of user 104, whereas hearing instrument 102B may be configured for use in the same ear but specialized for a particular activity (e.g., swimming, listening to music, etc.). As discussed herein, user 104 may configure hearing instruments 102 so that hearing instruments 102 may meet the specific hearing needs of user 104. As disclosed herein, computing system 108 provides user 104 with the ability to customize and configure hearing instruments 102. For example, device 106A may be a mobile phone of user 104. Device 106A may present a UI that user 104 may use during a configuration process for one, both, or several of hearing instruments 102.

[0057] In some examples, the configuration process described herein may be used to determine a profile setting for hearing instrument 102A and a second profile setting for hearing instrument 102B. The configuration process for each hearing instrument may be done in parallel or separately. For example, user 104 may configure a right hearing instrument in a first instance of a configuration process and a left hearing instrument in a second instance. Device 106A may transmit a profile setting to hearing instruments 102 once the profile setting is determined through a configuration process as described herein. For example, a first hearing instrument may include a left hearing instrument and a second hearing instrument may include a right hearing instrument. In such examples, devices 106 or hearing instruments 102 may determine a first of one or more settings for the left hearing instrument and determine a second of one or more settings for the right hearing instrument (e.g., right hearing instrument settings).

[0058] FIG. 2 is a block diagram illustrating example components of hearing instrument 200, in accordance with one or more aspects of this disclosure. Hearing instrument 200 may be either one of hearing instruments 102. In the example of FIG. 2, hearing instrument 200 includes one or more storage devices 202, one or more communication unit(s) 204, a receiver 206, one or more processor(s) 208, one or more microphone(s) 210, a set of sensors 212, a power source 214, and one or more communication channels 216. Communication channels 216 provide communication between storage devices 202, communication unit(s) 204, receiver 206, processor(s) 208, microphone(s) 210, and sensors 212. Components 202, 204, 206, 208, 210, and 212 may draw electrical power from power source 214.

[0059] In the example of FIG. 2, each of components 202, 204, 206, 208, 210, 212, 214, and 216 is contained within a single housing 218. However, in other examples of this disclosure, components 202, 204, 206, 208, 210, 212, 214, and 216 may be distributed among two or more housings. For instance, in an example where hearing instrument 200 is a RIC device, receiver 206 and one or more of sensors 212 may be included in an in-ear housing separate from a BTE housing that contains the remaining components of hearing instrument 200. In such examples, a RIC cable may connect the two housings.

[0060] Furthermore, in the example of FIG. 2, sensors 212 include an inertial measurement unit (IMU) 226 that is configured to generate data regarding the motion of hearing instrument 200. IMU 226 may include a set of sensors. For instance, in the example of FIG. 2, IMU 226 includes one or more of accelerometers 228, a gyroscope 230, a magnetometer 232, combinations thereof, and/or other sensors for determining the motion of hearing instrument 200. Furthermore, in the example of FIG. 2, hearing instrument 200 may include one or more additional sensors 236. Additional sensors 236 may include a photoplethysmography (PPG) sensor, blood oximetry sensors, blood pressure sensors, electrocardiograph (EKG) sensors, body temperature sensors, electroencephalography (EEG) sensors, environmental temperature sensors, environmental pressure sensors, environmental humidity sensors, skin galvanic response sensors, and/or other types of sensors. In other examples, hearing instrument 200 and sensors 212 may include more, fewer, or different components.

[0061] Storage devices 202 may store data. For example, storage devices 202 may store a hearing threshold mapping 220 that identifies one or more relationships between marker values and hearing thresholds. In one non-limiting example, a marker value of 1 may relate to a specific hearing threshold value, whereas a marker value of 2 may relate to a different hearing threshold value, the mapping delineating those corresponding relationships in a single data structure or as multiple data structures. In other examples, the one or more mapping relationships may, additionally or alternatively, define equations or mathematical links between marker values and hearing thresholds or profile settings themselves, in accordance with one or more of the techniques disclosed herein.
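A mapping such as hearing threshold mapping 220 could, purely as an illustration, be held as a per-frequency-band lookup table. The band frequencies, marker values, and threshold values in this sketch are assumptions chosen for the example, not values taken from this disclosure:

```python
# Hypothetical sketch of a hearing threshold mapping: for each frequency
# band (Hz), a marker value maps to a hearing threshold (dB HL).
# Every numeric value below is an illustrative assumption.
HEARING_THRESHOLD_MAPPING = {
    500:  {1: 10, 2: 25, 3: 40, 4: 55},
    1000: {1: 10, 2: 25, 3: 45, 4: 60},
    2000: {1: 15, 2: 30, 3: 50, 4: 65},
    4000: {1: 20, 2: 35, 3: 55, 4: 70},
}

def threshold_for(band_hz, marker_value):
    """Look up the hearing threshold that corresponds to a marker value
    within a particular frequency band."""
    return HEARING_THRESHOLD_MAPPING[band_hz][marker_value]
```

A dict-of-dicts is only one possible encoding; as the paragraph notes, the relationships could equally be stored as multiple data structures or as equations from marker values to thresholds.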

[0062] Storage devices 202 may include volatile memory and may therefore not retain stored contents when powered off. Examples of volatile memories may include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. Storage devices 202 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. Examples of non-volatile memory configurations may include magnetic hard discs, optical discs, floppy discs, flash memories, read-only memory (ROM), or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).

[0063] Communication unit(s) 204 may enable hearing instrument 200 to send data to and receive data from one or more other devices, such as another hearing instrument, an accessory device, a mobile device, or other types of devices. Communication unit(s) 204 may enable hearing instrument 200 to communicate using wireless or non-wireless communication technologies. For instance, communication unit(s) 204 may enable hearing instrument 200 to communicate using one or more of various types of wireless technology, such as BLUETOOTH™ technology, third generation (3G) communications, fourth generation (4G) communications, 4G Long Term Evolution (LTE), fifth generation (5G) communications, ZigBee, Wi-Fi™, NFMI, ultrasonic communication, infrared (IR) communication, or another wireless communication technology. In some examples, communication unit(s) 204 may enable hearing instrument 200 to communicate using a cable-based technology, such as Universal Serial Bus (USB) technology.

[0064] Receiver 206 includes one or more speakers for generating audible sound. Microphone(s) 210 detects incoming sound and generates one or more electrical signals (e.g., an analog or digital electrical signal) representing the incoming sound.

[0065] Processor(s) 208 may be processing circuits configured to perform various activities. In some examples, processor(s) 208 may process the signal generated by microphone(s) 210 to enhance, amplify, or cancel out particular channels within the incoming sound. Processor(s) 208 may then cause receiver 206 to generate sound based on the processed signal. In some examples, processor(s) 208 include one or more digital signal processors (DSPs).

[0066] In some examples, processor(s) 208 may cause communication unit(s) 204 to transmit one or more of various types of data. For example, processor(s) 208 may cause communication unit(s) 204 to transmit data to computing system 108. Furthermore, communication unit(s) 204 may receive audio data from computing system 108 and processor(s) 208 may cause receiver 206 to output sound based on the audio data.

[0067] As described herein, computing system 108 may be used to fit, configure, and/or customize hearing instrument 200. During a fitting, configuration, and/or customization process, receiver 206 may generate audible sound at different frequencies for user 104 to listen for during the process. In some examples, user 104 may use a UI on one of devices 106 to fit, configure, and/or customize hearing instrument 200 in response to the generated audible sound. For example, device 106A may receive input from user 104 via the UI. Device 106A may identify hearing thresholds for user 104 with respect to frequency bands using a mapping that links or draws a connection between UI input values (e.g., marker values, adjustments to marker values, etc.) and hearing threshold values.

[0068] In some examples, device 106A or hearing instrument 200 may generate a configuration file (e.g., a profile setting) based on input received from user 104. Processor(s) 208 of hearing instrument 200 may receive the configuration file from device 106A (e.g., via communication unit(s) 204). In an example, the configuration file specifies a profile setting specific to user 104 that corresponds to the hearing thresholds of user 104 with respect to predefined frequency bands. Processor(s) 208 may use the profile setting to determine how an audio signal received through microphone(s) 210 should be conditioned based on the hearing thresholds of user 104.

In this way, processor(s) 208 may fit, configure, and/or customize hearing instrument 200 for user 104. In addition, devices 106 may use the profile settings to condition outgoing audio signals (e.g., a streaming audio signal) that may be transmitted from one of devices 106 to hearing instrument 200.
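How a profile setting is derived from hearing thresholds is not fixed by this disclosure. As one hedged sketch, per-band gains could be computed from the identified thresholds with a simple half-gain rule; the rule and the units are assumptions chosen only to make the example concrete:

```python
def profile_setting_from_thresholds(thresholds_db_hl):
    """Derive a per-band profile setting (gain in dB) from hearing
    thresholds (dB HL), using an illustrative half-gain rule:
    gain = threshold / 2. The rule is an assumption for this sketch."""
    return {band: round(t / 2.0, 1) for band, t in thresholds_db_hl.items()}

# Example: thresholds identified for three frequency bands (Hz).
setting = profile_setting_from_thresholds({500: 20, 1000: 30, 2000: 40})
# setting == {500: 10.0, 1000: 15.0, 2000: 20.0}
```

A processor applying such a setting would amplify each band of the incoming (or outgoing streamed) signal by the corresponding gain.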

[0069] FIG. 3 is a block diagram illustrating example components of computing device 300, in accordance with one or more aspects of this disclosure. FIG. 3 illustrates only one particular example of computing device 300, and many other example configurations of computing device 300 exist. Computing device 300 may be any of computing devices 106A-106N in computing system 108 (FIG. 1).

[0070] As shown in the example of FIG. 3, computing device 300 includes one or more processor(s) 302, one or more communication unit(s) 304, one or more input device(s) 308, one or more output device(s) 310, a display screen 312, a power source 314, one or more storage device(s) 316, and one or more communication channels 318. Computing device 300 may include other components. For example, computing device 300 may include physical buttons, microphones, speakers, communication ports, and so on.

[0071] Communication channel(s) 318 may interconnect each of components 302, 304, 308, 310, 312, and 316 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channel(s) 318 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. Power source 314 may provide electrical energy to components 302, 304, 308, 310, 312 and 316.

[0072] Storage device(s) 316 may store information required for use during operation of computing device 300. In some examples, storage device(s) 316 have the primary purpose of being a short-term and not a long-term computer-readable storage medium. Storage device(s) 316 may be volatile memory and may therefore not retain stored contents when powered off. Storage device(s) 316 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. For example, storage device(s) 316 may store information pertaining to user 104, such as profile settings, environment indicators, history logs, etc., and may also store other information, such as Wi-Fi™ passwords, remembered BLUETOOTH™ devices for pairing purposes, and so forth. In a non-limiting example, computing device 300 may be a remote cloud server or a mobile device that stores all or a portion of the information. In some examples, storage devices 202 of hearing instrument 200 may store the same type of information or may store duplicates of information stored by storage device(s) 316. For example, storage device(s) 316 may also store a hearing threshold mapping 326 that identifies one or more relationships between marker values and hearing thresholds.

[0073] In some examples, hearing threshold mapping 326 and hearing threshold mapping 220 may or may not be identical, such as where the mappings may need to be synchronized following a routine software update. In some examples, hearing instrument 200 may transmit information to computing device 300 and vice versa so that hearing instrument 200 and computing device 300 may share information pertaining to the use of hearing instrument 200 in any setting. For example, hearing instrument 200 may transmit mapping data, marker data (e.g., adjusted marker values), settings data, environment data, etc., to computing device 300.
For example, processor(s) 302 may transmit a mapping, such as a hearing threshold mapping, to another device for utilization and/or further processing.

[0074] It is to be understood that multiple computing devices 300 may also transmit information between each other in performing the disclosed techniques. For example, one computing device 300 may be a remote server having hearing threshold mapping 326 stored thereon, and may transmit information, such as hearing threshold mapping 326, to another computing device 300 or to hearing instrument 200 directly. In some instances, computing device 300 may include a cloud server in which hearing threshold mapping 326 may be stored on the cloud server network on one or more storage device(s) 316.

[0075] In some examples, processor(s) 302 of computing device 300 may read and execute instructions stored by storage device(s) 316 or storage devices 202. Similarly, processor(s) 208 of hearing instrument 200 may read and execute instructions stored by storage device(s) 316 or storage devices 202. Processor(s) 302 may receive and register input via the UI, render updates to the UI (e.g., changes in setting values), and process input data to determine profile settings. For example, processor(s) 302 may generate or otherwise retrieve UI data in the form of computer-readable instructions. The UI data may include instructions that cause processor(s) 302 to render a UI, such as one of the UIs described herein, on display screen 312 or to present the UI via output device(s) 310. In some examples, processor(s) 208 may render a UI on a display device of hearing instrument 200 (not shown). Processor(s) 302 may also coordinate data transmission and timing between devices 106 and hearing instruments 102 described herein.

[0076] Computing device 300 may include one or more input device(s) 308 that computing device 300 uses to receive user input. Examples of user input include tactile, audio, and video user input. Input device(s) 308 may include presence-sensitive screens, touch-sensitive screens, mice, keyboards, voice responsive systems, microphones, or other types of devices for detecting input from a human or machine.

For example, computing device 300 may include a VR headset that receives input from user 104 through gaze detection and tracking. In such examples, computing device 300 may use eye movements, in conjunction with other bodily movements (e.g., hand movements), to receive and process input from user 104 via a VR UI. Similarly, computing device 300 may include an augmented reality (AR) headset, a mixed reality headset, or other wearable device, such as a watch, ring, and so forth.

[0077] In some examples, computing device 300 may be coupled to a separate audio headset configured to assist user 104 in self-fitting a hearing instrument. For example, user 104 may use computing device 300 coupled to one or more dummy earpieces (e.g., hearing instrument 200), such as an over-the-ear audio headset used primarily for fitting purposes (e.g., in-store, at a nursing home, at a personal residence, etc.) to allow user 104 to perform a self-fitting process before being in physical possession or otherwise acquiring a personal hearing instrument 200 that user 104 would use on a more permanent basis. In a non-limiting example, user 104 may use a UI on computing device 300 to arrive at a setting with respect to the dummy earpiece that may then be stored to computing device 300 or the dummy earpiece itself.

[0078] In some examples, processor(s) 302 may provide a UI on display screen 312. Processor(s) 302 may receive input from user 104 via the UI. The received input may cause processor(s) 302 to determine a profile setting for hearing instrument 200. In some examples, processor(s) 302 may aggregate details based on the user input and/or derive patterns of the user input (e.g., through one or more machine learning algorithms or artificial intelligence techniques) before determining a profile setting.

[0079] In some examples, computing device 300 or the dummy earpiece may aid user 104 in determining the profile setting for a personal hearing instrument of user 104. Computing device 300 or the dummy earpiece may determine a profile setting for hearing instrument 200 in accordance with techniques disclosed herein. In some examples, computing device 300 or the dummy earpiece may load the profile setting to the personal hearing instrument of user 104. In a non-limiting example, user 104 may have acquired multiple sets of hearing instruments 102 that need to be configured (e.g., spare sets for the car, for outdoors, etc.). Using the UIs disclosed herein, or similar UIs to those disclosed herein, user 104 may be able to configure each set simultaneously without having to put on any one of the hearing instruments themselves.

[0080] In some examples, user 104 may configure each hearing instrument 200 separately depending on the environment in which hearing instrument 200 is to be worn. Processor(s) 302 may store multiple profile settings to storage device(s) 316 for the various environments. In some examples, processor(s) 302 may store profile settings irrespective of the environment in which the profile setting was determined.

[0081] In some examples, hearing instrument 200 may implement multiple setting profiles simultaneously. For example, hearing instrument 200 may be paired with one of devices 106 (e.g., a smart television, radio, vehicle, mobile device), where device 106 is streaming audio to hearing instrument 200 (e.g., via a BLUETOOTH™ connection). In addition, hearing instrument 200 may be receiving other audio (e.g., a nearby conversation) that hearing instrument 200 may condition for user 104 in a different way relative to, for example, audio being streamed from device 106. In such examples, hearing instrument 200 may implement a first profile setting that conditions the streaming audio as it is received from device 106 and a second profile setting that conditions audio received from other sources. In a non-limiting example, hearing instrument 200 may receive instructions from user 104 via computing device 300 to implement a music setting for audio received from a radio device and another setting for conditioning nearby human speech.
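The dual-profile behavior just described might be sketched as a lookup that routes each incoming audio source to the profile setting that should condition it. The profile names and parameters below are illustrative assumptions, not settings defined by this disclosure:

```python
# Hypothetical sketch: simultaneous profile settings keyed by audio source.
# Streamed audio gets one profile; all other (ambient) audio gets another.
PROFILES = {
    "streaming": {"label": "music setting", "bass_boost_db": 6},
    "ambient":   {"label": "speech setting", "speech_enhance_db": 4},
}

def profile_for_source(source):
    """Select the profile setting that conditions a given audio source.

    Any source other than the paired streaming link is treated as ambient.
    """
    key = "streaming" if source == "streaming" else "ambient"
    return PROFILES[key]
```

With this routing in place, a streamed radio signal and a nearby conversation arriving at the same time would each be conditioned by its own profile.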

[0082] In some examples, computing device 300 or hearing instrument 200 may detect or receive an indication of a change in the environment of user 104. An environment may be location-based or time-based (e.g., an evening environment, weekend environment, etc.). Computing device 300 or hearing instrument 200 may automatically access a corresponding setting profile to configure hearing instrument 200. Processor(s) 208 may configure hearing instrument 200 based on the setting profile for use in the environment.

[0083] In some examples, hearing instrument 200 may receive an affirmative request from user 104 to load a particular profile setting. In some examples, hearing instrument 200 may receive the request from computing device 300. For example, computing device 300 may first receive the request from user 104 and relay the request to hearing instrument 200. In another example, computing device 300 may retrieve the profile setting from storage device(s) 316, or from storage device(s) 316 of another computing device 300, upon receiving the request, and transmit the profile setting to hearing instrument 200. Upon receiving the profile setting (e.g., via communication unit(s) 204), processor(s) 208 may configure hearing instrument 200 so as to implement the retrieved profile setting.

[0084] In some examples, processor(s) 302 or processor(s) 208 may receive indications from user 104 that computing device 300 and/or hearing instrument 200 have permission to automatically detect an environment of user 104 and automatically load a profile setting to hearing instrument 200 based on the detected environment. For example, hearing instrument 200 may detect certain frequencies that indicate user 104 has entered a particular type of environment (e.g., an outdoor area with a freeway or noisy street nearby, a sports stadium, a restaurant, etc.).

[0085] In some examples, processor(s) 302 may use location information, such as global navigation satellite system information (e.g., Global Positioning System (GPS) information) or RF signal detection, to automatically detect when user 104 is in a particular area (e.g., near a freeway, nearing a sports stadium, etc.). In such instances, hearing instrument 200 may query computing device 300 and request permission to switch profile settings. In some examples, hearing instrument 200 may switch profile settings automatically without requesting permission.
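As a hedged sketch of such location-based detection, a device could compare the user's coordinates against a stored list of known environments and report a match when the user is within a configured radius. The location names, coordinates, and radius below are invented for illustration:

```python
import math

# Hypothetical list of known environments with center coordinates and a
# match radius in meters. All values are illustrative assumptions.
KNOWN_ENVIRONMENTS = [
    {"name": "stadium", "lat": 44.974, "lon": -93.258, "radius_m": 500},
]

def detect_environment(lat, lon):
    """Return the name of a known environment containing (lat, lon), or None."""
    for env in KNOWN_ENVIRONMENTS:
        # Equirectangular approximation; adequate over short distances.
        dlat = math.radians(lat - env["lat"])
        dlon = math.radians(lon - env["lon"]) * math.cos(math.radians(env["lat"]))
        dist_m = 6371000 * math.hypot(dlat, dlon)
        if dist_m <= env["radius_m"]:
            return env["name"]
    return None
```

On a match, the device could then either prompt user 104 for permission to switch profile settings or switch automatically, per the permissions described above.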

[0086] Where user 104 has not conducted a fitting operation (e.g., a configuration process) for a newly detected environment, hearing instrument 200 may suggest to user 104 that a new profile setting may be desirable based on the newly detected environment. In an example, hearing instrument 200 may detect an environment in which no profile setting corresponds to the type of sound detected in the environment (e.g., an outdoor concert, a restaurant with live music, etc.). Hearing instrument 200 may transmit a message that is to appear on computing device 300. The message may suggest to user 104 that user 104 should complete a new fitting or configuration process based on the newly detected environment. In another example, computing device 300 may detect the new environment and display the message on display screen 312, rather than receiving the message from hearing instrument 200. Computing device 300 and hearing instrument 200 may both detect the environment.

[0087] Upon completing the configuration process and determining a profile setting, computing device 300 or hearing instrument 200 may identify the setting as corresponding to the detected environment. For example, computing device 300 or hearing instrument 200 may store the setting as corresponding to a music environment, an indoor environment, a restaurant environment, etc.

[0088] Communication unit(s) 304 may enable computing device 300 to send data to and receive data from one or more other computing devices (e.g., via a communications network, such as a local area network or the Internet). For instance, communication unit(s) 304 may be configured to receive source data exported by hearing instrument(s) 102, receive comment data generated by user 104 of hearing instrument(s) 102, receive and send request data, receive and send messages, and so on.

[0089] In some examples, communication unit(s) 304 may include wireless transmitters and receivers that enable computing device 300 to communicate wirelessly with the other computing devices. For instance, in the example of FIG. 3, communication unit(s) 304 include a radio 506 that enables computing device 300 to communicate wirelessly with other computing devices, such as hearing instruments 102 (FIG. 1). Examples of communication unit(s) 304 may include network interface cards, Ethernet cards, optical transceivers, radio frequency transceivers, or other types of devices that are able to send and receive information. Other examples of such communication units may include BLUETOOTH™, 3G, 4G, 5G, and Wi-Fi™ radios, USB interfaces, etc. Computing device 300 may use communication unit(s) 304 to communicate with one or more hearing instruments (e.g., hearing instrument 102A (FIG. 1, FIG. 2)).

Additionally, computing device 300 may use communication unit(s) 304 to communicate with one or more other remote devices.

[0090] In some examples, communication unit(s) 304 may transmit profile settings from computing device 300 to another computing device 300 (e.g., a remote server) for subsequent access. In another example, communication unit(s) 304 may transmit profile settings from computing device 300 to one or more of hearing instruments 102. In some examples, communication unit(s) 304 may transmit pre-processed data values (e.g., control indicator values) to other devices 106 or to one or more of hearing instruments 102, such that devices 106 or hearing instruments 102 are able to determine the proper profile setting for hearing instruments 102 based at least in part on the pre-processed data values. In some examples, communication unit(s) 304 may transmit pre-processed data to another device as user 104 updates data values on computing device 300. In other examples, communication unit(s) 304 may only transmit the data once user 104 has indicated that user 104 is no longer adjusting hearing instruments 102, for example, by manually indicating as such on a UI.

[0091] Output device(s) 310 may generate output. Examples of output include tactile, audio, and video output. Output device(s) 310 may include presence-sensitive screens, sound cards, video graphics adapter cards, speakers, liquid crystal displays (LCD), or other types of devices for generating output. In some examples, output device(s) 310 may include hologram devices that may project light onto a surface or a medium (e.g., air) to form a holographic UI. For example, output device(s) 310 may project a UI onto a table that user 104 may use similarly to a UI displayed on display screen 312, but with differences in how input is registered, such as by using image capturing methods known in the art.

[0092] Processor(s) 302 may read instructions from storage device(s) 316 and may execute instructions stored by storage device(s) 316. Execution of the instructions by processor(s) 302 may configure or cause computing device 300 to provide at least some of the functionality ascribed in this disclosure to computing device 300. As shown in the example of FIG. 3, storage device(s) 316 include computer-readable instructions associated with operating system 320, application modules 322A-322N (collectively, “application modules 322”), and a companion application 324. In some examples, storage device(s) 316 may store existing profile settings for hearing instrument 200 or newly determined profile settings for hearing instrument 200.

[0093] Execution of instructions associated with operating system 320 may cause computing device 300 to perform various functions to manage hardware resources of computing device 300 and to provide various common services for other computer programs. Execution of instructions associated with application modules 322 may cause computing device 300 to provide one or more of various applications (e.g., “apps,” operating system applications, etc.). Application modules 322A-322N may cause computing device 300 to provide one or more configuration applications meant to fit, configure, or otherwise customize one or more of hearing instruments 102. Example implementations of such configuration applications are described with respect to FIGS. 4A-11.

[0094] Application modules 322 may provide other applications, such as text messaging (e.g., SMS) applications, instant messaging applications, email applications, social media applications, text composition applications, and so on. Application modules 322 may also store profile settings that may be used to enhance the experience of user 104 with respect to hearing instruments 102.

[0095] Execution of instructions associated with companion application 324 by processor(s) 302 may cause computing device 300 to perform one or more of various functions. For example, execution of instructions associated with companion application 324 may cause computing device 300 to configure communication unit(s) 304 to receive data from hearing instruments 102 and use the received data to present data to a user, such as user 104 or a third-party user. In some examples, companion application 324 is an instance of a web application or server application. In some examples, such as examples where computing device 300 is a mobile device or other type of computing device, companion application 324 may be a native application.

[0096] In some examples, user 104 may launch the configuration application. For example, processor(s) 302 may receive input from user 104 requesting that the configuration application be launched. The configuration application may initiate a configuration process for one of hearing instruments 102 (e.g., a fitting process, customization process, etc.). Instantiation of the configuration process includes implementation of a UI meant to elicit input from user 104 throughout the process. Processor(s) 302 may launch the configuration process by presenting the UI to user 104. For example, processor(s) 302 may present the UI to user 104 on a viewing window of computing device 300. In some examples, processor(s) 302 may leverage multiple UIs throughout the course of a single configuration process.

[0097] In some examples, processor(s) 302 may render the UI or features thereof on computing device 300. For the sake of brevity, processor(s) 302 are described as performing UI-related rendering tasks. It is to be understood, however, that the actual rendering of any particular UI feature, such as an interactive graphical unit, or of the UI itself may be based on UI data that processor(s) 302 merely generate, whereas display screen 312 may actually use the UI data to perform the rendering. In other instances, UI data or portions thereof may be generated by another processing system, such as processor(s) 208.

[0098] FIG. 4A is an example UI 400 that may be presented to user 104 in connection with one or more hearing instrument configuration processes (e.g., fitting process, fine-tuning process, hearing test, customization process, etc.). The example UI 400 may assist user 104 in determining an acceptable hearing instrument setting for hearing instrument 200. In some examples, UI 400 may be a graphical user interface (GUI), an interactive user interface, a command line interface, etc.

[0099] In some examples, user 104 may need to configure a newly purchased hearing instrument 200 that is to be worn on a right or left ear of user 104. In some instances, hearing instrument 200 may need to first be powered on and paired to computing device 300 using any suitable pairing mechanism known in the art. User 104 may then provide input via UI 400 as part of the configuration process. User 104 may provide additional or continuous input throughout the configuration process until a satisfactory profile setting has been determined. Processor(s) 302 may use the user input to identify hearing thresholds of user 104. Processor(s) 302 may use the hearing thresholds to determine a profile setting for hearing instrument 200. In some examples, processor(s) 302 may use the user input, including user manipulations, to determine a profile setting directly.

[0100] In some examples, UI 400 includes control indicators 402A-402N (collectively, "control indicators 402") and markers 406A-406N (collectively, "markers 406"). Control indicators 402 may include interactive graphical units that may be presented to user 104 via UI 400. In some examples, processor(s) 302 may render control indicators 402 on computing device 300.

[0101] Each of control indicators 402 may have one or more markers 406A-406N that can be manipulated with respect to control indicators 402. Markers 406 may include interactive graphical units that may be presented to user 104 via UI 400. In some examples, processor(s) 302 may render markers 406 on computing device 300. Processor(s) 302 may render markers 406 as any shape, size, color, etc. For example, processor(s) 302 may render one or more of markers 406 as a dash mark, a square mark, a circular mark, a number mark, a letter mark, a string of letters, a dynamic mark (e.g., an emoji that changes with position, etc.), or any other type of mark. In some examples, processor(s) 302 may render a first mark in conjunction with another mark as one of markers 406. For example, processor(s) 302 may render a circular mark with a number mark inside, as in FIG. 4A. In another example, processor(s) 302 may render only one mark, such as a number mark, on UI 400.

[0102] Processor(s) 302 may be configured to provide a second UI that identifies a second one of control indicators 402 and a second one of markers 406, in which adjustment to the second one of markers 406 causes processor(s) 302 to determine a new profile setting or otherwise causes one or more profile settings to change. For example, processor(s) 302 may first provide the second UI identifying at least one additional control indicator 402N and at least one additional marker 406N. The control indicator 402N and marker 406N would be in addition to those control indicators 402 and markers 406 that were presented on the first UI. Processor(s) 302 may then detect an adjustment to the at least one additional marker 406N, in much the same way as processor(s) 302 would detect adjustment of the markers with respect to the first UI. Processor(s) 302 may then update one or more settings of the hearing instrument in response to detecting that adjustment to additional marker 406N.

[0103] In some examples, processor(s) 302 may receive input from user 104 that causes processor(s) 302 to manipulate the position of markers 406 on UI 400. In some examples, markers 406 may be configured to be slid or dragged along control indicators 402. For example, processor(s) 302 may receive an indication of user input to drag or slide markers 406 along the lengths of control indicators 402. In some examples, user 104 may use a scroll wheel or a lever that can be actuated in a given direction to change the position of markers 406. As another example, processor(s) 302 may implement a gaze tracker to determine a position at which user 104 would like to move one of markers 406. A person of skill in the relevant art would understand the various ways in which processor(s) 302 may be programmed to receive input from user 104 regarding the adjustment of markers 406.

[0104] Each of markers 406 may have values that correspond to the particular position of markers 406 and that may change fluidly based on user input. In some examples, processor(s) 302 portray the values of markers 406 as number values that change as markers 406 move along control indicators 402. For example, processor(s) 302 may portray the value of markers 406 as changing in an incremental fashion. In other examples, processor(s) 302 may portray the value of markers 406 as a color that changes as processor(s) 302 detect a changing position of markers 406. In some examples, processor(s) 302 may display marker values on another interface separate from UI 400 on which processor(s) 302 are rendering markers 406. In other examples, processor(s) 302 may not display marker values at all. For example, user 104 may be able to slide the marker using a wearable device, such as a watch, in which case the screen of the device may not be large enough to show incrementing marker values. In such instances, computing device 300 may audibly state the current value through a speaker, may provide tactile feedback, or may simply show the position of markers 406 without numbers.

[0105] Processor(s) 302 may receive input from user 104 that causes processor(s) 302 to modify control indicators 402 or markers 406. For example, processor(s) 302 may change the shape, size, and/or scale of control indicators 402. Control indicators 402 may be of various shapes. For example, control indicators 402 may be circular, semi-circular, triangular, rectangular, or any other shape that a user, such as user 104 or a third-party user, would be comfortable with during the configuration process. In addition, control indicators 402 may be of different lengths. In the example of FIG. 4A, control indicators 402 are shown as elongated bars of equal length. In some examples, processor(s) 302 may cause tick marks to appear on control indicators 402 that are meant to assist user 104 in perceiving the scale or size of control indicators 402.

[0106] In addition, processor(s) 302 may receive input from user 104 that causes processor(s) 302 to portray markers 406 as changing position. For example, processor(s) 302 may portray markers 406 as moving in a single direction (e.g., up, down, left, right, etc.). In some examples, processor(s) 302 may increment the marker values displayed in connection with a position of markers 406 as the position of markers 406 changes. In some examples, processor(s) 302 may receive input from user 104 specifying a value for markers 406. For example, user 104 may input a number value manually in a text box. In such instances, user 104 may then adjust the number using arrow keys. For example, processor(s) 302 may receive user input indicating that user 104 desires marker 406A to move upward. Processor(s) 302 may cause marker 406A to move to an updated position based on the user input.

[0107] In an illustrative example, processor(s) 302 may receive input from user 104 specifying that marker 406N should be placed at a number value of 4 for the mid-range frequency band (e.g., as shown in FIG. 5A). Next, processor(s) 302 may receive input from user 104 indicating that marker 406N should be adjusted from a value of 4 to a value of 5 (e.g., with an up-arrow key). Processor(s) 302 may cause the position of marker 406N to change based on the user input.

[0108] In some examples, markers 406 may have a displayed value and a non-displayed value, such as a metadata value. In some instances, the displayed value may not correspond directly to the non-displayed value, such as when an adjusted scale is used as described herein. Processor(s) 302 may cause either value to change based on the user input.

[0109] In some examples, processor(s) 302 may overlay control indicators 402 on top of one another, rather than presenting them horizontally staggered as shown in FIG. 4A. For example, in an example where a VR space is used to present control indicators 402 to user 104, processor(s) 302 may present control indicators 402 in a three-dimensional space where user 104 may navigate through the space to access control indicators 402. In another example, processor(s) 302 may present the control indicators 402 to user 104 through different pages of a UI.

[0110] In some examples, processor(s) 302 may render a first UI that presents control indicator 402A without presenting control indicators 402B or 402C (e.g., a first page). Processor(s) 302 may receive input from user 104 that causes processor(s) 302 to advance from the first UI to a second UI (e.g., a second page). The second UI may present control indicator 402B without presenting control indicators 402A or 402C. In this way, user 104 may perceive this type of navigation as advancing through pages of a book. In some examples, processor(s) 302 may automatically advance from one UI to another UI (e.g., another UI page).

[0111] In some examples, processor(s) 302 may receive input from user 104 that indicates a desire to advance to a new page where processor(s) 302 may present one of control indicators 402 in isolation on each page, or with less than the full number of control indicators 402 shown on the page. In such examples, processor(s) 302 may help user 104 understand which control indicators 402 still need to be manipulated and which have already been set.

[0112] In some examples, help indicators or clues may be presented to user 104 to guide user 104 through the configuration process. For example, processor(s) 302 may highlight one of control indicators 402 or cause it to blink a particular color. In some examples, processor(s) 302 may cause aspects of control indicators 402, such as markers 406, to indicate to user 104 that a particular action needs to be taken. In some examples, a pointer, such as an arrow, may appear that tells user 104 whether a marker should be moved upward or downward. For example, processor(s) 302 may receive information from user 104 indicating that user 104 is unable to perceive a particular incoming sound. Accordingly, a portion of control indicators 402 (e.g., an estimated range) may appear highlighted, or processor(s) 302 may present an arrow pointing in a direction suggesting to user 104 that particular adjustments may be necessary.

[0113] In the example of FIG. 4A, UI 400 shows three control indicators 402. Control indicators 402 correspond to frequency bands. In this way, the output of hearing instrument 200 may be varied across the frequency bands, including frequency regions within and adjacent to each frequency band. In an illustrative and non-limiting example, control indicator 402A corresponds to a low frequency band 404A, control indicator 402B corresponds to a middle frequency band 404B, and control indicator 402N corresponds to a high frequency band 404N. In an example, the low frequency band may be 250-750 Hertz (Hz), the middle frequency band may be 1000-2000 Hz, and the high frequency band may be 3000-6000 Hz.

[0114] In some examples, processor(s) 302 may be configured to select frequency bands to encompass various frequencies. In an example, the frequency bands may be selected to encompass any combination of frequency ranges. In addition, processor(s) 302 may display the frequency band on UI 400 as a band label. In some examples, the frequency bands may be selected so as to encompass the full range of frequency values without gaps. In some examples, the number of control indicators 402 may be more or less than the three shown. For example, the full frequency range may be 250-6000 Hz, where the low frequency band may be 250-750 Hz, a first middle frequency band may be 750-1000 Hz, a second middle frequency band may be 1000-2000 Hz, and the high frequency band may be 2000-6000 Hz.

[0115] Processor(s) 302 may display the ranges for each band as labels on UI 400. However, even in instances where a gap separates the ranges for two bands, adjustments to markers 406 (that correspond to the bands) are mapped to hearing thresholds at discrete audiometric frequencies, which are typically measured at 250 Hz, 500 Hz, 750 Hz, 1000 Hz, 1500 Hz, 2000 Hz, 3000 Hz, 4000 Hz, 6000 Hz, and 8000 Hz. It should be understood, however, that higher frequencies (up to 20,000 Hz), lower frequencies (down to 20 Hz), or intermediate frequencies (e.g., 5000 Hz) could additionally or alternatively be represented.

[0116] In some examples, processor(s) 302 may use the mapping to identify hearing thresholds at discrete audiometric frequencies that are within and adjacent to the range of values that correspond to a frequency band label. In such examples, processor(s) 302 may display frequency band labels as a guide for user 104. The identified hearing thresholds serve as input to a fitting formula, which prescribes sound parameters (e.g., gain, compression, etc.) across the entire frequency region (without gaps). Further, even in instances where there is a gap between the frequencies represented by the bands, hearing thresholds for this region may be estimated using interpolation, and higher or lower thresholds could be estimated using extrapolation.

[0117] Any number of control indicators 402 may be used for any number of frequency bands. In some instances, the number of control indicators 402 may be limited to the number of channels available in hearing instrument 200. The frequency regions or bands may be delimited as frequencies of sounds that a user is likely to encounter in a particular environment. For example, the various speech sounds for conversation tend to fall within a frequency range of approximately 250 Hz to approximately 7500-8000 Hz.

[0118] In some examples, processor(s) 302 may adjust (e.g., increase or decrease) the number of control indicators 402 based on input received from user 104. For example, user 104 may indicate that the sound within a particular frequency band is not satisfactory no matter what adjustments user 104 makes in that frequency band. Processor(s) 302 may automatically divide the frequency band into two or more separate bands to provide more tailored adjustments through use of additional control indicators 402.

[0119] In some examples, processor(s) 302 may automatically divide one or more frequency bands while user 104 is conducting the configuration process (e.g., in real time for user 104). In some examples, processor(s) 302 may receive input from user 104 at an initial information-gathering screen (e.g., an initial home screen). Processor(s) 302 may receive input such as information related to a particular frequency or frequency band that user 104 believes presents more problems for user 104 than others. In such examples, processor(s) 302 may automatically divide one or more frequency bands or add individual frequencies prior to or at the commencement of the configuration process. Processor(s) 302 may provide user 104 with as few as one or two control indicators 402 to as many as 20 or more.

[0120] In some examples, processor(s) 302 may link the frequency bands. For example, control indicator 402A may correspond to a frequency band of 250-8000 Hz, control indicator 402B may correspond to a frequency band of 250-1750 Hz, and control indicator 402N may correspond to a frequency band of 1750-8000 Hz. In an illustrative example, processor(s) 302 may register user input to adjust marker 406A to a value of 2. In an example where a marker value of zero corresponds to a hearing threshold of 20-decibel (dB) hearing level (HL), the step-size is 10-dB, and no interpolation or extrapolation is used, processor(s) 302 may identify hearing thresholds of 40-dB HL for the entire frequency spectrum. Processor(s) 302 may then register user input to adjust marker 406B to a value of 1, in which case the hearing thresholds for 250-1500 Hz would drop to 30-dB HL. Further, when processor(s) 302 register user input to adjust marker 406N to a value of 3, the hearing thresholds for 2000-8000 Hz would increase to 50-dB HL. A benefit of linking the frequency bands in this way would be to increase the speed of the configuration process. This would be especially advantageous where user 104 had a fair amount of hearing loss at all frequencies, where the step-size of the changes was small, or where processor(s) 302 rendered a high number of control indicators 402.
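The linked-band behavior described above can be sketched as follows. This is a hypothetical, non-limiting Python sketch: the band ranges, the 20-dB HL baseline, and the 10-dB step come from the example in the text, while the function and variable names are illustrative assumptions.

```python
# Illustrative sketch of linked frequency bands: a broadband marker is
# applied first, then narrower sub-band markers override its thresholds
# for the audiometric frequencies they cover.
AUDIOMETRIC_FREQS = [250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, 8000]
BASE_HL = 20   # marker value 0 corresponds to 20-dB HL
STEP_DB = 10   # each marker increment adds 10 dB

def apply_linked_markers(marker_values, band_ranges):
    """Apply markers in order; later (narrower) bands override the
    broadband value for the frequencies they cover."""
    thresholds = {}
    for value, (lo, hi) in zip(marker_values, band_ranges):
        for f in AUDIOMETRIC_FREQS:
            if lo <= f <= hi:
                thresholds[f] = BASE_HL + value * STEP_DB
    return thresholds

# Example from the text: broadband marker 406A at 2, low-band marker 406B
# at 1, high-band marker 406N at 3.
bands = [(250, 8000), (250, 1750), (1750, 8000)]
t = apply_linked_markers([2, 1, 3], bands)
# 250-1500 Hz drop to 30-dB HL; 2000-8000 Hz rise to 50-dB HL.
```

As in the text, the broadband control sets the whole spectrum and the narrower controls then refine their own regions, which can shorten the configuration process.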

[0121] In another example, control indicator 402A may correspond to a frequency band of 250-8000 Hz, control indicator 402B may correspond to a frequency band of 250-1000 Hz, and control indicator 402N may correspond to a frequency band of 3000-8000 Hz. Processor(s) 302 may register user input to adjust marker 406A to a value of 2. In an example where a marker value of zero corresponds to a hearing threshold of 20-dB HL, and the step-size is 10-dB, processor(s) 302 may identify hearing thresholds of 40-dB HL for the entire frequency spectrum. Processor(s) 302 may then register user input to adjust marker 406B to a value of 1, in which case the hearing thresholds for 250-1000 Hz would drop to 30-dB HL. Further, where processor(s) 302 then register user input to adjust marker 406N to a value of 3, the hearing thresholds for 3000-8000 Hz would increase to 50-dB HL. However, control indicator 402A may identify hearing thresholds within the gap between the frequency bands that correspond to control indicators 402B and 402N. That is, control indicator 402A would identify hearing thresholds for frequencies 1500 and 2000 Hz, and thus the hearing thresholds in the gapped region would remain at 40-dB HL because marker 406A was adjusted to have a value of 2.

[0122] In some examples, processor(s) 302 may implement one of control indicators 402 as a broadband controller and implement another one of control indicators 402 as a high- or a low-frequency controller, and achieve the same functionality that processor(s) 302 could by implementing three mutually exclusive control indicators 402. In some instances, processor(s) 302 may determine a configuration for control indicators 402 based on preferences received from user 104.

[0123] Control indicators 402 may correspond to key audiometric frequency bands for a hearing instruments 102 fitting, with some tolerance around each frequency (e.g., 480-520, 990-1010, 1990-2010, and 3990-4010 Hz). In some examples, control indicators 402 may correspond to frequencies that are typically tested throughout the duration of a hearing test. For example, control indicators may be associated with octave frequencies 250-8000 Hz, interoctave frequencies 750-6000 Hz, or extended high frequencies 8000-20,000 Hz. In some examples, control indicators 402 may correspond to very low frequencies (e.g., 20-250 Hz). It should be noted that control indicators 402 may correspond to frequency bands that have a different number of individual frequencies that make up the band compared to other frequency bands. In some instances, it may be desirable to provide a greater number of control indicators 402 corresponding to the high frequency ranges where hearing loss is more prevalent. Accordingly, the number of control indicators 402 may be greater for higher frequency bands as compared to middle or lower frequency bands.

[0124] In some examples, individual control indicators 402 may correspond to individual frequencies rather than groups of frequencies. For example, control indicator 402A may correspond to 500 Hz. In some examples, individual frequencies and frequency bands may include approximations or error margins for corresponding values. For example, control indicator 402A may correspond to 500 Hz with an error margin of 1 Hz, 5 Hz, 10 Hz, 15 Hz, etc.

[0125] In FIG. 4A, the marker values are shown as being set to all zeros. The position at all zeros may correspond to normal hearing for user 104. Normal hearing generally refers to hearing thresholds of 0-dB HL to 20-dB HL at a given frequency; in practical terms, normal hearing is often taken to mean 20-dB HL, where the normal range is 0-dB HL to 20-dB HL. A hearing threshold generally refers to the minimum decibel level at which user 104 can perceive a sound. Hearing thresholds may vary across the frequency spectrum. For example, a hearing threshold of user 104 may be higher for sounds above a certain frequency, whereas the hearing threshold of user 104 may be in the normal range for sounds below a certain frequency.

[0126] In the example of FIG. 4A, user 104 may adjust the position of markers 406A-406N during a configuration process for one or more of hearing instruments 102. In some examples, user 104 may listen for a sound through hearing instrument 200 that is being configured. Processor(s) 302 may prompt user 104 to adjust one of markers 406 for a particular frequency depending on the ability of user 104 to perceive the sound.

[0127] FIG. 4B is an illustrative visual depiction of a hearing threshold mapping 420. In the example of FIG. 4B, hearing threshold mapping 420 illustrates the mapping of marker values to hearing thresholds with respect to the position of markers 406 in FIG. 4A (e.g., preset to a default of all zeros). The y-axis in FIG. 4B refers to "level (dB HL)", which corresponds to hearing threshold level as expressed in terms of dB HL. In some examples, dB HL maps to dB SPL.

[0128] In an example where a marker value of zero maps to a normal hearing threshold of 20-dB HL at each frequency band, the decibel level would map to 20-dB HL across the entire frequency spectrum for all data points. The data points shown may correspond to three audiometric frequencies within each of three frequency bands 404 for a total of nine explicit data points. A person of skill in the art will understand that hearing threshold mapping 420 is only a visual representation of a hearing threshold mapping. Hearing threshold mapping 420 may manifest as a background software application performing a hearing threshold mapping algorithm that converts marker values to hearing thresholds such that a graph as shown in FIG. 4B could be generated based on the output from the algorithm.

[0129] FIG. 4C is an example of a predicted real-ear response 430 that corresponds to the hearing thresholds identified for the marker positions in FIG. 4A. FIG. 4C shows the predicted real-ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 4B using a standardized or proprietary fitting formula. FIG. 4C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used. In the illustrative example of FIG. 4C, the predicted real-ear responses are shown with respect to soft, moderate, and loud sounds (e.g., 50, 65, and 80 dB SPL input levels, although one or more processors may use other levels as well).

[0130] FIG. 5A depicts the UI of FIG. 4A, where adjustments have been made to the positions of markers 406A-406N. In the example of FIG. 5A, marker 406B has been adjusted to a level 3, whereas marker 406N has been adjusted to a level 4. The positions, or levels at which markers 406 are placed, are used to determine profile settings for hearing instrument 200. For example, the placement of markers 406 in FIG. 5A corresponds to one or more profile settings for hearing instrument 200. In another example, marker 406N is positioned at level 5, rather than level 4. Each position corresponds to a different setting for hearing instrument 200.

[0131] In some examples, storage device(s) 316 may store a mapping that links the values of markers 406 to hearing thresholds. For example, the mapping may include an algorithm that calculates a conversion between values of markers 406 and hearing thresholds with respect to frequencies or frequency bands. Processor(s) 302 may use the hearing thresholds to determine a hearing instrument setting (e.g., a profile setting or set of one or more profile settings). Processor(s) 302 may then transmit the one or more settings to hearing instrument 200 to fit hearing instrument 200.

[0132] In some examples, processor(s) 208 may use the hearing thresholds to fit hearing instrument 200 using a standardized or proprietary fitting formula. For example, the fitting formula may be a National Acoustic Laboratories (NAL) fitting formula (e.g., NAL-NL2, NAL-NL1), a Desired Sensation Level (DSL) method (e.g., DSL i/o v5), an e-STAT fitting formula, or any other formula known in the art. The hearing thresholds are determined by referring to the mapping stored in memory. In some examples, the mapping allows the marker values to be translated directly into a configuration setting based on the hearing thresholds referenced in the mapping file.

[0133] In some examples, processor(s) 302 store the mapping as a reference file on storage device(s) 316 of computing device 300. In other examples, the mapping may be stored on storage device 202 of hearing instrument 200. As such, either processor(s) 208 or processor(s) 302, or both, may store the mapping that identifies one or more relationships between marker values and hearing thresholds.

[0134] The mapping may provide a conversion of input values to hearing thresholds. In the example of FIG. 5A, user 104 may increase the values of markers 406. In such examples, the hearing thresholds in the frequency band would increase as processor(s) 302 detect user input intended to cause an increase in marker value. In some examples, the hearing threshold in a particular frequency band may increase by 1-dB, 2-dB, 3-dB, 5-dB, or 10-dB.

[0135] In some examples, the mapping provides a conversion of marker values directly to hearing instrument 200 settings. The mapping may take the form of a look-up table, a graph, or another transformation system. For example, marker values may be adjusted as shown in FIG. 5A, with those adjusted values being provided as input to a mapping algorithm. The mapping algorithm may then determine what setting corresponds to the adjusted marker values. In some examples, a single setting may be determined based on the adjusted marker values for each control indicator. For example, a first setting may be determined for the first frequency band that corresponds to a marker level of 0, a second setting may be determined for the second frequency band that corresponds to a marker level of 3, and a third setting may be determined for the third frequency band that corresponds to a marker level of 4.
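The look-up-table form of the mapping described above may be sketched as follows. This is a hypothetical Python sketch: the gain values in the table are placeholder assumptions and not values from the text; only the marker levels (0, 3, and 4, as in FIG. 5A) come from the example above.

```python
# Hypothetical look-up table that converts (band, marker level) pairs
# directly to per-band settings, bypassing explicit hearing thresholds.
# The gain_db values below are illustrative placeholders.
SETTING_TABLE = {
    ("low", 0): {"gain_db": 0},
    ("mid", 3): {"gain_db": 18},
    ("high", 4): {"gain_db": 24},
}

def settings_for_markers(marker_levels):
    """Look up one setting per band from its adjusted marker level."""
    return {band: SETTING_TABLE[(band, level)]
            for band, level in marker_levels.items()}

# Adjusted marker values as in FIG. 5A: low 0, middle 3, high 4.
settings = settings_for_markers({"low": 0, "mid": 3, "high": 4})
```

A real table would enumerate every reachable (band, level) pair, or the mapping could equally be an algorithm or graph, as the paragraph notes.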

[0136] In an illustrative example, the positions of markers 406 may begin at a default of 0 as in FIG. 4A. Each change in position of markers 406 (sliding upward along control indicators 402) may correspond to a specific threshold increase (e.g., 1-dB, 2-dB, 3-dB, 5-dB, 10-dB, etc.). For example, an initial marker value may be zero, where an adjusted marker value of one corresponds to an X-dB change in hearing threshold, where "X" corresponds to a predetermined decibel value (e.g., 1, 2, 3, 5, 10, or any other decibel level).

[0137] FIG. 5B provides an illustrative visual depiction of a hearing threshold mapping 520. In an example where each incremental change (i.e., "X") corresponds to a 10-dB increase and where a normal hearing threshold corresponds to 20-dB HL, the marker positions of FIG. 5A would map to hearing thresholds of 20-dB HL at the low frequency band (e.g., 250 Hz, 500 Hz, and 750 Hz), 50-dB HL (20+3(10)) at the middle frequency band (e.g., 1,000 Hz and 2,000 Hz), and 60-dB HL (20+4(10)) at the high frequency band (e.g., 3,000 Hz; 4,000 Hz; and 6,000 Hz), as in FIG. 5B. In addition, a person of skill in the art would understand that processor(s) 302 may be configured to provide the ability to adjust left and right ear profile settings independently of one another.
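The arithmetic in this example reduces to a one-line conversion. The sketch below assumes the 20-dB HL baseline and 10-dB step from the text; the function name is an illustrative assumption.

```python
# Minimal form of the mapping described above: each marker increment adds
# one step (here 10 dB) to the normal-hearing baseline of 20-dB HL.
def hearing_threshold(marker_value, base_hl=20, step_db=10):
    """Convert a marker value to a hearing threshold in dB HL."""
    return base_hl + marker_value * step_db

# Marker positions of FIG. 5A: low band 0, middle band 3, high band 4.
low = hearing_threshold(0)    # 20-dB HL
mid = hearing_threshold(3)    # 50-dB HL (20 + 3(10))
high = hearing_threshold(4)   # 60-dB HL (20 + 4(10))
```

Passing a different `step_db` (e.g., 1, 2, 3, or 5) reproduces the alternative step-sizes mentioned in paragraph [0136].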

[0138] In some examples, a change in marker position, and thus hearing threshold, may cause a sound parameter (e.g., gain of hearing instrument 200) to increase or decrease in a linear or non-linear fashion (e.g., logarithmically or at an inconsistent interval). For example, where a change in position of marker 406N from marker value 4 to 5 maps to a 10-dB increase in hearing threshold, the gain of the hearing instrument may be increased by "x" dB in that frequency range for all input levels, or by "x" dB for soft input levels and some number less than "x" for moderate and/or loud input levels. Non-linearity may be based on the way in which hearing thresholds vary across frequency bands.

[0139] FIG. 5C is an example of a predicted real-ear response 530 that corresponds to the hearing thresholds identified for the marker positions in FIG. 5A. FIG. 5C shows the predicted real-ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 5B using a standardized or proprietary fitting formula. FIG. 5C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.

[0140] Each frequency band may have a center value. For example, the middle frequency band described above may have a center value of 1,500 Hz (halfway between 1,000 Hz and 2,000 Hz). Therefore, the 50-dB HL value may only correspond to the center value of 1,500 Hz. The mapping for higher and lower frequencies may be an interpolation or extrapolation of hearing thresholds for surrounding frequency data points. For example, the threshold for 2,000 Hz may be a few dB higher than 50-dB, but less than 60-dB (the hearing threshold at the high frequency band in FIG. 5A). Likewise, the threshold for 1,000 Hz would be less than 50-dB but greater than 20-dB (the hearing level at the low frequency band in FIG. 5A). In some examples, processor(s) 302 may estimate or identify the same hearing thresholds for both ears based on adjustments to markers 406. However, adjustments could be made separately for the left and right ears, which would result in ear-specific hearing instrument programming. In such examples, processor(s) 302 may implement a first profile setting for the left ear of user 104 and a second profile setting for the right ear of user 104 that is different than the first profile setting.

[0141] FIG. 5D provides an illustrative visual depiction of a hearing threshold mapping 540. Hearing threshold mapping 540 illustrates how processor(s) 302 may interpolate and extrapolate hearing thresholds for data points. Although shown as interpolating and extrapolating with respect to band center frequencies, the interpolation and extrapolation may be done with respect to other frequencies that are not the center of a frequency band. For example, the mapping algorithm may select a key audiometric frequency that is not the band center frequency to use as a reference frequency. In any event, processor(s) 302 may estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points to identify the hearing threshold. Any number of data points may be interpolated or extrapolated with respect to the reference frequency so as to identify an accurate set of hearing thresholds. The set of hearing thresholds may then serve as input to a standardized or proprietary fitting formula in order to determine a set of one or more profile settings.
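One way the interpolation between band-center data points could be realized is sketched below. This is a hypothetical Python sketch: interpolating linearly over log-frequency and holding the outermost values flat beyond the last centers are assumptions (one simple choice among several), and the band centers used (500, 1,500, and 4,500 Hz) are illustrative, with only 1,500 Hz stated in the text.

```python
import math

def interpolate_thresholds(centers_hz, center_hl, query_hz):
    """Estimate dB HL at query frequencies by linear interpolation of
    band-center thresholds over log2(frequency); values outside the
    outermost centers are held flat (a simple extrapolation choice)."""
    pts = sorted(zip((math.log2(f) for f in centers_hz), center_hl))
    results = []
    for f in query_hz:
        x = math.log2(f)
        if x <= pts[0][0]:
            results.append(pts[0][1])          # below lowest center
        elif x >= pts[-1][0]:
            results.append(pts[-1][1])         # above highest center
        else:
            for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
                if x0 <= x <= x1:
                    results.append(y0 + (y1 - y0) * (x - x0) / (x1 - x0))
                    break
    return results

# Assumed band centers with the FIG. 5A thresholds of 20/50/60 dB HL.
hl = interpolate_thresholds([500, 1500, 4500], [20, 50, 60], [1000, 2000])
# 1,000 Hz falls between 20 and 50 dB HL; 2,000 Hz between 50 and 60 dB HL.
```

The estimates agree with the text's expectation: the 1,000 Hz threshold lands between the low- and middle-band values, and the 2,000 Hz threshold a few dB above 50-dB HL but below 60-dB HL.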

[0142] FIG. 5E is an example of a predicted real-ear response 550 that corresponds to the hearing thresholds identified for the marker positions in FIG. 5A, as well as interpolated and extrapolated hearing thresholds as shown in FIG. 5D. FIG. 5E shows the predicted real-ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 5D using a standardized or proprietary fitting formula. FIG. 5E provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.

[0143] In some examples, processor(s) 302 may detect that user 104 has moved one of markers 406 to the outer limits of one of control indicators 402 (e.g., the top of control indicators 402). In such examples, processor(s) 302 may trigger a message to be displayed on display screen 312. The message may include a notification that user 104 has reached the limits of the hearing instrument configuration process or of hearing instruments 102 and that user 104 should consider visiting an audiologist/hearing healthcare specialist to acquire a more powerful hearing instrument. Processor(s) 302 may provide user 104 with the option of finding a nearby audiologist or hearing instrument specialist. When processor(s) 302 detect an affirmative reply from user 104 to explore options, processor(s) 302 may then access a computer network and use location services to identify contact information for local providers.

[0144] In some examples, processor(s) 302 may set the position of markers 406 to a default starting value, such as all zeros, when initially rendering UI 400. In other examples, processor(s) 302 may determine the starting positions based on preset profile settings. For example, processor(s) 302 may provide a UI with a selection of default profile settings that user 104 may browse through and test out by listening to sound conditioned by each default profile setting. User 104 may select one or more default profile settings and may listen to audio conditioned by the default profile settings prior to making such selections. Processor(s) 302 may detect selection of one or more default profile settings (e.g., 3 or 4 default profile settings) from user 104. Based on the selected default profile setting(s), processor(s) 302 may render UI 400 with markers 406 preset at default starting values that correspond to the default profile setting(s) selected. For example, processor(s) 302 may reference the hearing threshold mapping to determine the starting values of markers 406 based on the default profile setting(s), similar to how processor(s) 302 would reference the mapping in the other direction to convert adjusted marker values to one or more profile settings. Further, processor(s) 302 may determine different starting positions for the left and right ear and/or for different detected environments.
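The two-way use of the mapping described in paragraph [0144] can be sketched as a table read in both directions. The marker values, 10-dB step-size, and dictionary representation are illustrative assumptions, not the patented data structure.

```python
# Hypothetical sketch: the same mapping that converts adjusted marker values
# to hearing thresholds (dB HL) is read in reverse to derive a marker's
# starting position from a selected default profile setting's threshold.

MARKER_TO_THRESHOLD = {0: 20, 1: 30, 2: 40, 3: 50, 4: 60}  # assumed 10-dB steps
THRESHOLD_TO_MARKER = {v: k for k, v in MARKER_TO_THRESHOLD.items()}

def starting_marker(default_threshold_db_hl):
    """Look up the marker starting value for a default profile's threshold."""
    return THRESHOLD_TO_MARKER[default_threshold_db_hl]
```

A default profile with a 40-dB HL threshold in this band would place the marker at starting value 2 under these assumptions.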

[0145] FIG. 6A illustrates UI 400 having a default starting point that corresponds to a default profile setting. Processor(s) 302 may receive the default setting from user 104 or from the manufacturer of hearing instruments 102. In some instances, processor(s) 208 may receive the default setting from the manufacturer and store the default setting to storage device(s) 202. In another example, processor(s) 302 may recommend the default configuration based on input from user 104 (e.g., based on responses to a questionnaire from user 104). In examples involving a default setting, UI 400 may display markers 406 having values of all zero. However, values of all zeros may not correspond to the same hearing thresholds as discussed in FIG. 4A, even though FIG. 4A also showed all zeros. All zeros in this example would correspond to processor(s) 302 not detecting a change to the default setting. The default setting may correspond to any default setting or any hearing threshold mapping. For sake of illustration, FIG. 6B shows an example visual depiction of a hearing threshold mapping 620 for a default setting, where markers 406 having values of all zeros correspond to the hearing thresholds as shown in FIG. 6B. It should be noted that the hearing thresholds are markedly different than is shown in FIG. 4B, even though the marker positions are set to zero in both examples.

[0146] FIG. 6C is an example of a predicted real-ear response 630 that corresponds to the hearing thresholds identified for the marker positions in FIG. 6A. FIG. 6C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 6B using a standardized or proprietary fitting formula. FIG. 6C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.

[0147] FIG. 7A provides an example of marker adjustments with respect to a default setting. Adjustments to markers 406 represent offsets to a default setting. In this example, the default setting is the same as discussed in connection to FIGS. 6A-6C.

[0148] In the example of FIG. 7A, the hearing thresholds of the center frequencies of the bands are changed by 10-dB with each stepped change in marker value. In an illustrative example, in the low frequency band, an increase of one step corresponds to a 10-dB increase (worsening) in hearing threshold (from 15 to 25-dB HL). In the high frequency band, a decrease of one step corresponds to a 10-dB decrease (improvement) in hearing threshold (from 45 to 35-dB HL). In this illustrative example, thresholds at non-center frequencies are either interpolated or extrapolated; however, in some examples, processor(s) 302 may adjust all hearing thresholds within a band by the same amount. A visual depiction of the hearing threshold mapping for FIG. 7A is shown as mapping 720 in FIG. 7B.
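The offset arithmetic in paragraph [0148] can be sketched directly; the default thresholds (15 and 45 dB HL) and the 10-dB step-size come from the illustrative example above, while the function name and sign convention are assumptions.

```python
# Hypothetical sketch of paragraph [0148]: each stepped marker change offsets
# the band's center-frequency threshold by 10 dB relative to the default.

STEP_DB = 10  # assumed step-size per marker step, per the example above

def adjusted_threshold(default_db_hl, marker_steps):
    """Positive steps worsen (raise) the threshold; negative steps improve
    (lower) it."""
    return default_db_hl + STEP_DB * marker_steps

low = adjusted_threshold(15, +1)   # low band: 15 -> 25 dB HL (worsening)
high = adjusted_threshold(45, -1)  # high band: 45 -> 35 dB HL (improvement)
```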

[0149] FIG. 7C is an example of a predicted real-ear response 730 that corresponds to the hearing thresholds identified for the marker positions in FIG. 7A. FIG. 7C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 7B using a standardized or proprietary fitting formula. FIG. 7C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the change s/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.

[0150] FIG. 8A provides an illustrative example of an example UI 400. The example UI 400 illustrates that any number of control indicators 402A-402N may be used, including two control indicators 402, as shown in FIG. 8A. In some examples, processor(s) 302 may display only one control indicator that corresponds to a frequency band of width encompassing low, mid, and/or high frequency bands. For example, one control indicator may have a bandwidth that encompasses both mid and high frequency bands, low and mid frequency bands, low and high frequency bands, or low, mid and high frequency bands. In the example of FIG. 8A, control indicators 402 have markers 406A-N and correspond to a low frequency band 404A and a high frequency band 404N, respectively.

[0151] FIG. 8B provides an illustrative visual depiction of a hearing threshold mapping 820. In an example where each incremental change (i.e., “X”) corresponds to a 10-dB increase and where a normal hearing threshold corresponds to 20-dB HL, the marker positions of FIG. 8A would map to hearing thresholds of 40-dB HL at the first frequency band 404A and a 60-dB HL for the second frequency band 404N. In the example hearing threshold mapping 820 of FIG. 8B, processor(s) 302 have interpolated and extrapolated a set of hearing thresholds on either side of the 40-dB HL identified for 750 Hz and the 60-dB HL identified for 4000 Hz.
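The two-band mapping of paragraph [0151] can be sketched with the stated assumptions: a 10-dB step-size and a 20-dB HL baseline at marker value 0. The marker values (2 and 4) are chosen so the results match the 40-dB HL and 60-dB HL figures in the example; everything else is illustrative.

```python
# Hypothetical sketch of paragraph [0151]: with a 10-dB step per increment
# ("X") and a 20-dB HL normal baseline, marker values map linearly to
# hearing thresholds at each band's reference frequency.

BASELINE_DB_HL = 20  # assumed "normal hearing" threshold at marker value 0
STEP_DB = 10         # assumed dB HL per incremental marker change

def marker_to_threshold(marker_value):
    return BASELINE_DB_HL + STEP_DB * marker_value

# Assumed reference frequencies from the example: 750 Hz and 4000 Hz.
band_thresholds = {750: marker_to_threshold(2), 4000: marker_to_threshold(4)}
```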

[0152] FIG. 8C is an example of a predicted real-ear response 830 that corresponds to the hearing thresholds identified for the marker positions in FIG. 8A. FIG. 8C shows the predicted real ear response for hearing instruments 102 that have been best-fit to fitting targets based on the hearing thresholds of FIG. 8B using a standardized or proprietary fitting formula. FIG. 8C provides an example where changes to the UI result in bilateral adjustments. However, a person of skill in the art will understand that the changes/fitting of hearing instruments 102 may be done separately for the left and right ears (e.g., left and right hearing instruments 102). In addition, a person of skill in the art will understand that an alternative number of control indicators 402, as well as different frequency band ranges, hearing threshold step-sizes, and/or fitting formulas, may be used.

[0153] FIG. 9 provides another illustrative example of an example UI for facilitating a hearing instrument configuration process. The example UI 910 of FIG. 9 may take the form of an equalizer line 912 that can have markers 914A-914N slid or dragged along the length of line 912. For example, FIG. 9 illustrates a y-axis that corresponds to a decibel change to the hearing thresholds and an x-axis that corresponds to frequency. User 104 may slide markers 914 along the length of line 912 to identify a hearing threshold for a particular frequency band. Hearing thresholds for certain frequencies (e.g., frequencies that are needed to implement a fitting formula) would be extracted from the position of markers 914. In the example of FIG. 9, the y-axis may correspond to an offset to a hearing threshold, which would be measured in units of dB. For example, units of dB may be used where processor(s) 302 first determine a preset setting, in which UI 910 provides the ability to determine an offset to a hearing threshold as defined by the preset. In other examples, the y-axis could correspond directly to hearing threshold, in which case the mapping would be in units of dB HL.

[0154] In some examples, processor(s) 302 may cause markers 914A-914N to appear on or near equalizer line 912. In some examples, processor(s) 302 may cause markers 914A-914N to appear in predetermined positions along equalizer line 912 to correspond to predetermined frequency bands. In one example, marker 914C may correspond to a frequency band 250-750 Hz, similar to the example in which control indicator 402A would correspond to a low frequency band in the illustrative example of FIG. 4A. UI 910 may provide more or fewer markers 914 than are shown in FIG. 9. In addition, UI 910 may provide markers 914 at any position along equalizer line 912. Processor(s) 302 may determine the optimal number of markers 914 and optimal positions for markers 914 along equalizer line 912, for example, under various configuration circumstances as described herein (e.g., left ear, right ear, for detected environments, etc.).
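The extraction step described in paragraph [0153], reading off dB values at the fitting formula's key frequencies from marker positions on the equalizer line, can be sketched as follows. The marker coordinates and the linear interpolation between markers are assumptions for illustration.

```python
# Hypothetical sketch: markers on the equalizer line carry (frequency, dB)
# positions; values at the audiometric frequencies a fitting formula needs
# are read off by interpolating between neighboring markers.

def extract_at(freq_hz, markers):
    """Interpolate the dB value at freq_hz from (freq, dB) marker positions;
    outside the outermost markers, hold the nearest marker's value."""
    pts = sorted(markers)
    if freq_hz <= pts[0][0]:
        return pts[0][1]
    if freq_hz >= pts[-1][0]:
        return pts[-1][1]
    for (f0, d0), (f1, d1) in zip(pts, pts[1:]):
        if f0 <= freq_hz <= f1:
            return d0 + (d1 - d0) * (freq_hz - f0) / (f1 - f0)

# Assumed marker positions (frequency in Hz, offset in dB).
markers = [(500, 0.0), (1000, 10.0), (4000, 20.0)]
```

Holding the edge value outside the outermost markers is one possible design choice; the patent leaves the extrapolation behavior open.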

[0155] In some examples, processor(s) 302 may only generate markers 914 in response to detecting user input. For example, UI 910 may present equalizer line 912 without any markers 914. Processor(s) 302 may detect that user 104 has touched UI 910 at a position along equalizer line 912. In response to detecting the touch input, processor(s) 302 may cause one of markers 914 to appear at the touched position. In other examples, processor(s) 302 may not cause a marker to appear in any instance but may instead allow user 104 to manipulate equalizer line 912 without any markers (markers 914 or otherwise). The same may be said for UI 400 or UI 800 with respect to markers 406 and 806.

[0156] Processor(s) 302 may detect user 104 sliding markers 914 horizontally along equalizer line 912 (e.g., to a new position along the x-axis). Processor(s) 302 may detect user 104 moving one of markers 914 upward or downward in the vertical direction. In addition, processor(s) 302 may detect more than one touch input via UI 400, UI 800, or UI 910 (e.g., more than one finger). For example, processor(s) 302 may track adjustments made simultaneously to multiple markers 406 (or markers 914) and, in turn, simultaneously identify hearing thresholds and determine profile settings in response to the tracked adjustments. In another example, processor(s) 302 may utilize multiple touch inputs to change the size of the frequency band. For example, processor(s) 302 may detect more than one touch input via UI 400 or UI 910 and detect the touch inputs moving toward one another or away from one another, such as in a pinching gesture. In such examples, processor(s) 302 may adjust the width of the frequency band to encompass a broader or narrower frequency range.
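The pinch-to-resize behavior in paragraph [0156] can be sketched as scaling a band about its center by the ratio of the final to initial distance between two touch points. The band edges, the scaling rule, and the function name are all assumptions; the patent does not specify how the new width is computed.

```python
# Hypothetical sketch of paragraph [0156]: a two-finger pinch widens
# (pinch-out) or narrows (pinch-in) the frequency band a control covers.

def resize_band(band, start_span, end_span):
    """Scale a (low_hz, high_hz) band about its center by the ratio of the
    final to the initial distance between the two touch points."""
    low, high = band
    center = (low + high) / 2
    scale = end_span / start_span  # >1 widens the band, <1 narrows it
    half = (high - low) / 2 * scale
    return (center - half, center + half)

# Assumed example: fingers move from 100 px apart to 200 px apart.
wider = resize_band((1000, 2000), start_span=100, end_span=200)
```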

[0157] With respect to FIG. 9, processor(s) 302 may detect user 104 moving marker 914A upward from 0-dB to 20-dB at a frequency band that corresponds to marker 914A. The frequency band that corresponds to marker 914A may be established based on the position of marker 914A. In some cases, the frequency band may be established relative to the position of other markers 914 along equalizer line 912.

[0158] In response to receiving touch input from user 104 requesting movement of marker 914A, processor(s) 302 may cause marker 914A to move in the same direction as the touch input (e.g., left, upward, right, downward, etc.). In some examples, UI 910 may also cause other parts of equalizer line 912 to move proportional to marker 914A and/or the position of other markers. For example, processor(s) 302 may cause areas of equalizer line 912 that are to the left and right of marker 914A to move upward as shown. In the example of FIG. 9, processor(s) 302 may cause marker 914D to move upward along with equalizer line 912, where the frequency band associated with marker 914A is of wide enough breadth such that a part of equalizer line 912 that corresponds to marker 914D moves upward as a result of moving marker 914A upward. User 104 may then desire to adjust marker 914D, in which case processor(s) 302 may cause marker 914D and equalizer line 912 to move in the direction user 104 requests. For example, where user 104 desires to then move marker 914D downward, processor(s) 302 may cause marker 914D to move downward while maintaining marker 914A at 20-dB. Processor(s) 302 may dynamically adjust equalizer line 912 to maintain fluidity between markers 914 as the position of markers 914 changes.

[0159] In some examples, the frequency mapping may be at linear, logarithmic, or inconsistent intervals. In the example of FIG. 9, UI 910 is displaying the x-axis on a logarithmic scale. Processor(s) 302 may do the same for the control indicators of FIG. 4A. For example, control indicator 402A may correspond to a frequency band that includes frequency regions expressed on a logarithmic scale. This may be true even when UI 400 does not visually depict the frequency band range, as it is visually depicted in UI 910 with a continuous equalizer line 912 that extends horizontally across the entire graph. For example, UI 400 may depict control indicators 402 and markers 406 without visually depicting the spectrum of individual frequencies within a frequency band, where processor(s) 302 may take into account the frequencies incrementing on a logarithmic, linear, or other scale, when identifying hearing thresholds with respect to the frequency band or regions within and adjacent to the frequency band.
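The logarithmic x-axis of paragraph [0159] implies a mapping between a marker's horizontal position and frequency. A sketch of that mapping follows; the axis endpoints (125 Hz and 8,000 Hz) and the normalized [0, 1] position coordinate are assumptions for illustration.

```python
# Hypothetical sketch: on a logarithmic x-axis, a marker's normalized
# horizontal position maps to frequency geometrically between the axis
# endpoints, and back again via a logarithm.

import math

F_MIN, F_MAX = 125.0, 8000.0  # assumed axis limits in Hz

def position_to_freq(x):
    """x in [0, 1] along the axis -> frequency on a logarithmic scale."""
    return F_MIN * (F_MAX / F_MIN) ** x

def freq_to_position(f):
    """Inverse mapping: frequency -> normalized axis position in [0, 1]."""
    return math.log(f / F_MIN) / math.log(F_MAX / F_MIN)
```

On this assumed axis, the midpoint of the line corresponds to 1,000 Hz, the geometric mean of the endpoints, rather than the arithmetic mean a linear axis would give.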

[0160] Processor(s) 302 may use a fitting formula to determine one or more settings for hearing instruments 102 based on hearing thresholds determined for user 104 with respect to the frequency bands. Processor(s) 302 may individually store the settings in a storage device 316, such as one of application modules 322 or companion application 324. Processor(s) 302 may combine the settings into a single combined setting at a later time, such as when hearing instruments 102 are ready to receive the configuration file (e.g., the setting information). In other examples, the mapping provides a single setting based on a combination of values for markers 406. For example, the mapping may have a single setting that corresponds to the marker levels for each of control indicators 402 provided with UI 400.

[0161] In some examples, hearing instrument 200 will modify the way that sound is being processed with each frequency band based on which of markers 406 are being manipulated. For example, processor(s) 302 may detect movement of marker 406A and transmit an instruction to hearing instrument 200 to increase or decrease the low-frequency gain that corresponds to the adjusted position of marker 406A. In some examples, processor(s) 302 may continuously change the way in which the low-frequency sound is processed based on the detected movements of marker 406A. For example, processor(s) 302 may alter the gain, compression, and frequency response of hearing instrument 200 for one or more sounds that resonate within a frequency band in response to the detected movements of marker 406A.

[0162] In some examples, processor(s) 302 automatically cause markers 406 to move as displayed on the UI. For example, processor(s) 302 may receive voice input from user 104 that causes the markers to move. Processor(s) 302 may receive input from user 104, such as "better" or "worse", that may cause markers 406 to move in a direction that corresponds to the voice input of user 104. For example, user 104 may audibly state that they are unable to hear sounds in the low frequency range. Processor(s) 302, upon receiving this voice input, may automatically adjust one of markers 406 to a new position to account for user 104 being unable to hear sounds in the low frequency range. In such examples, processor(s) 302 or another device 106 paired to hearing instrument 200 can cause an audio signal to change based on the corresponding setting or hearing threshold that maps to the adjusted marker value as markers 406 automatically move to each new position. Processor(s) 302 may then produce an audio signal for user 104 asking user 104 whether the sound is now better in the low frequency range. In an iterative fashion, user 104 may state yes or no, thereby causing processor(s) 302 to make further adjustments until user 104 is satisfied with the final profile setting or set of profile settings.
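The iterative voice-driven loop in paragraph [0162] can be sketched as follows. The convention used here, that "worse" nudges the marker one step upward and "better" ends the loop, is an assumption; the patent describes the direction of movement as corresponding to the voice input without fixing a specific rule, and the marker limits are likewise assumed.

```python
# Hypothetical sketch of paragraph [0162]: voice replies drive a marker
# toward a satisfactory position one step at a time.

def adjust_by_feedback(start_value, replies, step=1, lo=0, hi=10):
    """Assumed convention: 'worse' raises the marker value by one step
    (clamped to hi); 'better' means the user is satisfied, so stop."""
    value = start_value
    for reply in replies:
        if reply == "worse":
            value = min(hi, value + step)
        elif reply == "better":
            break  # user is satisfied with the current setting
    return value

# Assumed session: two 'worse' replies, then the user reports 'better'.
final = adjust_by_feedback(0, ["worse", "worse", "better"])
```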

[0163] In a non-limiting example, processor(s) 302 may prompt user 104 to adjust one of markers 406. For example, processor(s) 302 may receive an indication from user 104 that user 104 is unable to hear sounds in the low frequency range (e.g., using voice input from user 104 or a questionnaire, etc.). As such, processor(s) 302 may prompt user 104 to adjust one of markers 406 that correspond to the low frequency range. For example, processor(s) 302 may cause markers 406 to blink or pulsate so as to indicate to user 104 which markers 406 should be adjusted to handle the problem user 104 is having with the low frequency range.

[0164] As processor(s) 302 register the movement of marker 406A (e.g., from 0 to 1), processor(s) 302 will update the gain, compression, and frequency response of an incoming audio signal, or another device 106 will modify an outgoing audio signal being transmitted to processor(s) 302 or hearing instrument 200.

[0165] In some examples, devices 106 may include devices configured to transmit media data to one or more hearing instruments 102. In turn, hearing instruments 102 may receive the media data transmitted from devices 106. Media data may include acoustic sound waves, waveforms, binary audio data, audio signals, radio waves, acoustic energy waveforms, etc. In such examples, devices 106 may be configured to condition the media data based on the environment and/or profile settings of user 104 prior to or while transmitting the media data to hearing instruments 102. In a non-limiting example, a transmitter device, such as a dongle, may be plugged into one of devices 106, such as a television or smart television. The transmitter device may be configured to stream media data (e.g., audio signals) from one or more of devices 106 to hearing instruments 102. Processor(s) 302 may communicate the environment and/or profile settings of user 104 to the transmitter device or devices 106 directly. In some examples, processor(s) 302 may communicate marker values for frequency bands with devices 106 or the transmitter device, such that devices 106 or the transmitter device may identify hearing thresholds and/or determine profile settings using the mapping.

[0166] In some examples, a file including the mapping may be transmitted from one of devices 106 (e.g., a mobile phone) or from hearing instruments 102 to the transmitter device (e.g., the dongle) or to another one of devices 106 that is paired to the transmitter device (e.g., a smart television). The transmitter device or one of devices 106 may read the file in order to access the mapping. In addition, the transmitter device or devices 106 may determine the environment of hearing instruments 102. The transmitter device or devices 106 may pre-condition content of the media data based on the environment and/or profile settings of user 104. In this way, hearing instruments 102 may not need to perform additional processing or conditioning to the media data upon receiving the media data from the transmitter device. Allowing other devices to pre-condition media data prior to or while transmitting media data to hearing instruments 102, in turn, allows hearing instruments 102 to conserve power resources for other tasks.

[0167] In some examples, there may be a time delay from when marker value adjustments are processed, for example, through one of devices 106 providing the UI, to when a profile setting may then be transmitted to hearing instruments 102. For example, a time delay may be present from when processor(s) 302 detect a change to the position of markers 406 to when processor(s) 302 transmit a resultant profile setting to hearing instruments 102 and implement the profile setting on hearing instruments 102. In such instances, processor(s) 302 may selectively choose the source of sound for conducting an instance of a configuration process in such a way that avoids time delays as much as possible and maximizes efficiency with respect to hearing instruments 102 and processor(s) 302. For example, processor(s) 302 may select device 106A (e.g., a smartphone providing the UI) as the source of sound for conducting the configuration process. As such, processor(s) 302 may facilitate transmission of media data streamed from device 106A to hearing instruments 102.

[0168] In some examples, processor(s) 302 may pre-condition the media data used during the configuration process before or while transmitting the media data to hearing instruments 102. Processor(s) 302 may continue to determine preliminary profile settings while processor(s) 302 detect adjustments to marker values during the configuration process. For example, processor(s) 302 may determine new preliminary profile settings based on each individual adjustment to markers 406, but each new profile setting may or may not be satisfactory to user 104. Processor(s) 302 may pre-condition the media data regardless based on each new preliminary profile setting and transmit the pre-conditioned media data to hearing instruments 102. Hearing instruments 102 may not conduct any further conditioning of the received media data because processor(s) 302 have already pre-conditioned the media data.

Processor(s) 302 may detect from user 104 an action requesting conclusion of the configuration process or indicating that the currently modified condition of sound coming from hearing instruments 102 is satisfactory to user 104. For example, processor(s) 302 may receive, via UI 400, an indication that the hearing instrument setting or settings are ready to be finalized or are to be finalized. Device 106A may transmit a final profile setting to hearing instruments 102 (e.g., as a configuration file).

[0169] In some examples, processor(s) 302 may generate an instruction to implement the setting and transmit the instruction to hearing instruments 102. Hearing instruments 102 may then implement the final profile setting, for example, by storing the final profile setting to storage device(s) 202. In such examples, user 104 need not wait for hearing instruments 102 to implement each of the potentially numerous preliminary profile settings and instead may only need to wait for hearing instruments 102 to implement the final profile setting. In addition, allowing other devices to pre-condition media data prior to or while transmitting media data to hearing instruments 102, in turn, allows hearing instruments 102 to conserve power resources for other tasks. In addition, the configuration process may take less time due to the avoidance of time delays and, as a result, processor(s) 302 may operate more efficiently to arrive at final profile settings.

[0170] In some examples, processor(s) 302 may apply a transfer function to account for differences in types of acoustic sources (e.g., live audio sources vs. streaming audio sources). Live audio sources may include live conversation or other live sounds (e.g., music, wind noise, machine noise, or some combination thereof), whereas streaming audio sources may include sources that receive audio data that has been transmitted from a transmitter device (e.g., one of devices 106). In some examples, processor(s) 302 may determine a profile setting based on streamed audio used during the configuration process, but user 104 may desire to implement the same profile setting for live audio, as well. In this way, processor(s) 302 may perform one configuration process to determine one profile setting or set of profile settings that may be applied differently based on the audio source and the transfer function.

[0171] In some examples, a transfer function may include a signal processing algorithm configured to condition audio signals based on source type in order to maintain uniformity of audio signals provided to the user regardless of the source type used to perform the configuration process. The transfer function may include predefined conversion parameters and constants that may be applied to a profile setting. Thus, although processor(s) 302 may only be storing one set of profile settings, processor(s) 302, in some examples, may effectively apply two sets of profile settings based on output from the transfer function.

[0172] In an illustrative example, processor(s) 302 may apply a first set of profile settings to live audio signals and a second set of profile settings to streamed audio signals, where processor(s) 302 derived the second set of profile settings from the first set of profile settings through application of the transfer function or where processor(s) 302 derived the first set of profile settings from the second set of profile settings through application of the transfer function. The directionality of the derivation may depend on what source processor(s) 302 used to perform the configuration process in the first place. For example, where processor(s) 302 used a streaming audio source to perform the configuration process, then processor(s) 302 may derive, through application of the transfer function, the first set of profile settings for live audio signals from the second set of profile settings for streamed audio signals.
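The derivation of one settings set from the other in paragraphs [0171]-[0172] can be sketched with a simple parameterized transform. The linear form, the scale and offset values, the per-band-gain representation of a profile setting, and the band names are all illustrative assumptions; the patent only requires predefined conversion parameters and constants.

```python
# Hypothetical sketch of paragraphs [0171]-[0172]: a transfer function with
# predefined parameters converts the profile settings determined for one
# source type into the settings applied for the other source type.

def apply_transfer(settings_db, scale=0.5, offset_db=2.0):
    """Derive the second set of per-band gains (dB) from the first by
    applying the transfer function's assumed scale and offset parameters."""
    return {band: scale * gain + offset_db for band, gain in settings_db.items()}

# Assumed first set of settings, determined from a live-audio session.
live = {"low": 10.0, "mid": 20.0, "high": 30.0}
streamed = apply_transfer(live)  # derived set for streamed audio
```

Only one set need be stored; the other is reproduced on demand from the stored set and the transfer function's parameters, matching the single-storage observation in paragraph [0171].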

[0173] In addition, processor(s) 302 may select a transfer function from a set of transfer functions based on the type of audio signals detected during the configuration process (e.g., speech, music, wind noise, machine noise, or some combination thereof) or based on other characteristics of the incoming audio used to determine the first set of one or more profile settings. For example, processor(s) 302 may detect live speech and/or wind noise when determining a first profile setting from live sources. As such, processor(s) 302 may determine a set of one or more characteristics of the incoming audio (e.g., that the audio includes speech and/or wind noise). Processor(s) 302 may determine a particular transfer function and/or tailor a set of parameters and/or constants for a particular transfer function based on the particular sounds or combination of sounds detected (e.g., the sound characteristics). As such, the transfer function may be based at least in part on the one or more characteristics.

[0174] In some examples, processor(s) 302 may determine a transfer function and/or apply the transfer function to the first profile setting in order to determine a second set of one or more profile settings, the transfer function being based at least in part on the set of one or more characteristics. For example, processor(s) 302 may apply the transfer function to one or more settings to determine a second set of one or more settings, transformed based on parameters, coefficients, and/or constants specified by the transfer function being deployed. In some instances, the transfer function may be used to alter the initial mapping in a way that fits the transfer function. For example, a marker adjustment from 0 to 1 may, based at least in part on the selected or identified transfer function, map to a dB value, which may or may not be distinct from the 30-dB HL as in the example of a 10-dB step-size and a 20-dB normal hearing threshold at a marker value of 0, the distinction depending on the transfer function that processor(s) 302 have identified or selected for the particular application.

[0175] In some examples, processor(s) 302 may receive a request from user 104 to further modify the final profile setting as needed via a new instance of the configuration process. For example, user 104 may determine at a later point in time that the profile setting requires fine-tune adjustments or micro-adjustments, which allow processor(s) 302 to rely on previously determined profile settings to further enhance the profile setting without having to determine an enhanced profile setting from a clean slate.

[0176] In some examples, processor(s) 302 may provide the ability for user 104 to make micro-adjustments using control indicators 402. For example, processor(s) 302 may first allow adjustments on a macro scale, where each individual increment (e.g., from 0 to 1) corresponds to a certain degree of change in hearing thresholds. In some examples, processor(s) 302 may register a gesture (e.g., pinching fingers inward or outward) that indicates to processor(s) 302 that the scale of control indicators 402 is to be adjusted. Processor(s) 302 can then adjust the view of control indicators 402 to a smaller, adjusted scale that allows for smaller scaled changes. For example, processor(s) 302 may be able to register fractional changes in marker values (e.g., from 1 to 1.2), where ordinarily, processor(s) 302 may only register integer changes. As such, processor(s) 302 may register incremental changes on a scale that provides a granular degree of control over changes to values of markers 406. In some examples, processor(s) 302 may interpolate or extrapolate hearing threshold values where the exact value (e.g., a fractional value) is not explicitly provided for in the mapping.

[0177] In one example, processor(s) 302 may determine a hearing threshold step-size that corresponds to a particular marker value adjustment (e.g., a standard marker value adjustment). For example, processor(s) 302 may determine that the hearing threshold step-size is X-decibels (e.g., 10-dB, 20-dB, etc.) per standard marker value adjustment, where the standard marker value adjustment is a default scale of integer value increases. For example, processor(s) 302 may determine that the standard marker value adjustment is an integer value adjustment, such as 0 to 1 to 2 to 3, etc. In such examples, processor(s) 302 may also be able to provide an option for and detect micro-adjustments to marker values and identify hearing thresholds accordingly. In one example, processor(s) 302 may detect additional adjustments to one or more of markers 406 that correspond to changes in hearing thresholds that are less than, equal to, or greater than the hearing threshold step-size of X-decibels. For example, processor(s) 302 may detect fractional adjustments to markers 406, such as non-integer value adjustments, or larger adjustments, where some or all adjustment values may be less than, equal to, or greater than the standard marker value adjustment (e.g., 0 to 0.5 to 1 to 1.5 to 2 to 3 to 4 to 6, etc.). In such examples, processor(s) 302 may detect adjustments that correspond to something different than the standard step-size. For example, a change from 0 to 0.5 may correspond to a change in hearing threshold from 20-dB to 25-dB, where the standard step-size is 10-dB for every standard integer change in marker value (e.g., from 0 to 1 to 2). In another example, a change from 4 to 6 may correspond to an increase of 20-dB HL, skipping the value 5 in between. In some instances, processor(s) 302 may skip values (e.g., the value 5 in the previous example) or apply fractional adjustments as processor(s) 302 detect movement of markers 406 in one direction (e.g., upward along control indicators 402), but may revert to a different adjustment scale (e.g., fractional, integer, etc.) when detecting movement of markers 406 in the other direction (e.g., downward along control indicators 402). In this way, user 104 may adjust markers 406 at a desirable rate in order to allow processor(s) 302 to efficiently identify hearing thresholds.
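
For illustration only, the step-size arithmetic described above (10-dB per standard integer step, with fractional adjustments mapped proportionally) can be realized as a simple linear conversion. The function name and the base/step values are hypothetical placeholders, not values prescribed by the disclosure:

```python
def marker_to_threshold(marker_value, base_db=20.0, step_db=10.0):
    """Convert a marker value to a hearing threshold in dB HL.

    Assumes the standard marker value adjustment (0 -> 1 -> 2 ...)
    corresponds to `step_db` decibels, so a fractional micro-adjustment
    (e.g., 0 -> 0.5) maps proportionally (here, +5 dB).
    """
    return base_db + marker_value * step_db

marker_to_threshold(0)    # baseline: 20.0 dB HL
marker_to_threshold(0.5)  # fractional adjustment: 25.0 dB HL
marker_to_threshold(2)    # two standard steps: 40.0 dB HL
```

Under this linear assumption, a jump from marker value 4 to 6 corresponds to a 20-dB HL increase, matching the skipped-value example in the passage.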

[0178] Processor(s) 302 may invoke fractional or micro-adjustments in response to receiving input from user 104, such as a user request to make fine-tune adjustments, or may do so automatically, for example, in response to identifying a need to provide micro-adjustments with respect to one or more control indicators 402. In some instances, processor(s) 302 may determine the need for micro-adjustments based on a rate at which user 104 causes markers 406 to move along control indicators 402. For example, if processor(s) 302 detect slow, incremental adjustments to markers 406 upward along control indicators 402, processor(s) 302 may determine that micro-adjustments may be beneficial and invoke micro-adjustments as such. On the other hand, processor(s) 302 may detect fast adjustments to markers 406 indicating that micro-adjustments may be unnecessary.
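
For illustration only, the rate-based decision described above might be sketched as follows; the function name, sampling scheme, and the 5.0 units-per-second threshold are hypothetical:

```python
def choose_scale(positions, timestamps, slow_rate=5.0):
    """Pick an adjustment scale from how fast the user drags a marker.

    `positions` and `timestamps` are parallel samples of a marker's
    position (in marker-value units) and the times they were observed.
    A slow drag suggests the user is fine-tuning, so the micro scale is
    enabled; a fast drag keeps the macro scale.
    """
    dt = timestamps[-1] - timestamps[0]
    if dt <= 0:
        return "macro"
    rate = abs(positions[-1] - positions[0]) / dt
    return "micro" if rate < slow_rate else "macro"

choose_scale([0.0, 0.4], [0.0, 1.0])  # slow, incremental drag -> micro
choose_scale([0.0, 8.0], [0.0, 0.5])  # fast drag -> macro
```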

[0179] User 104 may perform micro-adjustments to a preliminary profile setting or a final profile setting after having settled on the preliminary profile setting or final profile setting from UI 400. In some examples, processor(s) 302 may have saved a profile setting to storage device(s) 316. Processor(s) 302 may receive an indication that user 104 would like to perform micro-adjustments to the setting of hearing instrument 200. User 104 may provide such an indication at any time, for example, hours, days, or weeks after settling on a hearing instrument setting. In addition, user 104 may only want to perform micro-adjustments for a particular environment or for a particular hearing instrument 200 (e.g., left or right).

[0180] In some examples, processor(s) 302 may present the previous or current profile setting to user 104 via UI 400. For example, processor(s) 302 may provide UI 400 to user 104 via display screen 312, where markers 406 are preset along control indicators 402 based on the previously determined profile setting. Processor(s) 302 may receive input from user 104 via input device(s) 308. Processor(s) 302 may determine that the input corresponds to a fractional or micro-adjustment to one of markers 406. As such, processor(s) 302 may be configured to detect micro-adjustments to a first marker 406A.

[0181] In some examples, processor(s) 302 may identify various step-sizes for markers 406. A step-size may determine the difference in hearing threshold between one marker value and the next marker value. In some examples, the step-size may depend on the corresponding frequency bands. For example, marker 406A may provide a different step-size than marker 406B. In a non-limiting example, processor(s) 302 may register a 10-dB change in hearing threshold upon detecting an adjustment of marker 406A from 0 to 1, but may register a 20-dB change in hearing threshold upon detecting an adjustment of marker 406B from 0 to 1.

[0182] In some examples, processor(s) 302 may base the step-size on just-noticeable difference (JND) values that may be known for user 104. JND values specify the degree to which sound parameters must change in order for user 104 to perceive the change. In some examples, processor(s) 302 may learn JND values for user 104 (e.g., through a machine learning or artificial intelligence algorithm) and store JND values in storage device(s) 202 or storage device(s) 316. In a non-limiting example, where processor(s) 302 are configured to adjust the hearing level for an entire audio signal, user 104 may perceive the change sooner relative to an adjustment to the profile setting for only a small segment of the frequency response. In such examples, where processor(s) 302 adjust the level for only a small segment of the frequency response, processor(s) 302 may implement a larger step-size.
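
For illustration only, tying the step-size to a JND might look like the following sketch. The function name, the safety margin, and the example JND values are hypothetical; the disclosure does not specify how a JND maps to a step-size:

```python
def step_size_for(jnd_db, margin=1.5):
    """Choose a step-size at least as large as the user's JND so that
    each marker step produces a perceptible change in level."""
    return jnd_db * margin

# Broadband level changes are noticed sooner (smaller JND), so a
# narrow-band adjustment gets a larger step-size, as the passage notes.
step_size_for(jnd_db=2.0)  # broadband adjustment: 3.0-dB steps
step_size_for(jnd_db=6.0)  # narrow-band adjustment: 9.0-dB steps
```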

[0183] Processor(s) 302 may update the profile setting based on the micro-adjustments, that is, based on the newly adjusted marker values and corresponding hearing thresholds. Processor(s) 302 may update settings for any number of environments or for particular frequency bands within an environment. For example, processor(s) 302 may only present the option to update a profile setting with respect to the low frequency band.

This may occur when processor(s) 302 receive an indication that user 104 is dissatisfied with the way in which processor(s) 302 are conditioning low frequency sound in a particular environment. In some examples, processor(s) 302 may aid or guide user 104 in determining how a setting may need to be updated. For example, processor(s) 302 may implement a process-of-elimination type strategy that allows user 104 to hone in on where a problem might exist for a profile setting.

[0184] In some examples, processor(s) 302 may present an option on UI 400 that allows user 104 to toggle between adjustment scales (e.g., macro-, micro-, or normal-scaled adjustments). For example, processor(s) 302 may register selection of an option to adjust marker values on a micro-scale. In such instances, processor(s) 302 may detect selection of one of markers 406. Upon selection, processor(s) 302 may automatically provide a zoomed-in view of control indicators 402. Upon detecting movement of markers 406, processor(s) 302 may update the marker values on UI 400. In some examples, processor(s) 302 may continue to outwardly present the changes as integer values (e.g., 0 to 1, 1 to 2), but in reality, may be registering changes on a smaller scale (e.g., 0 to 0.1, 0.1 to 0.2). For example, processor(s) 302 may perform an automatic conversion of changes in marker values based on the scale processor(s) 302 are using to elicit input from user 104. In other examples, processor(s) 302 may present the changes in marker values based on the actual change. For example, micro-adjustments to marker 406B may read as actual changes from 3 to 3.05 to 3.10, etc., rather than as changes from 3 to 4 to 5. The mapping algorithm may take either scenario into account using correction factors or other techniques. These micro-adjustments may result in new marker values. Processor(s) 302 may reference the new marker values in the mapping as described herein.
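
For illustration only, the outward integer presentation of internal micro-scale values described above amounts to a scale conversion; the function name and the factor of 10 are hypothetical:

```python
def display_value(internal_value, display_scale=10):
    """Outwardly present micro-scale changes as integer steps.

    E.g., internal changes 0 -> 0.1 -> 0.2 are shown to the user as
    0 -> 1 -> 2, while the mapping still receives the true internal
    (fractional) marker value.
    """
    return round(internal_value * display_scale)

display_value(0.1)  # shown to the user as 1
display_value(0.2)  # shown to the user as 2
```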

[0185] In some examples, the hearing thresholds may need to be interpolated or extrapolated from the data, where the values are not explicitly provided for in the mapping algorithm. For example, the mapping may have a first setting that corresponds to a value of 4 and another that corresponds to a value of 5. A micro-adjustment may result in a value of 4.3, in which case a new setting may be interpolated between the two known data points. In some examples, the position of markers 406 may be relative to a setting that was previously set, in which case, user 104 can fine-tune the previous setting.
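
The interpolation case above (a micro-adjustment landing at 4.3, between known data points at 4 and 5) can be sketched with simple linear interpolation. The function name and the threshold values in the example mapping are illustrative only:

```python
def interpolate_threshold(mapping, marker_value):
    """Linearly interpolate a hearing threshold for a marker value that
    falls between two data points in `mapping`, a dict of
    marker value -> hearing threshold (dB HL)."""
    if marker_value in mapping:
        return mapping[marker_value]
    keys = sorted(mapping)
    # find the known data points that bracket the requested value
    lo = max(k for k in keys if k < marker_value)
    hi = min(k for k in keys if k > marker_value)
    frac = (marker_value - lo) / (hi - lo)
    return mapping[lo] + frac * (mapping[hi] - mapping[lo])

mapping = {4: 55.0, 5: 65.0}
interpolate_threshold(mapping, 4.3)  # about 58 dB HL, between the two known points
```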

[0186] FIG. 10 is a flowchart illustrating an example operation 1000 of this disclosure. Other examples of this disclosure may include more, fewer, or different actions. In the example of FIG. 10, computing system 108 may determine a setting for a hearing instrument, such as one of hearing instruments 102, using input received from user 104 via a UI.

[0187] In the example of FIG. 10, computing system 108 may present a UI to user 104 (1000). The UI may be similar to that of FIGS. 4A, 8A, 9, or any other UI that is able to solicit or elicit input from user 104 regarding adjustable marker values. In some examples, processor(s) 302 may provide UI 400 by one of devices 106 configured to interface with hearing instruments 102. In another example, processor(s) 208 may provide UI 400, for example, on one of devices 106, such as by generating user interface data for one of devices 106. In other words, one of devices 106 configured to interface with hearing instruments 102 may provide the user interface (e.g., UI 400). In a non-limiting example, UI 400 may be presented to user 104 on one of devices 106 paired to one or more hearing instruments 102.

[0188] The user interface, such as UI 400, may include one or more control indicators 402 that each correspond to a frequency band. In another example, the user interface, such as UI 400, may include one or more control indicators 402 that each correspond to a frequency region. As discussed above, a single frequency band (e.g., 1000-2000 Hz) may include multiple frequency regions, discrete frequencies, or any combination thereof. In some examples, a frequency region may include one or more discrete frequencies within or adjacent to a frequency band, such as within 50, 100, 200, etc. Hz on either side of the frequency band. In some instances, the frequency region adjacent to a frequency band may be determined based on an adjacent frequency band or frequency region, such that the settings may account for as much of the entire frequency spectrum of normal human hearing as is possible.

[0189] Control indicators 402 may each include markers 406 that are individually positioned along control indicators 402 to indicate marker values. In some examples, control indicators 402 and markers 406 may be integrated as a single interactive graphical unit. In other examples, control indicators 402 and markers 406 may all be separate interactive graphical units. Processor(s) 302 may overlay the interactive graphical units as layered constructs of UI 400.

[0190] Processor(s) 302 may determine initial marker values for markers 406 along control indicators 402 of UI 400 (1002). As discussed above, processor(s) 302 may set initial values for markers 406 to 0 by default. In some examples, processor(s) 302 may preset one or more of markers 406 at certain non-zero positions. For example, user 104 may have already saved a setting for markers 406 but would like to make further adjustments or micro-adjustments. Processor(s) 302 may preset one or more of markers 406 to positions that correspond to the previously saved setting.

[0191] Processor(s) 302 may have stored the marker values or may perform a back calculation from the setting to derive the marker values. In some examples, processor(s) 302 may calculate assumed hearing thresholds based on information regarding user 104 (e.g., age, gender, history, etc.) and/or on a detected environment. Processor(s) 302 may set markers 406 at initial starting points above 0 based on the assumed hearing thresholds. For example, where user 104 used hearing instruments 102 in the past, processor(s) 302 may attempt to approximate the setting of the past hearing instruments 102 of user 104 and provide an initial position for markers 406. As such, processor(s) 302 may determine an initial marker value for one of control indicators 402 based at least in part on the initial position of one of markers 406. For example, the initial position may be 0 by default.

[0192] In some examples, processor(s) 302 may determine that a change in state has occurred with respect to the initial marker value. For example, processor(s) 302 may detect that user 104 has manipulated a marker in some way. In a non-limiting example, processor(s) 302 may register a change in state as soon as user 104 provides input, such as a touch input, that selects one of markers 406 for manipulation.

[0193] Processor(s) 302 may determine adjusted values of markers 406 (1004). For example, processor(s) 302 may detect that user 104 has manipulated marker 406B for control indicator 402B to a new position along control indicator 402B via UI 400. Processor(s) 302 may determine an adjusted marker value for control indicator 402B based at least in part on the adjusted position of marker 406B. For example, processor(s) 302 may register a change in value of marker 406B as marker 406B moves along control indicator 402B from 0 to 1, 1 to 2, and 2 to 3.

[0194] In some examples, processor(s) 302 may register marker values by determining when to increment a marker value and may commit the value to a specified register location in storage device(s) 316. For example, processor(s) 302 may determine that the value of marker 406B corresponds to a value of 3 based on the adjusted position of marker 406B.

[0195] In some examples, processor(s) 302 may receive a manual input of a marker value. For example, processor(s) 302 may provide a fillable field 408A-408N via UI 400 that corresponds to one or more frequency bands. Processor(s) 302 may register a value inputted via fillable field 408A-408N and use the input as adjusted values for subsequent use in determining a hearing threshold or profile setting. Processor(s) 302 may transmit the adjusted values to hearing instrument 200 or another computing device 300, where those values may be processed or stored upon receipt. A person of skill in the art would understand that fillable field 408A-408N is optional and that other UI modes may be provided in order to elicit input from user 104.

[0196] In some examples, processor(s) 302 may receive a command signaling that user 104 is done making adjustments, either temporarily or permanently, before processor(s) 302 register the adjusted values. For example, UI buttons may be provided on UI 400 that user 104 may activate to communicate at what stage user 104 is in the configuration process. In other examples, processor(s) 302 may register that marker 406B has been idle for a predetermined amount of time (e.g., 0.5 seconds, 1 second, 2 seconds, etc.) before processor(s) 302 register the adjusted values.

[0197] In some examples, processor(s) 302 may automatically control adjustment of the marker positions based on feedback or input from user 104. In some examples, processor(s) 302 may cause markers 406 to move or stop moving in response to input received from user 104. For example, input device 308 may detect input from user 104 (e.g., hand gesture, eye movement, head movement, etc.). Input device 308 may relay the detected input to processor(s) 302. For example, input device 308 may relay coordinates of a touch input, a length of time a touch input occurred, directionality of the touch input, an amount of pressure applied, and so forth. Processor(s) 302 may use the input information to identify a corresponding action that processor(s) 302 are to take. For example, processor(s) 302 may reference a file stored in application module 322A that maps the input information to actions. In some examples, such a file includes logical arguments, decision trees, and so forth.

[0198] In an illustrative and non-limiting example, input device 308 may detect a first gesture (e.g., a hand gesture) that causes processor(s) 302 to portray marker 406A as moving in a first direction (e.g., upward along control indicator 402A). Input device 308 may detect a second gesture that causes processor(s) 302 to cease movement of marker 406A. Input device 308 may detect a third gesture that causes processor(s) 302 to portray marker 406A as moving in a second direction (e.g., downward along control indicator 402A). In some examples, a single gesture may cause processor(s) 302 to portray multiple markers 406 as moving. Processor(s) 302 may adjust sound input based on the movement of markers 406. For example, processor(s) 302 may pre-condition sound that is to be transmitted to hearing instruments 102.

[0199] User 104 may select from UI 400 that user 104 is complete with the adjustment process, for example, by clicking a 'DONE' key (not shown) on UI 400. Processor(s) 302 may receive the selection from user 104 via UI 400. Processor(s) 302 may store the setting as a profile, memory, profile setting, or memory setting. In some examples, processor(s) 302 may update one or more profiles depending on the circumstances based on the setting. Processor(s) 302 may toggle between profiles depending on the circumstances or environment of user 104. In some examples, selecting the 'DONE' key may signal to computing system 108 to perform accessing the mapping, identifying the hearing threshold, and/or determining the setting. In some examples, computing system 108 may not identify the hearing threshold, determine the setting, and/or access the mapping until an affirmative action is taken by user 104, and in some cases, one or more of identifying the hearing threshold, determining the setting, and/or accessing the mapping may be optional. By indicating that user 104 is complete, the setting may be transmitted to the hearing instrument for final installation following identifying the hearing threshold and/or determining the profile setting.

[0200] Processor(s) 302 may then access a mapping of hearing thresholds that map to marker values (1006). The mapping may be stored in memory of one or more of computing devices 106 or hearing instrument 200. In some examples, processor(s) 302 may access a mapping that identifies one or more relationships between marker values and hearing thresholds. For example, the mapping may link marker values and adjusted marker values to hearing thresholds. In some examples, a single mapping may link marker values to hearing thresholds with respect to a particular frequency band.

[0201] In an illustrative example, a first mapping may correspond to a first frequency band, whereas a second mapping may correspond to a second frequency band. In other examples, a single mapping may link marker values to hearing thresholds across a range of frequencies. For example, a single mapping may link marker values to hearing thresholds with respect to a plurality of frequency bands. In other examples, separate mappings may link marker values to hearing thresholds with respect to each of a plurality of frequency bands.

[0202] In some examples, processor(s) 302 may estimate hearing threshold data points based on the mapping where certain data points are not provided for explicitly in the mapping but may be determined based on other data points of the mapping. In some examples, processor(s) 302 may use interpolation or extrapolation techniques to estimate a hearing threshold data point missing from the one or more mappings. For example, processor(s) 302 may estimate, from the one or more mappings, a hearing threshold data point from one or more other hearing threshold data points. This estimation may be done in order to identify the hearing threshold that corresponds to an adjusted marker value.
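
For illustration only, per-band mappings can be modeled as separate lookup tables, one per frequency band, so that only the compartment whose marker was adjusted is consulted. The band names and threshold values below are placeholders:

```python
# Hypothetical per-band mappings: each frequency band has its own
# marker-value -> hearing-threshold (dB HL) table.
mappings = {
    "low":  {0: 20.0, 1: 30.0, 2: 40.0},
    "mid":  {0: 20.0, 1: 35.0, 2: 50.0},
    "high": {0: 25.0, 1: 45.0, 2: 65.0},
}

def lookup_threshold(band, marker_value):
    """Reference only the mapping for the band whose marker was adjusted."""
    return mappings[band][marker_value]

lookup_threshold("low", 1)   # low-band marker at 1 -> 30.0 dB HL
lookup_threshold("high", 2)  # high-band marker at 2 -> 65.0 dB HL
```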

[0203] Processor(s) 302 may use the mapping to identify a hearing threshold value that corresponds to the adjusted marker value (1008). In some examples, processor(s) 302 may identify a plurality of hearing threshold values that correspond to a single adjusted marker value. For example, processor(s) 302 may interpolate or extrapolate hearing threshold values for frequencies of a particular frequency band. In another example, processor(s) 302 may determine a plurality of hearing threshold values for a plurality of adjusted markers 406. In a non-limiting example with reference to FIG. 5A, processor(s) 302 may identify from the mapping a first hearing threshold value that corresponds to the adjusted position of marker 406B or the adjusted marker value. In addition, processor(s) 302 may identify from the mapping a second, a third, and a fourth hearing threshold value based on the adjusted positions of marker 406N. In some examples, processor(s) 302 may identify a default hearing threshold value that corresponds to an unadjusted position of marker 406A. In such examples, processor(s) 302 may select the default hearing threshold based on programming instructions stored on storage device(s) 202 or storage device(s) 316.

[0204] In an illustrative example, a device manufacturer or user 104 may generate and store such programming instructions based on the results of a listening task (e.g., in which user 104 selects from a certain number of pre-configured defaults (e.g., 3 or 4)), answers to certain survey questions (e.g., "Do you currently use hearing instruments?"), pooled historical data for other individuals who have already determined their profile settings, or some combination thereof. In addition, processor(s) 302 may identify from the mapping a hearing threshold value that corresponds to the adjusted position of one of markers 406 based on the frequency band or frequency to which the particular marker corresponds. For example, the mapping may compartmentalize hearing threshold values for each frequency band or frequency region, such that only certain compartments may be accessed, referenced, or utilized based on which of markers 406 are adjusted. In a non-limiting example, processor(s) 302 may register an adjustment to marker 406A and as such, may access, reference, or utilize a mapping for a particular frequency band that corresponds to marker 406A (e.g., low frequency band).

[0205] Processor(s) 302 may use the one or more hearing threshold values to determine a profile setting for hearing instrument 200 (1010). In some examples, processor(s) 302 may determine one or more settings to configure hearing instrument 200 based at least in part on the hearing threshold. Processor(s) 302 may use a standard fit formula to convert the identified hearing threshold to a setting for hearing instrument 200. In some examples, the setting includes a combined setting for the hearing thresholds at the various frequency bands. In some examples, processor(s) 302 may determine multiple settings. For example, in response to receiving an indication to further adjust marker 406A, processor(s) 302 may determine a second adjusted marker value for marker 406A or for markers 406 as a whole. Processor(s) 302 may then update the setting based at least in part on the second adjusted marker value.
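
For illustration only, converting identified thresholds to a gain setting via a fit formula might look like the sketch below. The half-gain rule used here is one well-known prescriptive approach; the disclosure does not mandate any particular formula, and the band names and values are placeholders:

```python
def half_gain_rule(thresholds_db):
    """Prescribe per-band gain as half the hearing loss (the classic
    half-gain rule). `thresholds_db` maps band name -> dB HL."""
    return {band: hl / 2.0 for band, hl in thresholds_db.items()}

half_gain_rule({"low": 30.0, "mid": 50.0, "high": 70.0})
# gains of 15, 25, and 35 dB for the three bands
```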

[0206] In some examples, processor(s) 302 may transmit the hearing threshold values to another computing device 300 or to hearing instrument 200 for processing. For example, processor(s) 302 may transmit the hearing threshold values to hearing instrument 200 or to a remote server, where processor(s) 208 or the remote server may determine the setting for hearing instrument 200 based on the received hearing threshold values. In some examples, computing device 300 may determine the setting and transmit instructions to implement the setting to one or more of hearing instruments 102. In other examples, computing device 300 may transmit instructions to another computing device 300, such as a smart television or vehicle, that may be paired to hearing instruments 102. Computing device 300 may implement the setting on computing device 300 and pre-condition audio signals before transmitting the audio signals to hearing instruments 102.

[0207] In some examples, the device receiving the setting for hearing instrument 200 may subsequently store the setting for hearing instrument 200 to a memory device. For example, any one of devices 106 or hearing instruments 102 may receive the setting for hearing instrument 200 and subsequently store the setting for one or more of hearing instruments 102 to a memory device, such as storage device(s) 202 or storage device(s) 316. In another example, processor(s) 302 may determine multiple settings for one of hearing instruments 102 and subsequently cause any one of devices 106 or hearing instruments 102 to store a first set of the multiple settings to a first memory device, such as storage device(s) 316. In addition, processor(s) 302 may cause one of devices 106 or hearing instruments 102 to store a second set of multiple settings to a second memory device, such as storage device(s) 202.

[0208] In some examples, storing a setting may include storing a profile setting to cache memory or some other RAM or may include storing a profile setting to ROM depending on processing instructions. In a non-limiting example, processor(s) 302 may generate instructions that cause preliminary profile settings to be stored in RAM and final profile settings to be stored in ROM or transferred from RAM to ROM. In addition, processor(s) 302 may generate instructions that cause the profile setting to be stored in a cloud storage device, either exclusively or as another copy of the profile setting, for subsequent access.

[0209] FIG. 11 is a flowchart illustrating an example operation 1100 of this disclosure. Other examples of this disclosure may include more, fewer, or different actions. In the example of FIG. 11, hearing instrument 200 or device 300 may receive information transmitted from another device 106. The other device 106 may be configured to present UI 400 to user 104. For example, device 106 may have a configuration application as one of the application module(s) 322 that is configured to present UI 400 to user 104.

[0210] Hearing instrument 200 or device 300 may receive a set of marker values from device 106 (1100). The marker values correspond to the position of markers 406 along control indicators 402.

[0211] Hearing instrument 200 or device 300 may access the mapping of hearing thresholds (1102). The mapping may be stored on storage device(s) 202 or on storage device(s) 316.

[0212] Hearing instrument 200 or device 300 may identify the hearing threshold that corresponds to the marker values (1104). This may be done for any number of control indicators 402, control indicators 802, or markers 914. For example, a hearing threshold may be determined for each adjusted or non-adjusted marker value for each of control indicators 402, control indicators 802, or markers 914.

[0213] Hearing instrument 200 or device 300 may determine one or more profile settings based on the determined hearing thresholds (1106). The profile setting may be a combination of gain, compression, and frequency response parameters for a particular frequency band. In some examples, hearing instrument 200 or device 300 may individually determine multiple profile settings with respect to each of control indicators 402 (e.g., one profile setting for each frequency band). In some examples, when processor(s) 302 detect an adjustment to markers 406, processor(s) 302 cause an adjustment to a plurality of sound parameters of hearing instruments 102 within a single frequency region or band. For example, adjusting a first marker results in processor(s) 302 adjusting the sound parameters for multiple frequencies. In another example, adjusting a first marker may result in processor(s) 302 determining an adjustment to a plurality of sound parameters at least with respect to a single frequency band. For example, adjusting a first marker may result in processor(s) 302 determining an adjustment to a plurality of sound parameters within a single frequency region or band.

[0214] In some examples, processor(s) 302 may detect an adjusted position of marker 406A. In response to detecting the adjusted position, processor(s) 302 may control or adjust sound parameters for the frequency region or band that corresponds to control indicator 402A, as well as control sound parameters for frequencies that are adjacent or related to the frequency region or band that corresponds to control indicator 402A (e.g., through extrapolation techniques). In some examples, the sound parameters include one or more of gain, frequency response, compression, etc. for hearing instruments 102.

[0215] In examples involving device 300 determining the profile settings, device 300 may transmit the profile settings to hearing instrument 200 (1108). Hearing instrument 200 may implement the profile settings, for example, by activating the profile settings so that audio signals received through hearing instrument 200 are conditioned based on the determined profile settings. In examples where hearing instrument 200 itself determines the profile settings, transmitting the profile settings to hearing instrument 200 is unnecessary. In some examples, the profile settings may be transmitted externally to another computing device 300 (e.g., a cloud server) for storage and later retrieval (e.g., from a network). The settings may be stored and accessed at will or automatically depending on the circumstances and environment surrounding device 300 or hearing instrument 200.

[0216] In some instances, processor(s) 302 may use marker values to determine profile settings directly, without intermediately identifying hearing thresholds. For example, processor(s) 302 may identify a relationship between marker values and hearing instrument settings (e.g., gains). Processor(s) 302 may be configured to identify the relationship (e.g., through ML models) or may receive such relationship information from an external source (e.g., a manufacturer, programmer, etc.). In some instances, the relationship may be based on processor(s) 302 observing how marker values map to hearing thresholds, which then map to profile settings; once identifying that a relationship has been established between marker values and profile settings, processor(s) 302 may directly map marker values to profile settings. In some examples, a particular fitting formula may prescribe different settings (e.g., different gain) for different frequencies, regardless of whether the hearing thresholds are the same for those different frequencies. In such instances, processor(s) 302 may determine a frequency-specific mapping between marker values and profile settings, once identifying that a relationship has been established between marker values and profile settings.
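
For illustration only, collapsing the two-stage lookup (marker value to hearing threshold to setting) into a direct marker-to-setting table can be sketched as a composition of the two mappings; all names and values below are hypothetical:

```python
def compose_direct_mapping(marker_to_threshold, threshold_to_gain):
    """Collapse the two-stage lookup into a direct marker -> gain table,
    so profile settings can be determined without intermediately
    identifying hearing thresholds."""
    return {m: threshold_to_gain[t] for m, t in marker_to_threshold.items()}

direct = compose_direct_mapping(
    {0: 20.0, 1: 30.0, 2: 40.0},          # marker value -> dB HL
    {20.0: 10.0, 30.0: 15.0, 40.0: 20.0}  # dB HL -> gain (dB)
)
direct[1]  # 15.0 dB of gain, with no intermediate threshold lookup at use time
```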

[0217] The present disclosure includes the following examples:

[0218] Example 1: A method including: providing a user interface by a device configured to interface with a hearing instrument, the user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values; determining an initial marker value for a first control indicator based at least in part on an initial position of a first marker; determining that a change in state has occurred with respect to the initial marker value; determining a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker; accessing a mapping that identifies the one or more relationships between marker values and hearing thresholds; identifying, from the mapping, a hearing threshold that corresponds to the first adjusted marker value; determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold; and storing the one or more settings for the hearing instrument to a memory device.

[0219] Example 2: A method according to Example 1, further including: adjusting a number of control indicators based on input received from a user.

[0220] Example 3: A method according to Example 2, wherein the input received from the user includes an indication that sound corresponding to at least one of the frequency bands is not satisfactory.

[0221] Example 4: A method according to any combination of Examples 1 through 3, further including: registering a gesture that indicates that a scale of the control indicator is to be adjusted.

[0222] Example 5: A method according to any combination of Examples 1 through 4, further including: registering the adjusted marker value as a fractional change from the initial marker value.

[0223] Example 6: A method according to any combination of Examples 1 through 5, wherein the control indicators include interactive graphical units presented to a user via the user interface, wherein the markers are configured to be slid along the control indicators.

[0224] Example 7: A method according to any combination of Examples 1 through 6, wherein the mapping links marker values to hearing thresholds with respect to the frequency band that corresponds to the first control indicator.

[0225] Example 8: A method according to any combination of Examples 1 through 7, wherein the mapping links marker values to hearing thresholds with respect to each of the frequency bands that correspond to the plurality of control indicators.

[0226] Example 9: A method according to any combination of Examples 1 through 8, wherein identifying the hearing threshold further includes estimating, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.

[0227] Example 10: A method according to any combination of Examples 1 through 9, further including: in response to receiving an indication to further adjust the first marker, determining a second adjusted marker value; and updating the one or more settings based at least in part on the second adjusted marker value.
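One way to realize the estimation of Example 9 is interpolation between known data points. Linear interpolation is an assumption here — the disclosure does not fix the estimation method — and the function name and sample mapping are illustrative.

```python
def estimate_threshold(mapping: dict, marker_value: float) -> float:
    """Estimate a hearing threshold for a marker value that falls between
    known data points, by linear interpolation over the two nearest ones."""
    known = sorted(mapping)
    lo = max(m for m in known if m <= marker_value)
    hi = min(m for m in known if m >= marker_value)
    if lo == hi:
        return mapping[lo]  # exact data point; no estimation needed
    frac = (marker_value - lo) / (hi - lo)
    return mapping[lo] + frac * (mapping[hi] - mapping[lo])

mapping = {0: 0.0, 2: 20.0, 4: 50.0}
estimate_threshold(mapping, 3)  # halfway between 20.0 and 50.0 -> 35.0
```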

[0228] Example 11: A method according to any combination of Examples 1 through 10, further including: receiving, via the user interface, an indication that the one or more settings are ready to be finalized; and generating an instruction to implement the one or more settings.

[0229] Example 12: A method according to any combination of Examples 1 through 11, wherein the hearing instrument includes a left hearing instrument, and wherein a second of one or more settings is determined for a right hearing instrument.

[0230] Example 13: A method according to any combination of Examples 1 through 12, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to a particular frequency band.

[0231] Example 14: A method according to any combination of Examples 1 through 13, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within a single frequency region or band.

[0232] Example 15: A method according to Example 14, wherein the sound parameters include one or more of: gain, frequency response, and compression for the hearing instrument.
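Examples 14 and 15 describe one marker adjustment fanning out to several sound parameters within a band. The sketch below shows that fan-out; every scaling constant (the 0.5 gain factor, the tilt factor, the compression divisor) is an invented illustration, not a prescription rule from the disclosure.

```python
def band_parameters(threshold_db: float) -> dict:
    """Derive several per-band sound parameters from one hearing threshold,
    so a single marker move updates all of them together."""
    return {
        "gain_db": threshold_db * 0.5,                    # more loss -> more gain
        "response_tilt_db": threshold_db * 0.1,           # shape the band edges
        "compression_ratio": 1.0 + threshold_db / 50.0,   # limit loud peaks
    }

params = band_parameters(40.0)
# params["gain_db"] == 20.0; params["compression_ratio"] == 1.8
```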

[0233] Example 16: A method according to any combination of Examples 1 through 15, further including: providing a second user interface identifying at least one additional control indicator and at least one additional marker; detecting the adjustment to the at least one additional marker; and updating the one or more settings in response to detecting the adjustment to the at least one additional marker.

[0234] Example 17: A method according to any combination of Examples 1 through 16, further including: determining a hearing threshold step-size that corresponds to a particular marker value adjustment; and detecting an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.
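Example 17's comparison against a hearing threshold step-size can be sketched as below. The 5 dB step is a common audiometric increment but is an assumption here, as are the function and label names.

```python
STEP_SIZE_DB = 5.0  # assumed threshold change per marker-value step

def classify_adjustment(threshold_change_db: float) -> str:
    """Report whether an additional adjustment's implied threshold change
    is less than, equal to, or greater than the step-size."""
    if abs(threshold_change_db) < STEP_SIZE_DB:
        return "less than step"
    if abs(threshold_change_db) == STEP_SIZE_DB:
        return "equal to step"
    return "greater than step"
```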

[0235] Example 18: A method according to any combination of Examples 1 through 17, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.

[0236] Example 19: A method according to any combination of Examples 1 through 18, further including: applying a transfer function to the one or more settings to determine a second set of one or more settings.

[0237] Example 20: A method according to Example 19, further including: determining one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.

[0238] Example 21: A method according to any combination of Examples 1 through 20, wherein the frequency bands are delimited as frequencies of sounds that a user is likely to encounter in a particular environment.
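Examples 19 and 20 can be read together as: a transfer function, parameterized by a characteristic of the incoming audio, maps a first set of settings to a second set. In the sketch below the characteristic is an estimated input level and the attenuation rule is an illustrative assumption.

```python
def transfer(settings: dict, input_level_db: float) -> dict:
    """Derive a second settings set from a first; louder incoming audio
    receives less added gain (assumed rule: -1 dB per 10 dB above 60 dB SPL)."""
    reduction = max(0.0, (input_level_db - 60.0) / 10.0)
    return {**settings, "gain_db": settings["gain_db"] - reduction}

quiet = transfer({"gain_db": 20.0}, input_level_db=50.0)  # unchanged: 20.0 dB
loud = transfer({"gain_db": 20.0}, input_level_db=80.0)   # reduced to 18.0 dB
```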

[0239] Example 22: A method according to any combination of Examples 1 through 21, wherein the method is performed by a personal computing device.

[0240] Example 23: A method according to any combination of Examples 1 through 22, wherein the personal computing device includes the memory device that stores the one or more settings.

[0241] Example 24: A method according to any of Examples 22 or 23, wherein the personal computing device includes the mapping.

[0242] Example 25: A method according to any combination of Examples 1 through 24, further including: receiving media data from another device.

[0243] Example 26: A method according to any combination of Examples 1 through 25, further including: transmitting the one or more settings.

[0244] Example 27: A device configured to determine hearing instrument settings, the device including: a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds; and one or more processors coupled to the memory, and configured to: provide a user interface including a plurality of control indicators that each correspond to a frequency band, the control indicators each including markers that are individually positioned along the control indicators to indicate marker values; determine an initial marker value for a first control indicator based at least in part on an initial position of a first marker; determine that a change in state has occurred with respect to the initial marker value; determine a first adjusted marker value for the first control indicator based at least in part on an adjusted position of the first marker; access the mapping that identifies the one or more relationships between marker values and hearing thresholds; identify, from the mapping, a hearing threshold that corresponds to the first adjusted marker value; and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.

[0245] Example 28: A device according to Example 27, wherein the device is further configured to adjust a number of control indicators based on input received from a user.

[0246] Example 29: A device according to Example 28, wherein the input received from the user includes an indication that sound corresponding to at least one of the frequency bands is not satisfactory.

[0247] Example 30: A device according to any combination of Examples 27 through 29, wherein the device is further configured to register a gesture that indicates that a scale of the control indicators is to be adjusted.

[0248] Example 31: A device according to any combination of Examples 27 through 30, wherein the adjusted marker value is registered as a fractional change from the initial marker value.

[0249] Example 32: A device according to Example 31, wherein the device is further configured to store the one or more settings for the hearing instrument.

[0250] Example 33: A device according to any combination of Examples 27 through 32, wherein the device is a personal computing device.

[0251] Example 34: A device according to any combination of Examples 27 through 33, wherein the device is further configured to transmit the mapping.

[0252] Example 35: A device according to any combination of Examples 27 through 34, wherein the device is further configured to receive media data from another device.

[0253] Example 36: A device according to any combination of Examples 27 through 35, wherein the device is further configured to transmit the one or more settings.

[0254] Example 37: A device according to any combination of Examples 27 through 36, wherein the device is further configured to: provide a second user interface identifying at least one additional control indicator and at least one additional marker; detect an adjustment to the at least one additional marker; and update the one or more settings in response to detecting the adjustment to the at least one additional marker.

[0255] Example 38: A device according to any combination of Examples 27 through 37, wherein the device is further configured to: determine a hearing threshold step-size that corresponds to a particular marker value adjustment; and detect an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.

[0256] Example 39: A device according to any combination of Examples 27 through 38, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.

[0257] Example 40: A device according to any combination of Examples 27 through 39, wherein the device is further configured to apply a transfer function to the one or more settings to determine a second set of one or more settings.

[0258] Example 41: A device according to Example 40, wherein the device is further configured to determine one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.

[0259] Example 42: A device according to any combination of Examples 27 through 41, wherein the frequency bands are delimited as frequencies of sounds that a user is likely to encounter in a particular environment.

[0260] Example 43: A device according to any combination of Examples 27 through 42, wherein the control indicators take the form of interactive graphical units presented to a user via the user interface, wherein the markers are configured to be slid along the control indicators.

[0261] Example 44: A device according to any combination of Examples 27 through 43, wherein the mapping links marker values to hearing thresholds with respect to the frequency band that corresponds to the first control indicator.

[0262] Example 45: A device according to any combination of Examples 27 through 44, wherein the mapping links marker values to hearing thresholds with respect to each of the frequency bands that correspond to the plurality of control indicators.

[0263] Example 46: A device according to any combination of Examples 27 through 45, wherein the device is further configured to: estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.

[0264] Example 47: A device according to any combination of Examples 27 through 46, wherein the device is further configured to: in response to receiving an indication to further adjust the first marker, determine a second adjusted marker value; and update the one or more settings based at least in part on the second adjusted marker value.

[0265] Example 48: A device according to any combination of Examples 27 through 47, wherein the device is further configured to: receive, via the user interface, an indication that the one or more settings are ready to be finalized; and generate an instruction to implement the one or more settings.

[0266] Example 49: A device according to any combination of Examples 27 through 48, wherein the hearing instrument includes a left hearing instrument, and wherein a second of one or more settings is determined for a right hearing instrument.

[0267] Example 50: A device according to any combination of Examples 27 through 49, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to a particular frequency band.

[0268] Example 51: A device according to any combination of Examples 27 through 50, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within a single frequency region or band.

[0269] Example 52: A device according to Example 51, wherein the sound parameters include: gain, frequency response, and/or compression for the hearing instrument.

[0270] Example 53: A method including: providing a user interface by a device configured to interface with a hearing instrument, the user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker positioned along the control indicator to indicate a marker value; determining an initial marker value for the control indicator based at least in part on an initial position of the marker; determining an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker; accessing a mapping that identifies one or more relationships between marker values and hearing thresholds; identifying, from the mapping, a hearing threshold that corresponds to the adjusted marker value; determining one or more settings to configure the hearing instrument based at least in part on the hearing threshold; and storing the one or more settings for the hearing instrument to a memory device.

[0271] Example 54: A method according to Example 53, further including: adjusting a number of control indicators to include a plurality of control indicators based on input received from a user.

[0272] Example 55: A method according to Example 54, wherein the input received from the user includes an indication that sound corresponding to the frequency band is not satisfactory.

[0273] Example 56: A method according to any combination of Examples 53 through 55, further including: registering a gesture that indicates that a scale of the control indicator is to be adjusted.

[0274] Example 57: A method according to any combination of Examples 53 through 56, further including: registering the adjusted marker value as a fractional change from the initial marker value.

[0275] Example 58: A method according to any combination of Examples 53 through 57, wherein the control indicator is represented by an interactive graphical unit presented to a user via the user interface, wherein the marker is configured to be slid along the control indicator.

[0276] Example 59: A method according to any combination of Examples 53 through 58, wherein the mapping links marker values to hearing thresholds with respect to the frequency band.

[0277] Example 60: A method according to any combination of Examples 53 through 59, wherein identifying the hearing threshold further includes estimating, from the mapping, a hearing threshold data point from one or more other hearing threshold data points.

[0278] Example 61: A method according to any combination of Examples 53 through 60, further including: in response to receiving an indication to further adjust the first marker, determining a second adjusted marker value; and updating the one or more settings based at least in part on the second adjusted marker value.

[0279] Example 62: A method according to any combination of Examples 53 through 61, further including: receiving, via the user interface, an indication that the one or more settings are ready to be finalized; and generating an instruction to implement the one or more settings with respect to the hearing instrument.

[0280] Example 63: A method according to any combination of Examples 53 through 62, wherein the hearing instrument includes a left hearing instrument, and wherein one or more right hearing instrument settings are determined for a right hearing instrument.

[0281] Example 64: A method according to any combination of Examples 53 through 63, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to the frequency band.

[0282] Example 65: A method according to any combination of Examples 53 through 64, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within the frequency band.

[0283] Example 66: A method according to Example 65, wherein the sound parameters include one or more of: gain, frequency response, and/or compression for the hearing instrument.

[0284] Example 67: A method according to any combination of Examples 53 through 66, further including: providing a second user interface identifying at least one additional control indicator and at least one additional marker; detecting the adjustment to the at least one additional marker; and updating the one or more settings in response to detecting the adjustment to the at least one additional marker.

[0285] Example 68: A method according to any combination of Examples 53 through 67, further including: determining a hearing threshold step-size that corresponds to a particular marker value adjustment; and detecting an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.

[0286] Example 69: A method according to any combination of Examples 53 through 68, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.

[0287] Example 70: A method according to any combination of Examples 53 through 69, further including: applying a transfer function to the one or more settings to determine a second set of one or more settings.

[0288] Example 71: A method according to Example 70, further including: determining one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.

[0289] Example 72: A method according to any combination of Examples 53 through 71, wherein the frequency band is delimited as frequencies of sounds that a user is likely to encounter in a particular environment.

[0290] Example 73: A method according to any combination of Examples 53 through 72, wherein the method is performed by a personal computing device.

[0291] Example 74: A method according to Example 73, wherein the personal computing device includes the memory device that stores the one or more settings.

[0292] Example 75: A method according to any of Examples 73 or 74, wherein the personal computing device includes the mapping.

[0293] Example 76: A method according to any combination of Examples 53 through 75, further including: receiving media data from another device.

[0294] Example 77: A method according to any combination of Examples 53 through 76, further including: transmitting the one or more settings.

[0295] Example 78: A device configured to determine hearing instrument settings, the device including: a memory configured to store a mapping that identifies one or more relationships between marker values and hearing thresholds; and one or more processors coupled to the memory, and configured to: provide a user interface including a control indicator that corresponds to a frequency band, the control indicator including a marker that is positioned along the control indicator to indicate a marker value; determine an initial marker value for the control indicator based at least in part on an initial position of the marker; determine that a change in state has occurred with respect to the initial marker value; determine an adjusted marker value for the control indicator based at least in part on an adjusted position of the marker; access the mapping that identifies the relationship between marker values and hearing thresholds; identify, from the mapping, a hearing threshold that corresponds to the adjusted marker value; and determine one or more settings to configure the hearing instrument based at least in part on the hearing threshold.

[0296] Example 79: A device according to Example 78, wherein the device is further configured to adjust a number of control indicators based on input received from a user.

[0297] Example 80: A device according to Example 79, wherein the input received from the user includes an indication that sound corresponding to the frequency band is not satisfactory.

[0298] Example 81: A device according to any combination of Examples 78 through 80, wherein the device is further configured to register a gesture that indicates that a scale of the control indicators is to be adjusted.

[0299] Example 82: A device according to any combination of Examples 78 through 81, wherein the adjusted marker value is registered as a fractional change from the initial marker value.

[0300] Example 83: A device according to Example 82, wherein the device is further configured to store the one or more settings for the hearing instrument.

[0301] Example 84: A device according to any combination of Examples 78 through 83, wherein the device is a personal computing device.

[0302] Example 85: A device according to any combination of Examples 78 through 84, wherein the device is further configured to transmit the mapping.

[0303] Example 86: A device according to any combination of Examples 78 through 85, wherein the device is further configured to receive media data from another device.

[0304] Example 87: A device according to Example 86, wherein the device is further configured to transmit the one or more settings.

[0305] Example 88: A device according to any combination of Examples 78 through 87, wherein the device is further configured to: provide a second user interface identifying at least one additional control indicator and at least one additional marker; detect an adjustment to the at least one additional marker; and update the one or more settings in response to detecting the adjustment to the at least one additional marker.

[0306] Example 89: A device according to any combination of Examples 78 through 88, wherein the device is further configured to: determine a hearing threshold step-size that corresponds to a particular marker value adjustment; and detect an additional adjustment to the first marker that corresponds to a change in hearing threshold that is less than, equal to, or greater than the hearing threshold step-size.

[0307] Example 90: A device according to any combination of Examples 78 through 89, wherein the initial marker value is zero and wherein the first adjusted marker value corresponds to an X-decibel change in hearing threshold, where “X” corresponds to a predetermined decibel value.

[0308] Example 91: A device according to any combination of Examples 78 through 90, wherein the device is further configured to apply a transfer function to the one or more settings to determine a second set of one or more settings.

[0309] Example 92: A device according to Example 91, wherein the device is further configured to determine one or more characteristics of incoming audio, wherein the transfer function is based at least in part on the one or more characteristics.

[0310] Example 93: A device according to any combination of Examples 78 through 92, wherein the frequency band is delimited as frequencies of sounds that a user is likely to encounter in a particular environment.

[0311] Example 94: A device according to any combination of Examples 78 through 93, wherein the control indicator is represented as an interactive graphical unit presented to a user via the user interface, wherein the marker is configured to be slid along the control indicator.

[0312] Example 95: A device according to any combination of Examples 78 through 94, wherein the mapping links marker values to hearing thresholds with respect to the frequency band.

[0313] Example 96: A device according to any combination of Examples 78 through 95, wherein the device is further configured to: estimate, from the mapping, a hearing threshold data point from one or more other hearing threshold data points to identify the hearing threshold.

[0314] Example 97: A device according to any combination of Examples 78 through 96, wherein the device is further configured to: in response to receiving an indication to further adjust the first marker, determine a second adjusted marker value; and update the one or more settings based at least in part on the second adjusted marker value.

[0315] Example 98: A device according to any combination of Examples 78 through 97, wherein the device is further configured to: receive, via the user interface, an indication that the one or more settings are ready to be finalized; and generate an instruction to implement the one or more settings with respect to the hearing instrument.

[0316] Example 99: A device according to any combination of Examples 78 through 98, wherein the hearing instrument includes a left hearing instrument, and wherein one or more right hearing instrument settings are determined for a right hearing instrument.

[0317] Example 100: A device according to any combination of Examples 78 through 99, wherein the hearing threshold corresponds to a minimum setting at which a user can perceive sound with respect to the frequency band.

[0318] Example 101: A device according to any combination of Examples 78 through 100, wherein adjusting the first marker results in an adjustment to a plurality of sound parameters of the hearing instrument within the frequency band.

[0319] Example 102: A device according to Example 101, wherein the sound parameters include one or more of: gain, frequency response, and/or compression for the hearing instrument.

[0320] In this disclosure, ordinal terms such as “first,” “second,” “third,” and so on, are not necessarily indicators of positions within an order, but rather may be used to distinguish different instances of the same thing. Examples provided in this disclosure may be used together, separately, or in various combinations. Furthermore, with respect to examples that involve personal data regarding user 104, it may be required that such personal data only be used with the permission of user 104.

[0321] It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.

[0322] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. When implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[0323] By way of example, and not limitation, such computer-readable storage media can include RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, cache memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, when instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, digital subscriber line, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0324] Functionality described in this disclosure, such as execution of instructions, may be performed by fixed function and/or programmable processing circuitry. Such processing circuitry may include one or more processors, such as one or more DSPs, processing systems, general purpose microprocessors, application specific integrated circuits (ASICs), field programmable logic arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements. Processing circuits may be coupled to other components in various ways. For example, a processing circuit may be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium.

[0325] Although many of the actions herein are described as being performed by processor(s) 302 of device 300, it should be understood that any of the other devices described herein may perform or share in the performance of some or all aspects of the disclosed technology. For example, hearing instrument 200, a separate computing device 300, computing system, or a combination thereof, may perform some or all of the techniques or actions described herein. For example, some or all of the techniques described herein may be performed by personal computing device 300. In such instances, personal computing device 300 may store the one or more settings (e.g., profile settings) for subsequent use, retrieval, finalization, and/or implementation. In some examples, personal computing device 300 may transmit data to another device to perform the mapping. In other examples, personal computing device 300 may receive the hearing threshold mapping from another device, such as a remote server, and perform the mapping on personal computing device 300. As such, personal computing device 300 may already include the mapping (e.g., stored in a memory device, such as storage device(s) 316 of personal computing device 300, as shown in FIG. 3).

[0326] Although mostly discussed with reference to one hearing instrument 200, it should be understood that the entire configuration process may be performed uniquely for a left hearing instrument 200 and a right hearing instrument 200. In addition, a hearing instrument 200 may be designed so as to be interchangeable between a left and right ear. In such instances, multiple profile settings may be stored for a right ear and a left ear. In any case, the configuration process may be performed separately for the right and left ear of user 104.

[0327] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

[0328] Various examples have been described. These and other examples are within the scope of the following claims.