Title:
AUTOMATIC SELECTION OF HEARING INSTRUMENT COMPONENT SIZE
Document Type and Number:
WIPO Patent Application WO/2021/101845
Kind Code:
A1
Abstract:
An example method includes capturing, via one or more sensors of a computing system, a representation of an ear of a user; determining, based on the representation, a value of a measurement of the ear of the user; and selecting, based on the value of the measurement, a length of a wire or tube of a hearing instrument to be worn on the ear of the user.

Inventors:
XU JINGJING (US)
BURWINKEL JUSTIN (US)
Application Number:
PCT/US2020/060765
Publication Date:
May 27, 2021
Filing Date:
November 16, 2020
Assignee:
STARKEY LABS INC (US)
International Classes:
H04R25/00
Domestic Patent References:
WO2013149645A1 (2013-10-10)
WO2020074061A1 (2020-04-16)
Foreign References:
DE102017128117A1 (2019-05-29)
GB2569817A (2019-07-03)
EP0158391A1 (1985-10-16)
US201962937566P (2019-11-19)
Attorney, Agent or Firm:
ROSENBERG, Brian M. (US)
Claims:
WHAT IS CLAIMED IS:

1. A method comprising: capturing, via one or more sensors of a computing system, a representation of an ear of a user; determining, based on the representation, a value of a measurement of the ear of the user; and selecting, based on the value of the measurement, a length of a wire or tube of a hearing instrument to be worn on the ear of the user.

2. The method of claim 1, wherein the representation of the ear comprises an image of the ear of the user.

3. The method of claim 2, wherein the representation of the ear comprises multiple images of the ear of the user.

4. The method of claim 3, wherein the multiple images each have different capture characteristics.

5. The method of any of claims 1-4, wherein the measurement is a distance between a superior auricular root of the ear and a top of a canal of the ear.

6. The method of any of claims 1-5, further comprising: determining, based on the representation of the ear, a pigment of a skin of the user; and selecting, based on the determined pigment of the skin of the user, a color of the component.

7. The method of claim 6, wherein selecting the color of the component comprises selecting the color of the component from a pre-determined set of component colors.

8. The method of any of claims 6-7, further comprising: outputting, by the computing system and to a remote server device, an indication of the selected color of the component.

9. The method of any of claims 1-8, further comprising: outputting, by the computing system and to a remote server device, an indication of the selected size of the component.

10. The method of any of claims 1-9, wherein capturing the representation of the ear of the user comprises: outputting, by the computing system and for display at a display device connected to the computing system, live image data captured by an image sensor of the one or more sensors of the computing system; outputting, by the computing system and for display at the display device, one or more graphical guides configured to assist the user in facilitating the capture of the representation of the ear of the user, wherein the one or more graphical guides are output for display on the live image data; and capturing, while the live image data and the graphical guides are being displayed by the display device, the representation of the ear of the user via at least the image sensor.

11. The method of claim 10, wherein the one or more graphical guides include one or more of: a graphical representation of an ear; anatomy markers; and a graphical representation of the hearing instrument.

12. The method of any of claims 10-11, wherein the display device is included in the same device as the one or more sensors.

13. The method of any of claims 10-11, wherein the display device is included in a different device than the one or more sensors.

14. The method of any of claims 1-13, wherein the one or more sensors comprise one or more of: one or more depth sensors; one or more structured light sensors; and one or more time of flight sensors.

15. A computing system comprising: one or more sensors; and one or more processors that are implemented in circuitry and configured to perform the method of any combination of claims 1-14.

16. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a computing system to perform the method of any combination of claims 1-14.

Description:
AUTOMATIC SELECTION OF HEARING INSTRUMENT COMPONENT SIZE

[0001] This application claims the benefit of U.S. Provisional Patent Application 62/937,566, filed November 19, 2019, the entire content of which is incorporated by reference.

TECHNICAL FIELD

[0002] This disclosure relates to hearing instruments.

BACKGROUND

[0003] Hearing instruments are devices designed to be worn on, in, or near one or more of a user’s ears. Common types of hearing instruments include hearing assistance devices (e.g., “hearing aids”), earbuds, headphones, hearables, cochlear implants, and so on.

SUMMARY

[0004] This disclosure describes techniques for using a computing device to automatically select at least a size of a component of a hearing instrument to be worn on an ear of a user based on scans of the ear of the intended user. For instance, a user may hold an object having known dimensions (e.g., size) near their ear while one or more sensors of a mobile computing device capture an image of the user’s ear (e.g., a representation of the user’s ear) with the object. Based on the dimensions of the object, the computing device may determine a value of a measurement of the user’s ear (e.g., a distance between a top of the ear (e.g., a superior auricular root, or helix root, etc.) and a top of a canal of the ear). The computing device may select a size of a component of a hearing instrument to be worn on the ear (e.g., a wire or tube length) based on the determined value of the measurement.

[0005] In some examples, in addition to or in place of using the object having known dimensions, the mobile computing device may capture the representation to include more than just dimensionless image data. For instance, the mobile computing device may capture the representation using one or more dimension capturing sensors (e.g., one or more depth sensors, one or more structured light sensors, and/or one or more time of flight sensors). Using the representation captured by the dimension capturing sensors, the mobile computing device may determine the value of the measurement of the user’s ear even where the object having known dimensions is not present.

[0006] The details of one or more aspects of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques described in this disclosure will be apparent from the description, drawings, and claims.

BRIEF DESCRIPTION OF DRAWINGS

[0007] FIG. 1 is a conceptual diagram illustrating an example system that includes one or more hearing instrument(s), in accordance with one or more techniques of this disclosure.

[0008] FIG. 2 is a conceptual diagram illustrating an image of an ear of a user captured by a computing system, in accordance with one or more techniques of this disclosure.

[0009] FIG. 3 is a block diagram illustrating example components of a computing system, in accordance with one or more aspects of this disclosure.

[0010] FIG. 4 is a conceptual diagram illustrating a graphical user interface that may be displayed by a computing system to facilitate the capture of a representation of an ear of a user, in accordance with one or more techniques of this disclosure.

[0011] FIG. 5 is a flowchart illustrating an example operation of a processing system for customization of hearing instruments, in accordance with one or more aspects of this disclosure.

DETAILED DESCRIPTION

[0012] FIG. 1 is a conceptual diagram illustrating an example system 100 that includes computing system 108 configured to automatically select at least a size of a component of a hearing instrument to be worn on an ear of a user, in accordance with one or more techniques of this disclosure. As shown in FIG. 1, system 100 may include hearing instruments 102A and 102B (collectively “hearing instruments 102”), computing system 108, and ordering system 120.

[0013] Hearing instruments 102 may comprise one or more of various types of devices that are configured to provide auditory stimuli to a user and that are designed for wear and/or implantation at, on, or near an ear of the user. Hearing instruments 102 may be worn, at least partially, in the ear canal or concha. One or more of hearing instruments 102 may include behind the ear (BTE) components that are worn behind the ears of user 104. In some examples, one or more of hearing instruments 102 is able to provide auditory stimuli to user 104 via a bone conduction pathway.

[0014] In any of the examples of this disclosure, each of hearing instruments 102 may comprise a hearing instrument. Hearing instruments include devices that help a user hear sounds in the user’s environment. Example types of hearing instruments may include hearing aid devices, Personal Sound Amplification Products (PSAPs), cochlear implant systems (which may include cochlear implant magnets, cochlear implant transducers, and cochlear implant processors), and so on. In some examples, hearing instruments 102 are over-the-counter (OTC), direct-to-consumer (DTC), or prescription devices. Furthermore, in some examples, hearing instruments 102 include devices that provide auditory stimuli to the user that correspond to artificial sounds or sounds that are not naturally in the user’s environment, such as recorded music, computer-generated sounds, or other types of sounds. For instance, hearing instruments 102 may include so-called “hearables,” earbuds, earphones, or other types of devices. Some types of hearing instruments provide auditory stimuli to the user corresponding to sounds from the user’s environment and also artificial sounds.

[0015] In some examples, one or more of hearing instruments 102 includes a housing or shell that is designed to be worn in the ear for both aesthetic and functional reasons and encloses the electronic components of the hearing instrument. Such hearing instruments may be referred to as in-the-ear (ITE), in-the-canal (ITC), completely-in-the-canal (CIC), or invisible-in-the-canal (IIC) devices. In some examples, one or more of hearing instruments 102 may be behind-the-ear (BTE) devices, which include a housing worn behind the ear that contains all of the electronic components of the hearing instrument, including the receiver (i.e., the speaker). The receiver conducts sound to the inside of the ear via an audio tube. In some examples, one or more of hearing instruments 102 may be receiver-in-canal (RIC) hearing instruments, which include a housing worn behind the ear that contains electronic components and a housing worn in the ear canal that contains the receiver.

[0016] Hearing instruments 102 may implement a variety of features that help user 104 hear better. For example, hearing instruments 102 may amplify the intensity of incoming sound, amplify the intensity of certain frequencies of the incoming sound, or translate or compress frequencies of the incoming sound. In another example, hearing instruments 102 may implement a directional processing mode in which hearing instruments 102 selectively amplify sound originating from a particular direction (e.g., to the front of the user) while potentially fully or partially canceling sound originating from other directions. In other words, a directional processing mode may selectively attenuate off-axis unwanted sounds. The directional processing mode may help users understand conversations occurring in crowds or other noisy environments. In some examples, hearing instruments 102 may use beamforming or directional processing cues to implement or augment directional processing modes.

[0017] While shown as two separate instruments, in some instances, such as when user 104 has unilateral hearing loss, user 104 may wear a single hearing instrument. In other instances, such as when user 104 has bilateral hearing loss, the user may wear two hearing instruments, with one hearing instrument for each ear of the user.

[0018] Hearing instruments 102 may include one or more components that are available (e.g., from a manufacturer of hearing instruments 102) in a variety of sizes and/or colors. As one example, as discussed above, a BTE hearing instrument of hearing instruments 102 may include an audio tube that conducts sound from a receiver located in a housing worn behind the ear to the inside of the ear. The audio tube may be available in a variety of lengths (e.g., to provide enough length to reach from the receiver to the inside of the ear, without too much slack) and/or a variety of colors (e.g., to match a wearer’s skin tone). As another example, a RIC hearing instrument of hearing instruments 102 may include a wire that carries electrical signals from a housing worn behind the ear to a housing worn in the ear canal that contains a receiver. The wire may be available in a variety of lengths (e.g., to provide enough length to reach from the behind the ear housing to the in-ear housing, without too much slack) and/or a variety of colors (e.g., to match a wearer’s skin tone).

[0019] Recent rulemaking from the U.S. Food and Drug Administration (FDA) will begin a new era of providing over-the-counter (OTC) and direct-to-consumer (DTC) hearing aids to hearing-impaired individuals. This presents a challenge of how to ensure users are able to appropriately select sizing of hearing instrument components without specialized equipment or assistance from a professional. Without professional guidance, users may select incorrectly sized and/or colored hearing instrument components, which may result in poor performance of the hearing instrument that may leave users frustrated and unsatisfied.

[0020] In accordance with one or more techniques of this disclosure, computing system 108 may automatically select at least a size of a component of one or both of hearing instruments 102 based on scans of one or both ears of user 104. For instance, user 104 may hold an object having known dimensions (e.g., size) near their ear while one or more sensors of computing system 108 capture an image of user 104’s ear with the object. Based on the dimensions of the object, computing system 108 may determine a value of a measurement of user 104’s ear (e.g., a distance between a top of the ear and a top of a canal of the ear). Computing system 108 may select a size of a component of one or both of hearing instruments 102 to be worn on the ear (e.g., a wire length of a RIC hearing instrument or a tube length of a BTE hearing instrument) based on the determined value of the measurement. In this way, computing system 108 may improve the accuracy of component size (e.g., wire or tube length) selection without requiring professional guidance.

[0021] FIG. 2 is a conceptual diagram illustrating an image 150 of an ear of a user captured by a computing system, in accordance with one or more techniques of this disclosure. As shown in FIG. 2, image 150 depicts ear 160 and object 130. Ear 160 may be considered to be an ear of user 104 of FIG. 1.

[0022] Referring to both FIGS. 1 and 2, a camera of computing system 108 may capture image 150 (e.g., a representation of an ear of user 104) while user 104 holds object 130 near ear 160. In the example of FIG. 2, object 130 may be a coin (e.g., a quarter); however, any object having known dimensions may be used.

[0023] In some examples, the camera (or multiple cameras included in computing system 108, such as a smartphone that includes multiple cameras having different properties, such as focal lengths) may bracket one or more of the ISO, shutter speed, or aperture to provide at least two images of different capture characteristics (e.g., two images of different light exposure). In some examples, computing system 108 may utilize High Dynamic Range (HDR) images to make measurements relative to aspects of the subject’s ear canal and external ear (pinna) geometries, beyond the opening/aperture of the ear canal. In some examples, computing system 108 may purposefully use a wide aperture, which allows for a narrow depth of field. By bracketing focal points and/or focal distances, the system could judge the depth of specific ear features based upon the sharpness of elements of the image at those various different focal depths.

[0024] Computing system 108 may process image 150 to determine relative dimensions of object 130. For instance, computing system 108 may determine that a relative dimension of object 130 (e.g., D_ref) is 375 pixels. Computing system 108 may obtain the dimensions of object 130 and determine an image dimension scale based on the known dimensions of object 130 and the determined relative dimensions of object 130. For instance, computing system 108 may obtain (e.g., from a memory device) the diameter of object 130 as 1.25 inches. Computing system 108 may determine the image dimension scale by dividing D_ref by the obtained known diameter of object 130 to determine that the image is 300 pixels per inch (e.g., 375 pixels / 1.25 inches).

[0025] Computing system 108 may determine a value of a measurement of the ear of the user. For instance, computing system 108 may further process image 150 to determine relative dimensions of a measurement of ear 160 (e.g., determine a value of a measurement of an ear of the user). Computing system 108 may calculate a distance between a top of ear 160 and a top of a canal of ear 160 (e.g., D_ear) as 360 pixels. Computing system 108 may scale the relative dimensions of the measurement of ear 160 by the determined image dimension scale to determine an absolute value of the measurement. For instance, computing system 108 may divide D_ear by the determined image dimension scale to determine that the absolute value of the distance between the top of ear 160 and the top of the canal of ear 160 is 1.2 inches (e.g., 360 pixels / 300 pixels per inch).
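For illustration, the scaling arithmetic of paragraphs [0024] and [0025] can be sketched as follows. This is a minimal, hypothetical sketch: the pixel values are the worked numbers from the text, and the function names are invented for this example rather than taken from the disclosure.

```python
# Minimal sketch of the image-dimension-scale arithmetic described above.
# Pixel values match the worked example in the text; the helper names are
# hypothetical and not part of the disclosure.

REF_DIAMETER_INCHES = 1.25  # known diameter of object 130 (e.g., a U.S. quarter)

def pixels_per_inch(ref_pixels: float) -> float:
    """Image dimension scale: reference size in pixels divided by known size in inches."""
    return ref_pixels / REF_DIAMETER_INCHES

def absolute_measurement_inches(measured_pixels: float, scale_ppi: float) -> float:
    """Convert a relative (pixel) measurement to an absolute value in inches."""
    return measured_pixels / scale_ppi

scale = pixels_per_inch(375.0)                     # D_ref = 375 px -> 300 px per inch
d_ear = absolute_measurement_inches(360.0, scale)  # D_ear = 360 px -> 1.2 inches
```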

[0026] Computing system 108 may select a size of a component of a hearing instrument based on the determined value of the measurement. For instance, computing system 108 may obtain (e.g., from a memory device) a look-up table of available lengths of a component (e.g., a wire or a tube) mapped to values of the measurement. The look-up table may specify five different lengths with corresponding ranges of values of the measurement. Computing system 108 may select the component length based on the look-up table and the determined value of the measurement. As one example, computing system 108 may determine in which range of values in the look-up table the determined value of the measurement for ear 160 resides and select the component length corresponding to the determined range.
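A minimal sketch of this look-up-table selection follows; the specific length labels and measurement ranges are hypothetical, as the disclosure states only that the table maps component lengths to ranges of measurement values.

```python
# Hypothetical look-up table: (low, high) measurement range in inches -> length option.
LENGTH_TABLE = [
    ((0.00, 1.00), "length 1"),
    ((1.00, 1.15), "length 2"),
    ((1.15, 1.30), "length 3"),
    ((1.30, 1.45), "length 4"),
    ((1.45, 99.0), "length 5"),
]

def select_component_length(measurement_inches: float) -> str:
    """Select the wire/tube length whose range contains the measured value."""
    for (low, high), length in LENGTH_TABLE:
        if low <= measurement_inches < high:
            return length
    raise ValueError("measurement outside supported ranges")

# For the 1.2-inch measurement determined above, this selects "length 3".
```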

[0027] Computing system 108 may output an indication of the selected size of the component. As one example, computing system 108 may display a graphical user interface indicating the selected size to user 104. As another example, computing system 108 may output a message (e.g., via network 114, which may be the Internet) including the indication of the selected size to a remote server device, such as ordering system 120 of FIG. 1.

[0028] Ordering system 120 may receive the message indicating the selected size and perform one or more actions to facilitate an order of hearing instruments 102. For instance, ordering system 120 may facilitate an order of hearing instruments 102 with a component having the selected size.

[0029] FIG. 3 is a block diagram illustrating example components of computing system 200, in accordance with one or more aspects of this disclosure. FIG. 3 illustrates only one particular example of computing system 200, and many other example configurations of computing system 200 exist. Computing system 200 may be any computing system capable of performing the operations described herein. Examples of computing system 200 include, but are not limited to, laptop computers, cameras, desktop computers, kiosks, smartphones, tablets, servers, and the like.

[0030] As shown in the example of FIG. 3, computing system 200 includes one or more processor(s) 202, one or more communication unit(s) 204, one or more input device(s) 208, one or more output device(s) 210, a display screen 212, a power source 214, one or more storage device(s) 216, and one or more communication channels 218. Computing system 200 may include other components. For example, computing system 200 may include physical buttons, microphones, speakers, communication ports, and so on. Communication channel(s) 218 may interconnect each of components 202, 204, 208, 210, 212, and 216 for inter-component communications (physically, communicatively, and/or operatively). In some examples, communication channel(s) 218 may include a system bus, a network connection, an inter-process communication data structure, or any other method for communicating data. Power source 214 may provide electrical energy to components 202, 204, 208, 210, 212 and 216.

[0031] Storage device(s) 216 may store information required for use during operation of computing system 200. In some examples, storage device(s) 216 have the primary purpose of being a short-term and not a long-term computer-readable storage medium. Storage device(s) 216 may be volatile memory and may therefore not retain stored contents if powered off. Storage device(s) 216 may further be configured for long-term storage of information as non-volatile memory space and retain information after power on/off cycles. In some examples, processor(s) 202 on computing system 200 read and may execute instructions stored by storage device(s) 216.

[0032] Computing system 200 may include one or more input device(s) 208 that computing system 200 uses to receive user input. Examples of user input include tactile, audio, and video user input, as well as gesture or motion input (e.g., a user may shake or move computing system 200 in a specific pattern). Input device(s) 208 may include presence-sensitive screens, touch-sensitive screens, mice, keyboards, voice responsive systems, microphones, or other types of devices for detecting input from a human or machine.

[0033] As shown in FIG. 3, input devices 208 may include one or more sensors 209, which may be configured to sense various parameters. For instance, sensors 209 may be capable of capturing a representation of an ear of a user. Examples of sensors 209 include, but are not limited to, cameras (e.g., RGB cameras), depth sensors, structured light sensors, and time of flight sensors.

[0034] Communication unit(s) 204 may enable computing system 200 to send data to and receive data from one or more other computing devices (e.g., via a communications network, such as a local area network or the Internet). For instance, communication unit(s) 204 may be configured to receive source data exported by hearing instrument(s) 102, receive comment data generated by user 104 of hearing instrument(s) 102, receive and send request data, receive and send messages, and so on. In some examples, communication unit(s) 204 may include wireless transmitters and receivers that enable computing system 200 to communicate wirelessly with the other computing devices. Examples of communication unit(s) 204 may include network interface cards, Ethernet cards, optical transceivers, radio frequency transceivers, or other types of devices that are able to send and receive information. Other examples of such communication units may include BLUETOOTH™, 3G, 4G, LTE, 5G, and WI-FI™ radios, Universal Serial Bus (USB) interfaces, etc. Computing system 200 may use communication unit(s) 204 to communicate with one or more hearing instruments (e.g., hearing instruments 102 (FIG. 1)). Additionally, computing system 200 may use communication unit(s) 204 to communicate with one or more other remote devices (e.g., ordering system 120 (FIG. 1)). In some examples, computing system 200 may communicate with the ordering system via hearing aid fitting software (e.g., published by a manufacturer of hearing instruments 102). As such, it is possible for computing system 200 to include a hearing instrument programming device that is configured to transfer the size information to the ordering system. Examples of technologies that could be used by hearing instruments (and thus their programming device) could include NFMI, other forms of magnetic induction (telecoil, GMR, TMR), 900 MHz, 2.4 GHz, etc.

[0035] Output device(s) 210 may generate output. Examples of output include tactile, audio, and video output. Output device(s) 210 may include presence-sensitive screens, sound cards, video graphics adapter cards, speakers, liquid crystal displays (LCD), or other types of devices for generating output.

[0036] Processor(s) 202 may read instructions from storage device(s) 216 and may execute instructions stored by storage device(s) 216. Execution of the instructions by processor(s) 202 may configure or cause computing system 200 to provide at least some of the functionality ascribed in this disclosure to computing system 200. As shown in the example of FIG. 3, storage device(s) 216 include computer-readable instructions associated with operating system 220, application modules 222A-222N (collectively, “application modules 222”), and a customization application 224.

[0037] Execution of instructions associated with operating system 220 may cause computing system 200 to perform various functions to manage hardware resources of computing system 200 and to provide various common services for other computer programs. Execution of instructions associated with application modules 222 may cause computing system 200 to provide one or more of various applications (e.g., “apps,” operating system applications, etc.). Application modules 222 may provide particular applications, such as text messaging (e.g., SMS) applications, instant messaging applications, email applications, social media applications, text composition applications, and so on.

[0038] Execution of instructions associated with customization application 224 by processor(s) 202 may cause computing system 200 to perform one or more of various functions. For example, execution of instructions associated with customization application 224 may cause computing system 200 to perform one or more actions to automatically determine a size and/or a color of a component of a hearing instrument (e.g., a hearing assistance device) based on a representation of an ear of a user of the hearing instrument (e.g., as captured by sensors 209).

[0039] In operation, a user may hold computing system 200 and/or position themselves such that an ear of the user is in a field of view of sensors 209. Customization application 224 may be executed by processor(s) 202 to cause sensors 209 to capture a representation of the ear of the user, and determine, based on the representation, a value of a measurement of the ear of the user.

[0040] In some examples, customization application 224 may utilize augmented reality (AR) technology, or another graphical processing technology, to assist in capturing the representation of the ear. As one example, computing system 200 may output various guides to assist the user in facilitating the capture of the representation. For instance, customization application 224 may output, for display at a display device connected to the computing system (e.g., display screen 212), live image data captured by an image sensor of sensors 209 (e.g., display a live-feed of the image sensor on display screen 212). By causing the display of the live image data, the user of computing system 200 may be able to better position their ear in a field of view of sensors 209, which may result in the capture of a higher quality representation of the ear.

[0041] In some examples, customization application 224 may output, for display at the display device (e.g., display screen 212), one or more graphical guides configured to assist the user in facilitating the capture of the representation of the ear of the user. The graphical guides may include anatomy markers and/or a graphic of an ear (e.g., as shown in FIG. 4). In some examples, customization application 224 may output the graphical guides for display on the live image data (e.g., as a layer overlaid upon the live image data). Customization application 224 may cause sensors 209 to capture, while the live image data and the graphical guides are being displayed by the display device, the representation of the ear of the user (e.g., via at least the image sensor).

[0042] FIG. 4 is a conceptual diagram illustrating a graphical user interface that may be displayed by a computing system to facilitate the capture of a representation of an ear of a user, in accordance with one or more techniques of this disclosure. Graphical user interface (GUI) 400 may be displayed by a display device of a computing system, such as display screen 212 of computing system 200 of FIG. 3. As shown in FIG. 4, GUI 400 includes live image data 402 (including ear 160) and graphical guides 405, 410, and 415. As discussed above, graphical guides may include anatomy markers and/or a graphical representation of an ear. In general, an anatomy marker may be any marker that is displayed to correspond to a particular piece of anatomy. Graphical guides 405 and 410 are examples of anatomy markers. In particular, graphical guide 405 is a top of canal (e.g., top of ear canal, superior portion of the canal aperture) marker and graphical guide 410 is a top of ear marker. Graphical guide 415 is an example graphic of an ear. Graphical guides 405/410/415 are merely examples, and other graphical guides may be used in other examples. For instance, a graphical representation of a hearing instrument or a component thereof (e.g., the same model being customized) may be displayed to facilitate the capture.

[0043] While one or more of graphical guides 405/410/415 are displayed, a user of computing system 200 may align their ear, or features of their ear, with corresponding guides. For instance, the user may move themselves or move computing system 200 so as to align graphical guide 405 with the top of their ear canal and align graphical guide 410 with the top of their ear. Once such alignment is achieved, computing system 200 may capture the representation of ear 160 and determine the size and/or color of the component as described herein.

[0044] In some examples, computing system 200 may perform one or more actions to make it easier for a user to capture a representation of their own ear. As one example, computing system 200 may mirror at least a portion of what is displayed at display screen 212 (e.g., GUI 400) on a display of another device. As another example, computing system 200 may cause a display of another device to display written and/or symbolic instructions to enable a user to align anatomy of their ear with graphical guides. As another example, computing system 200 may output audible instructions to enable a user to align anatomy of their ear with graphical guides. As another example, computing system 200 may output haptic feedback to enable a user to align anatomy of their ear with graphical guides.

[0045] While discussed above as being performed by the user, it is noted that the techniques of this disclosure may allow for another person to operate computing system 200 to capture the representation of the ear of the user. For instance, where computing system 200 includes a smartphone, the user may provide the smartphone to another person who may operate the smartphone to capture the representation of the ear of the user.

[0046] Returning to FIG. 3, in some examples, the representation of the ear may be in the form of dimensionless image data. For instance, an image sensor (e.g., a camera) of sensors 209 may capture a dimensionless image of the user’s ear. In some examples, such as where the representation of the ear is dimensionless, customization application 224 may determine the value of the measurement based on dimensions of an object of known dimensions in the image (e.g., object 130 of FIG. 2). In some examples, customization application 224 may estimate dimensions of the image using data measured by sensors other than the image sensor of sensors 209. For instance, customization application 224 may utilize inertial data captured by a motion sensor (e.g., an inertial measurement unit (IMU), accelerometer, gyroscope, barometer, etc.), position data captured by a global positioning sensor (GPS), and/or directional data captured by a magnetometer of input devices 208. As one example, customization application 224 may combine the inertial data with multiple images captured by the image sensor to create a three-dimensional model of the user’s ear.

[0047] In some examples, the representation of the ear of the user may include data in addition to or in place of the dimensionless image data. For instance, a structured light sensor (e.g., one or more cameras and one or more projectors) of sensors 209 may capture an image of the user’s ear with a known pattern projected on the user’s ear. Customization application 224 may determine the value of the measurement based on the known pattern relative to the user’s ear.

[0048] Regardless of the way in which customization application 224 determines the value of the measurement, customization application 224 may select a size of a component of a hearing instrument based on the determined value of the measurement. In some examples, customization application 224 may select the size from a predetermined set of sizes. For instance, customization application 224 may obtain, from storage devices 216, a look-up table of available lengths of a component (e.g., a wire or a tube) mapped to values of the measurement. The look-up table may specify five different lengths with corresponding ranges of values of the measurement. Customization application 224 may select the component length based on the look-up table and the determined value of the measurement. As one example, customization application 224 may identify a range of values in the look-up table in which the determined value of the measurement resides and select the component length corresponding to the identified range.

[0049] Additionally or alternatively to customizing a size of a component of a hearing instrument, it may be desirable for a user to be able to customize a color of the component (or another different component). Currently, when a user with a dark skin complexion desires a darker component (e.g., a receiver wire/tube), the user may utilize dye to change (e.g., darken) the color of the component. There is currently no method for offering or producing customized color components for the user.

[0050] In accordance with one or more techniques of this disclosure, customization application 224 may be executable by processor(s) 202 to select a color of the component of the hearing instrument. For instance, based on a representation of the ear of the user (which may be the same or different than the representation used to select the size), customization application 224 may determine a pigment of a skin of the user. Customization application 224 may select a color of the component based on the determined pigment. In some examples, customization application 224 may select the color from a pre-determined set of component colors. For instance, customization application 224 may obtain, from storage devices 216, a look-up table of available colors of a component (e.g., a wire or a tube) mapped to values of pigments. The look-up table may specify five different colors with corresponding ranges of pigment values. Customization application 224 may select the component color based on the look-up table and the determined pigment of the user. As one example, customization application 224 may identify a range of values in the look-up table in which the determined pigment resides and select the component color corresponding to the identified range. In this way, customization application 224 may enable users to obtain color-customized hearing instrument components that more accurately match their skin tone without having to utilize dyes at home.
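A minimal sketch of this pigment-based selection follows; the luminance statistic and the five color bands are assumptions for illustration, since the disclosure specifies only that a look-up table maps pigment values to a pre-determined set of component colors.

```python
import numpy as np

# Hypothetical look-up table: (low, high) normalized pigment range -> color option.
COLOR_TABLE = [
    ((0.0, 0.2), "color 1"),
    ((0.2, 0.4), "color 2"),
    ((0.4, 0.6), "color 3"),
    ((0.6, 0.8), "color 4"),
    ((0.8, 1.01), "color 5"),
]

def estimate_pigment(skin_pixels: np.ndarray) -> float:
    """Average normalized luminance over sampled skin pixels (RGB values in 0-255)."""
    rgb = skin_pixels.reshape(-1, 3).astype(float) / 255.0
    luminance = rgb @ np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    return float(luminance.mean())

def select_component_color(pigment: float) -> str:
    """Select the component color whose pigment range contains the estimate."""
    for (low, high), color in COLOR_TABLE:
        if low <= pigment < high:
            return color
    raise ValueError("pigment outside supported ranges")
```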

[0051] FIG. 5 is a flowchart illustrating an example operation of a processing system for customization of hearing instruments, in accordance with one or more aspects of this disclosure. The flowcharts of this disclosure are provided as examples. Other examples may include more, fewer, or different actions; or actions may be performed in different orders or in parallel. Although FIG. 5 and other parts of this disclosure are discussed as being performed with respect to hearing instruments 102, it is to be understood that much of this discussion is applicable in cases where user 104 only uses a single hearing instrument. In the example of FIG. 5, a computing system (e.g., computing system 108 of FIG. 1 or computing system 200 of FIG. 3) may perform actions (500) through (512) to customize hearing instruments 102.

[0052] Computing system 200 may capture a representation of an ear of a user (502). For instance, customization application 224 may cause one or more of sensors 209 of computing system 200 to capture a dimensioned or dimensionless representation of the ear of user 104 on which a hearing instrument of hearing instruments 102 is to be worn. As discussed above, in some examples, computing system 200 may output various guides to assist the user in facilitating the capture of the representation (e.g., as shown in FIG. 4).

[0053] Computing system 200 may determine, based on the representation, a value of a measurement of the ear of the user (504). For instance, customization application 224 may process the representation to determine a distance between a top of the ear and a top of a canal of the ear (e.g., D_ear of FIG. 2).

[0054] Computing system 200 may select, based on the value of the measurement, a size of a component of a hearing instrument to be worn on the ear of the user (506). For instance, customization application 224 may select a size, from a pre-determined set of component sizes, of the component. As discussed above, in some examples, the size of the component may be a length of a wire or tube. Other examples could include: (a) depth from the aperture of the ear canal to the first bend of the ear canal, which may be visible to computing system 200 (e.g., in order to give a more customized depth of insertion and orientation of the sound port/speaker/receiver), (b) size of the concha bowl, which could be measured in lengths between various different anatomical markers of the ear (e.g., to provide a better fit of earmolds and in-the-ear devices), and (c) distance between the pinna and the side of the head (e.g., to allow computing system 200 to determine the optimal width of a behind-the-ear or over-the-ear instrument, or to optimize the coupling of the aforementioned with frames of eyeglasses, etc.).

[0055] Computing system 200 may determine, based on the representation, a pigment of a skin of the user (508). For instance, where the representation of the ear includes a color (e.g., RGB, CMYK, etc.) image of the ear, customization application 224 may determine the pigment based on statistics related to the color of samples of the image (e.g., an average or other such statistical calculation). In some examples, the image may include an object of known color (or colors), which customization application 224 may utilize to calibrate the pigment determination process. For instance, similar to object 130 of FIG. 2, a user may hold an object of known color near their ear while computing system 200 captures the representation of the ear. In some examples, the object may be the same as object 130 (e.g., object 130 may be of both known size and known color). In some examples, the image sensor/camera of computing system 200 may be calibrated or assigned a custom white balance value (either before or after capturing the representation).
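One way to realize this calibration is sketched below under the assumption of a simple per-channel gain (white-balance) correction; the disclosure does not specify the calibration math, so the reference color and the gain approach are illustrative only.

```python
import numpy as np

def channel_gains(observed_ref_rgb: np.ndarray, true_ref_rgb: np.ndarray) -> np.ndarray:
    """Per-channel gains mapping the reference object's observed color to its known color."""
    return true_ref_rgb / observed_ref_rgb

def apply_calibration(pixels: np.ndarray, gains: np.ndarray) -> np.ndarray:
    """Apply the gains to image samples (a simple white-balance correction)."""
    return np.clip(pixels.astype(float) * gains, 0.0, 255.0)

# Hypothetical example: the reference object is known to be mid-gray (128, 128, 128),
# but the camera recorded it as (140, 120, 110); correct skin samples accordingly.
gains = channel_gains(np.array([140.0, 120.0, 110.0]), np.array([128.0, 128.0, 128.0]))
```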

[0056] Computing system 200 may select, based on the pigment, a color of the component of the hearing instrument to be worn on the ear of the user (510). For instance, customization application 224 may select a color, from a pre-determined set of component colors, of the component. As discussed above, in some examples, the color of the component may be a color of a wire or tube.

[0057] Computing system 200 may output, to a remote device, an indication of the selected size and/or an indication of the selected color of the component (512). For instance, customization application 224 may cause communication units 204 to output a message (e.g., via network 114, which may be the Internet) including the indication of the selected size and/or color to a remote server device, such as ordering system 120 of FIG. 1. As discussed above, ordering system 120 may receive the message indicating the selected size and perform one or more actions to facilitate an order of hearing instruments 102. For instance, ordering system 120 may facilitate an order of hearing instruments 102 with a component having the selected size and/or the selected color.

[0058] In some examples, the selected size and/or selected color may be a suggested size and/or a suggested color. For instance, computing system 200 may display a graphical user interface indicating the selected size and/or selected color to user 104 (e.g., via display screen 212). The user may provide user input to accept or modify the selected size and/or selected color. After the user has approved the size and/or color selections, computing system 200 may output, to a remote device, the indication of the selected size and/or the indication of the selected color of the component.

[0059] In this disclosure, ordinal terms such as “first,” “second,” “third,” and so on, are not necessarily indicators of positions within an order, but rather may be used to distinguish different instances of the same thing. Examples provided in this disclosure may be used together, separately, or in various combinations. Furthermore, with respect to examples that involve personal data regarding a user, it may be required that such personal data only be used with the permission of the user.

[0060] It is to be recognized that depending on the example, certain acts or events of any of the techniques described herein can be performed in a different sequence, may be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the techniques). Moreover, in certain examples, acts or events may be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors, rather than sequentially.

[0061] In one or more examples, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over, as one or more instructions or code, a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processing circuits to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.

[0062] By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, cache memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transient media, but are instead directed to non-transient, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.

[0063] Functionality described in this disclosure may be performed by fixed function and/or programmable processing circuitry. For instance, instructions may be executed by fixed function and/or programmable processing circuitry. Such processing circuitry may include one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules. Also, the techniques could be fully implemented in one or more circuits or logic elements. Processing circuits may be coupled to other components in various ways. For example, a processing circuit may be coupled to other components via an internal device interconnect, a wired or wireless network connection, or another communication medium.

[0064] The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a hardware unit or provided by a collection of interoperative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.

[0065] Various examples have been described. These and other examples are within the scope of the following claims.