Title:
AUDITORY REHABILITATION FOR TELEPHONE USAGE
Document Type and Number:
WIPO Patent Application WO/2022/195379
Kind Code:
A1
Abstract:
Presented herein are auditory rehabilitation techniques facilitating independent usage of a telephone by a recipient of a hearing device. More specifically, the auditory rehabilitation techniques presented herein are configured to develop the cognitive resources of a recipient that are needed by the recipient to process, in real-time, stimulation signals having a degraded quality that is typical of telephonic signals, and in the absence of visual cues. In addition, the auditory rehabilitation techniques presented herein are configured to develop the cognitive resources of a recipient that are needed to formulate, in real-time, spoken responses to the stimulation signals.

Inventors:
OLIVER JANETTE (AU)
VERMA RISHUBH (AU)
MCDONNELL VIKTORIJA (AU)
Application Number:
PCT/IB2022/051523
Publication Date:
September 22, 2022
Filing Date:
February 21, 2022
Assignee:
COCHLEAR LTD (AU)
International Classes:
A61N1/36; A61F11/04; A61N1/05; A61N1/372; H04R25/00
Foreign References:
US20130101095A12013-04-25
US20100304342A12010-12-02
KR101779641B12017-09-18
CN107909523A2018-04-13
CN101751799A2010-06-23
Claims:
CLAIMS

What is claimed is:

1. One or more non-transitory computer readable storage media comprising instructions that, when executed by a processor of an external device, cause the processor to: deliver, via the external device and a hearing device, one or more predetermined auditory targets to a recipient of the hearing device, wherein the one or more predetermined auditory targets comprise information typically processed via a telephone; deliver, via the external device and the hearing device, one or more pre-recorded conversations to the recipient; and conduct, via the external device and the hearing device, one or more interactive virtual conversations with the recipient.

2. The one or more non-transitory computer readable storage media of claim 1, wherein the instructions operable to deliver the one or more predetermined auditory targets to the recipient comprise instructions operable to: generate one or more sound signals at the external device, wherein each of the one or more sound signals represents a predetermined auditory target and wherein each of the one or more sound signals is delivered to the recipient via the hearing device used by the recipient; and following delivery of each of the one or more sound signals to the recipient, receive an indication of the recipient’s perception of each of the one or more sound signals.

3. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to receive an indication of the recipient’s perception of each of the one or more sound signals comprise instructions operable to: receive the indication of the recipient’s perception via a user interface of the external device.

4. The one or more non-transitory computer readable storage media of claim 3, wherein the instructions operable to receive the indication of the recipient’s perception via a user interface of the external device comprise instructions operable to: following delivery of each of the one or more sound signals to the recipient, display a plurality of possible responses via the user interface; and receive a selection of one of the plurality of possible responses from the recipient.

5. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to receive an indication of the recipient’s perception of each of the one or more sound signals comprise instructions operable to: receive a spoken indication of the recipient’s perception via a microphone of the external device.

6. The one or more non-transitory computer readable storage media of claims 1 or 2, further comprising instructions that, when executed by a processor, cause the processor to: following receipt of an indication of the recipient’s perception of each of the one or more sound signals, provide the recipient with an indication of whether the recipient’s perception of each of the one or more sound signals was correct.

7. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to deliver each of the one or more pre-recorded conversations to the recipient via the hearing device comprise instructions operable to: generate one or more sequences of acoustic sound signals at the external device, wherein each of the one or more sequences represents an associated pre-recorded conversation and wherein each sequence of sound signals is delivered to the recipient via the hearing device.

8. The one or more non-transitory computer readable storage media of claims 1 or 2, further comprising instructions operable to: following delivery of each of the one or more pre-recorded conversations to the recipient via the hearing device, query the recipient with one or more questions regarding a content of a preceding pre-recorded conversation; and receive, from the recipient, a response to each of the one or more questions.

9. The one or more non-transitory computer readable storage media of claim 8, wherein the instructions operable to query the recipient with one or more questions regarding a content of a preceding pre-recorded conversation comprise instructions operable to: display each of the one or more questions at a user interface of the external device.

10. The one or more non-transitory computer readable storage media of claim 9, wherein the instructions operable to receive, from the recipient, the response to each of the one or more questions comprise instructions operable to: receive a user input via one or more input devices of the user interface.

11. The one or more non-transitory computer readable storage media of claim 9, wherein the instructions operable to receive, from the recipient, the response to each of the one or more questions comprise instructions operable to: receive a spoken response to each of the one or more questions.

12. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to deliver the one or more pre-recorded conversations to the recipient via the hearing device comprise instructions operable to: deliver a first set of pre-recorded conversations to the recipient via the hearing device, wherein each of the pre-recorded conversations in the first set of pre-recorded conversations has approximately a first number of conversational turns and a first content level configured to develop a relevant threshold level of competence.

13. The one or more non-transitory computer readable storage media of claim 12, wherein the instructions operable to deliver the one or more pre-recorded conversations to the recipient via the hearing device further comprise instructions operable to: following delivery of the first set of pre-recorded conversations, deliver a second set of pre-recorded conversations to the recipient via the hearing device, wherein each of the pre-recorded conversations in the second set of pre-recorded conversations has at least a second number of conversational turns that is greater than the first number of conversational turns.

14. The one or more non-transitory computer readable storage media of claim 13, wherein the instructions operable to deliver the one or more pre-recorded conversations to the recipient via the hearing device further comprise instructions operable to: following delivery of the second set of pre-recorded conversations, deliver a third set of pre-recorded conversations to the recipient via the hearing device, wherein each of the pre-recorded conversations in the third set of pre-recorded conversations has at least a third number of conversational turns that is greater than the second number of conversational turns.

15. The one or more non-transitory computer readable storage media of claim 13, wherein the instructions operable to deliver the one or more pre-recorded conversations to the recipient via the hearing device further comprise instructions operable to: following delivery of the second set of pre-recorded conversations, deliver a third set of pre-recorded conversations to the recipient via the hearing device, wherein each of the pre-recorded conversations in the third set of pre-recorded conversations includes at least one of a negotiation scenario or a clarification scenario.

16. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to conduct each of the one or more interactive virtual conversations with the recipient comprise instructions operable to: generate a first sequence of acoustic sound signals at the external device, wherein the first sequence of acoustic signals represents a first portion of a virtual conversation and wherein the first sequence of acoustic sound signals is delivered to the recipient via the hearing device; receive, from the recipient, a spoken response to the first sequence of acoustic sound signals; dynamically generate, based on the spoken response to the first sequence of acoustic sound signals, a second sequence of acoustic sound signals at the external device, wherein the second sequence of acoustic signals represents a second portion of the virtual conversation and wherein the second sequence of acoustic sound signals is delivered to the recipient via the hearing device; and receive, from the recipient, a spoken response to the second sequence of acoustic sound signals.

17. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to conduct each of the one or more interactive virtual conversations with the recipient comprise instructions operable to conduct a plurality of target virtual conversations, and wherein conducting at least one of the plurality of target virtual conversations comprises: generate a first sequence of acoustic sound signals at the external device, wherein the first sequence of acoustic signals represents a first portion of the at least one target virtual conversation and wherein the first sequence of acoustic sound signals is delivered to the recipient via the hearing device; receive, from the recipient, a spoken response to the first sequence of acoustic sound signals; following receipt of the response to the first sequence of acoustic sound signals, iteratively generate one or more additional sequences of acoustic sound signals at the external device for delivery to the recipient via the hearing device; and iteratively receive additional spoken responses to each of the one or more additional sequences of acoustic sound signals delivered to the recipient via the hearing device, wherein each of the one or more additional sequences of acoustic sound signals is dynamically generated based on an immediately preceding one of the additional spoken responses received from the recipient, and wherein each of the one or more additional sequences of acoustic sound signals represents a portion of the at least one target virtual conversation.

18. The one or more non-transitory computer readable storage media of claim 17, wherein the instructions operable to iteratively generate one or more additional sequences of acoustic sound signals at the external device for delivery to the recipient via the hearing device comprise instructions operable to: dynamically generate at least one of the one or more additional sequences to include at least one clarification scenario.

19. The one or more non-transitory computer readable storage media of claim 17, wherein the instructions operable to iteratively generate one or more additional sequences of acoustic sound signals at the external device for delivery to the recipient via the hearing device comprise instructions operable to: dynamically generate at least one of the one or more additional sequences to include at least one negotiation scenario.

20. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to deliver one or more pre-recorded conversations to the recipient comprise instructions operable to: deliver a plurality of pre-recorded conversations to the recipient with increasing listening complexity.

21. The one or more non-transitory computer readable storage media of claim 20, wherein the instructions operable to deliver the plurality of pre-recorded conversations to the recipient with increasing listening complexity comprise instructions operable to: deliver at least one pre-recorded conversation without background noise; and subsequently deliver at least a second pre-recorded conversation with introduced background noise.

22. The one or more non-transitory computer readable storage media of claim 20, wherein the instructions operable to deliver the plurality of pre-recorded conversations to the recipient with increasing listening complexity comprise instructions operable to: deliver at least one pre-recorded conversation with a first number of conversational turns; and subsequently deliver at least a second pre-recorded conversation with a second number of conversational turns, wherein the second number of conversational turns is greater than the first number of conversational turns.

23. The one or more non-transitory computer readable storage media of claims 1 or 2, wherein the instructions operable to conduct one or more interactive virtual conversations with the recipient comprise instructions operable to: conduct, with a virtual telephone assistant, a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with at least one of increasing listening complexity or increasing response complexity.

24. A method, comprising: introducing a recipient of a hearing device to a set of predetermined auditory targets; delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity; and conducting, with a virtual telephone assistant, a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing listening or response complexity.

25. The method of claim 24, wherein the plurality of pre-recorded conversations make use of the predetermined auditory targets introduced to the recipient.

26. The method of claim 24, wherein delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity comprises: delivering at least one pre-recorded conversation with a first number of conversational turns; and subsequently delivering at least a second pre-recorded conversation with a second number of conversational turns, wherein the second number of conversational turns is greater than the first number of conversational turns.

27. The method of claims 24, 25, or 26, wherein delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity comprises: delivering at least one pre-recorded conversation without background noise; and subsequently delivering at least a second pre-recorded conversation with introduced background noise.

28. The method of claims 24, 25, or 26, wherein delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity comprises: delivering at least one pre-recorded conversation without distortion; and subsequently delivering at least a second pre-recorded conversation with introduced distortion.

29. The method of claims 24, 25, or 26, wherein delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity comprises: delivering at least one pre-recorded conversation; and subsequently delivering at least a second pre-recorded conversation, wherein the at least second pre-recorded conversation newly introduces a clarification scenario.

30. The method of claims 24, 25, or 26, wherein delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity comprises: delivering at least one pre-recorded conversation; and subsequently delivering at least a second pre-recorded conversation, wherein the at least second pre-recorded conversation newly introduces a negotiation scenario.

31. The method of claim 24, wherein conducting a plurality of virtual conversations with the recipient with increasing listening or response complexity comprises: conducting at least one virtual conversation with a first number of conversational turns; and subsequently conducting at least a second virtual conversation with a second number of conversational turns, wherein the second number of conversational turns is greater than the first number of conversational turns.

32. The method of claims 24 or 31, wherein conducting a plurality of virtual conversations with the recipient with increasing listening or response complexity comprises: conducting at least one virtual conversation without background noise; and subsequently conducting at least a second virtual conversation with introduced background noise.

33. The method of claims 24 or 31, wherein conducting a plurality of virtual conversations with the recipient with increasing listening or response complexity comprises: conducting at least one virtual conversation without distortion; and subsequently conducting at least a second virtual conversation with introduced distortion.

34. The method of claims 24 or 31, wherein conducting a plurality of virtual conversations with the recipient with increasing listening or response complexity comprises: conducting at least one virtual conversation; and subsequently conducting at least a second virtual conversation, wherein the at least second virtual conversation requires the recipient to verbalize one or more clarifications.

35. The method of claims 24 or 31, wherein conducting a plurality of virtual conversations with the recipient with increasing listening or response complexity comprises: conducting at least one virtual conversation; and subsequently conducting at least a second virtual conversation, wherein the at least second virtual conversation requires the recipient to verbalize one or more negotiations.

36. A system for developing cognitive resources of a recipient of a hearing device for telephone usage, comprising: one or more network adapters for communication with the hearing device; a memory; a user interface; and one or more processors configured to: train the recipient to extract, in real-time, content of received stimulation signals associated with a degraded signal quality associated with telephones and in the absence of visual cues; and conduct a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing levels of listening and response complexity.

37. The system of claim 36, wherein to train the recipient to extract, in real-time, content of received stimulation signals associated with a degraded signal quality associated with telephones and in the absence of visual cues, the one or more processors are configured to: deliver a plurality of sound signals to the recipient via the hearing device, wherein the plurality of sound signals represent a set of predetermined auditory targets.

38. The system of claims 36 or 37, wherein to train the recipient to extract, in real-time, content of received stimulation signals associated with a degraded signal quality associated with telephones and in the absence of visual cues, the one or more processors are configured to: deliver a plurality of pre-recorded conversations to the recipient, wherein the plurality of pre-recorded conversations are delivered with increasing listening complexity.

39. An apparatus, comprising: means for delivering, via an external device and a hearing device, one or more predetermined auditory targets to a recipient of the hearing device, wherein the auditory targets comprise information typically processed via a telephone; means for delivering, via the external device and the hearing device, one or more pre-recorded conversations to the recipient; and means for conducting, via the external device and the hearing device, one or more interactive virtual conversations with the recipient.

Description:
AUDITORY REHABILITATION FOR TELEPHONE USAGE

BACKGROUND

Field of the Invention

[0001] The present invention relates generally to auditory rehabilitation for recipients of hearing devices.

Related Art

[0002] Medical devices have provided a wide range of therapeutic benefits to recipients over recent decades. Medical devices can include internal or implantable components/devices, external or wearable components/devices, or combinations thereof (e.g., a device having an external component communicating with an implantable component). Medical devices, such as traditional hearing aids, partially or fully-implantable hearing prostheses (e.g., bone conduction devices, mechanical stimulators, cochlear implants, etc.), pacemakers, defibrillators, functional electrical stimulation devices, and other medical devices, have been successful in performing lifesaving and/or lifestyle enhancement functions and/or recipient monitoring for a number of years.

[0003] The types of medical devices and the ranges of functions performed thereby have increased over the years. For example, many medical devices, sometimes referred to as “implantable medical devices,” now often include one or more instruments, apparatus, sensors, processors, controllers or other functional mechanical or electrical components that are permanently or temporarily implanted in a recipient. These functional devices are typically used to diagnose, prevent, monitor, treat, or manage a disease/injury or symptom thereof, or to investigate, replace or modify the anatomy or a physiological process. Many of these functional devices utilize power and/or data received from external devices that are part of, or operate in conjunction with, implantable components.

SUMMARY

[0004] In one aspect, one or more non-transitory computer readable storage media are provided. The one or more non-transitory computer readable storage media comprise instructions that, when executed by a processor of an external device, cause the processor to: deliver, via the external device and a hearing device, one or more predetermined auditory targets to a recipient of the hearing device, wherein the auditory targets comprise information typically processed via a telephone; deliver, via the external device and the hearing device, one or more pre-recorded conversations to the recipient; and conduct, via the external device and the hearing device, one or more interactive virtual conversations with the recipient.

[0005] In one aspect, a method is provided. The method comprises: introducing a recipient of a hearing device to a set of predetermined auditory targets; delivering a plurality of pre-recorded conversations to the recipient with increasing listening complexity; and conducting, with a virtual telephone assistant, a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing listening or response complexity.

[0006] In another aspect, a system for developing cognitive resources of a recipient of a hearing device for telephone usage is provided. The system comprises: one or more network adapters for communication with the hearing device; a memory; a user interface; and one or more processors configured to: train the recipient to extract, in real-time, content of received stimulation signals associated with a degraded signal quality associated with telephones and in the absence of visual cues; and conduct a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing levels of listening and response complexity.

[0007] In another aspect, an apparatus is provided. The apparatus comprises: means for delivering, via an external device and a hearing device, one or more predetermined auditory targets to a recipient of the hearing device, wherein the auditory targets comprise information typically processed via a telephone; means for delivering, via the external device and the hearing device, one or more pre-recorded conversations to the recipient; and means for conducting, via the external device and the hearing device, one or more interactive virtual conversations with the recipient.

BRIEF DESCRIPTION OF THE DRAWINGS

[0008] Embodiments of the present invention are described herein in conjunction with the accompanying drawings, in which:

[0009] FIG. 1A is a schematic diagram illustrating a cochlear implant system with which aspects of the techniques presented herein can be implemented;

[0010] FIG. 1B is a side view of a recipient wearing a sound processing unit of the cochlear implant system of FIG. 1A;

[0011] FIG. 1C is a schematic view of components of the cochlear implant system of FIG. 1A;

[0012] FIG. 1D is a block diagram of the cochlear implant system of FIG. 1A;

[0013] FIG. 2 is a flowchart illustrating example phases of an auditory rehabilitation process for telephone usage, in accordance with certain embodiments presented herein;

[0014] FIG. 3 is a block diagram of an example computing device configured to implement aspects of an auditory rehabilitation process for telephone usage, in accordance with certain embodiments presented herein;

[0015] FIG. 4 is a flowchart of an example method, in accordance with embodiments presented herein;

[0016] FIG. 5 is a flowchart of another example method, in accordance with embodiments presented herein; and

[0017] FIG. 6 is a flowchart of another example method, in accordance with embodiments presented herein.

DETAILED DESCRIPTION

[0018] Presented herein are auditory rehabilitation techniques facilitating independent usage of a telephone by a recipient of a hearing device. More specifically, the auditory rehabilitation techniques presented herein are configured to develop the cognitive resources of a recipient that are needed by the recipient to process, in real-time, stimulation signals having a degraded quality that is typical of telephonic signals, and in the absence of visual cues. In addition, the auditory rehabilitation techniques presented herein are configured to develop the cognitive resources of a recipient that are needed to formulate, in real-time, spoken responses to the stimulation signals.

[0019] In general, and as described below, the auditory rehabilitation techniques presented herein use a multi-phase (multi-step) process to develop a recipient’s cognitive resources in a manner that facilitates independent telephone usage. In one or more introductory phases, a recipient is introduced to a predetermined set of auditory targets. In one or more secondary phases, the auditory targets are used in a plurality of pre-recorded conversations delivered to the recipient, with increasing listening complexity, in order to develop auditory memory and introduce clarification and negotiation scenarios. In one or more additional phases, a plurality of virtual conversations are conducted with the recipient, with increasing complexity, in order to introduce and reinforce real-time telephonic interactions.

[0020] Merely for ease of description, the auditory rehabilitation techniques presented herein are primarily described with reference to a specific implantable medical device, namely a cochlear implant system comprising a cochlear implant. However, it is to be appreciated that the techniques presented herein may also be partially or fully implemented with other types of systems or devices, including other hearing device systems or hearing devices, such as hearing aids, middle ear auditory prostheses, bone conduction devices, direct acoustic stimulators, electro-acoustic prostheses, auditory brain stimulators, combinations or variations thereof, etc. The techniques presented herein may also be implemented by dedicated tinnitus therapy devices and tinnitus therapy device systems. In further embodiments, the techniques presented herein may also be implemented by, or used in conjunction with, vestibular devices (e.g., vestibular implants), visual devices (i.e., bionic eyes), sensors, pacemakers, drug delivery systems, defibrillators, functional electrical stimulation devices, catheters, seizure devices (e.g., devices for monitoring and/or treating epileptic events), sleep apnea devices, electroporation devices, etc. The techniques presented herein may also be partially or fully implemented by consumer devices, such as tablet computers, mobile phones, wearable devices, etc.

[0021] FIGs. 1A-1D illustrate an example cochlear implant system 102 with which aspects of the techniques presented herein can be implemented. The cochlear implant system 102 comprises an external component 104 and an implantable component 112. In the examples of FIGs. 1A-1D, the implantable component is sometimes referred to as a “cochlear implant.” FIG. 1A illustrates the cochlear implant 112 implanted in the head 141 of a recipient, while FIG. 1B is a schematic drawing of the external component 104 worn on the head 141 of the recipient. FIG. 1C is another schematic view of the cochlear implant system 102, while FIG. 1D illustrates further details of the cochlear implant system 102. For ease of description, FIGs. 1A-1D will generally be described together.

[0022] Cochlear implant system 102 includes an external component 104 that is configured to be directly or indirectly attached to the body of the recipient and an implantable component 112 configured to be implanted in the recipient. In the examples of FIGs. 1A-1D, the external component 104 comprises a sound processing unit 106, while the cochlear implant 112 includes an internal coil 114, an implant body 134, and an elongate stimulating assembly 116 configured to be implanted in the recipient’s cochlea.

[0023] In the example of FIGs. 1A-1D, the sound processing unit 106 is an off-the-ear (OTE) sound processing unit, sometimes referred to herein as an OTE component, that is configured to send data and power to the implantable component 112. In general, an OTE sound processing unit is a component having a generally cylindrically shaped housing 105 and which is configured to be magnetically coupled to the recipient’s head (e.g., includes an integrated external magnet 150 configured to be magnetically coupled to an implantable magnet 152 in the implantable component 112). The OTE sound processing unit 106 also includes an integrated external (headpiece) coil 108 that is configured to be inductively coupled to the implantable coil 114.

[0024] It is to be appreciated that the OTE sound processing unit 106 is merely illustrative of the external devices that could operate with implantable component 112. For example, in alternative examples, the external component may comprise a behind-the-ear (BTE) sound processing unit or a micro-BTE sound processing unit and a separate external coil assembly. In general, a BTE sound processing unit comprises a housing that is shaped to be worn on the outer ear of the recipient and is connected to the separate external coil assembly via a cable, where the external coil assembly is configured to be magnetically and inductively coupled to the implantable coil 114. It is also to be appreciated that alternative external components could be located in the recipient’s ear canal, worn on the body, etc.

[0025] As noted above, the cochlear implant system 102 includes the sound processing unit 106 and the cochlear implant 112. However, as described further below, the cochlear implant 112 can operate independently from the sound processing unit 106, for at least a period, to stimulate the recipient. For example, the cochlear implant 112 can operate in a first general mode, sometimes referred to as an “external hearing mode,” in which the sound processing unit 106 captures sound signals which are then used as the basis for delivering stimulation signals to the recipient. The cochlear implant 112 can also operate in a second general mode, sometimes referred to as an “invisible hearing” mode, in which the sound processing unit 106 is unable to provide sound signals to the cochlear implant 112 (e.g., the sound processing unit 106 is not present, the sound processing unit 106 is powered-off, the sound processing unit 106 is malfunctioning, etc.). As such, in the invisible hearing mode, the cochlear implant 112 captures sound signals itself via implantable sound sensors and then uses those sound signals as the basis for delivering stimulation signals to the recipient. Further details regarding operation of the cochlear implant 112 in the external hearing mode are provided below, followed by details regarding operation of the cochlear implant 112 in the invisible hearing mode. It is to be appreciated that reference to the external hearing mode and the invisible hearing mode is merely illustrative and that the cochlear implant 112 could also operate in alternative modes.

[0026] In FIGs. 1A and 1C, the cochlear implant system 102 is shown with an external device 105, configured to implement aspects of the techniques presented. The external device 105 is a computing device, such as a computer (e.g., laptop, desktop, tablet), a mobile phone, remote control unit, etc. As described further below, the external device 105 comprises a telephone enhancement module that is configured to implement aspects of the auditory rehabilitation techniques presented herein for independent telephone usage. The external device 105 and the cochlear implant system 102 (e.g., OTE sound processing unit 106 or the cochlear implant 112) wirelessly communicate via a bi-directional communication link 107. The bi-directional communication link 107 may comprise, for example, a short-range communication link, such as a Bluetooth link, a Bluetooth Low Energy (BLE) link, a proprietary link, etc.

[0027] Returning to the example of FIGs. 1A-1D, the OTE sound processing unit 106 comprises one or more input devices 113 that are configured to receive input signals (e.g., sound or data signals). The one or more input devices 113 include one or more sound input devices 118 (e.g., one or more external microphones, audio input ports, telecoils, etc.), one or more auxiliary input devices 119 (e.g., audio ports, such as a Direct Audio Input (DAI), data ports, such as a Universal Serial Bus (USB) port, cable port, etc.), and a wireless transmitter/receiver (transceiver) 120 (e.g., for communication with the external device 105). However, it is to be appreciated that the one or more input devices 113 may include additional types of input devices and/or fewer input devices (e.g., the wireless short-range radio transceiver 120 and/or one or more auxiliary input devices 119 could be omitted).

[0028] The OTE sound processing unit 106 also comprises the external coil 108, a charging coil 121, a closely-coupled transmitter/receiver (RF transceiver) 122, sometimes referred to as radio-frequency (RF) transceiver 122, at least one rechargeable battery 123, and an external sound processing module 124. The external sound processing module 124 may comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in the memory device.

[0029] The implantable component 112 comprises an implant body (main module) 134, a lead region 136, and the intra-cochlear stimulating assembly 116, all configured to be implanted under the skin/tissue (tissue) 115 of the recipient. The implant body 134 generally comprises a hermetically-sealed housing 138 in which RF interface circuitry 140 and a stimulator unit 142 are disposed. The implant body 134 also includes the internal/implantable coil 114 that is generally external to the housing 138, but which is connected to the RF interface circuitry 140 via a hermetic feedthrough (not shown in FIG. 1D).

[0030] As noted, stimulating assembly 116 is configured to be at least partially implanted in the recipient’s cochlea. Stimulating assembly 116 includes a plurality of longitudinally spaced intra-cochlear electrical stimulating contacts (electrodes) 144 that collectively form a contact or electrode array 146 for delivery of electrical stimulation (current) to the recipient’s cochlea.

[0031] Stimulating assembly 116 extends through an opening in the recipient’s cochlea (e.g., cochleostomy, the round window, etc.) and has a proximal end connected to stimulator unit 142 via lead region 136 and a hermetic feedthrough (not shown in FIG. 1D). Lead region 136 includes a plurality of conductors (wires) that electrically couple the electrodes 144 to the stimulator unit 142. The implantable component 112 also includes an electrode outside of the cochlea, sometimes referred to as the extra-cochlear electrode (ECE) 139.

[0032] As noted, the cochlear implant system 102 includes the external coil 108 and the implantable coil 114. The external magnet 150 is fixed relative to the external coil 108 and the implantable magnet 152 is fixed relative to the implantable coil 114. The magnets fixed relative to the external coil 108 and the implantable coil 114 facilitate the operational alignment of the external coil 108 with the implantable coil 114. This operational alignment of the coils enables the external component 104 to transmit data and power to the implantable component 112 via a closely-coupled wireless RF link 131 formed between the external coil 108 and the implantable coil 114. In certain examples, the closely-coupled wireless link 131 is a radio frequency (RF) link. However, various other types of energy transfer, such as infrared (IR), electromagnetic, capacitive and inductive transfer, may be used to transfer the power and/or data from an external component to an implantable component and, as such, FIG. 1D illustrates only one example arrangement.

[0033] As noted above, sound processing unit 106 includes the external sound processing module 124. The external sound processing module 124 is configured to convert received input signals (received at one or more of the input devices 113) into output signals for use in stimulating a first ear of a recipient (i.e., the external sound processing module 124 is configured to perform sound processing on input signals received at the sound processing unit 106). Stated differently, the one or more processors in the external sound processing module 124 are configured to execute sound processing logic in memory to convert the received input signals into output signals that represent electrical stimulation for delivery to the recipient.

[0034] As noted, FIG. 1D illustrates an embodiment in which the external sound processing module 124 in the sound processing unit 106 generates the output signals. In an alternative embodiment, the sound processing unit 106 can send less processed information (e.g., audio data) to the implantable component 112 and the sound processing operations (e.g., conversion of sounds to output signals) can be performed by a processor within the implantable component 112.

[0035] Returning to the specific example of FIG. 1D, the output signals are provided to the RF transceiver 122, which transcutaneously transfers the output signals (e.g., in an encoded manner) to the implantable component 112 via external coil 108 and implantable coil 114. That is, the output signals are received at the RF interface circuitry 140 via implantable coil 114 and provided to the stimulator unit 142. The stimulator unit 142 is configured to utilize the output signals to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient’s cochlea. In this way, cochlear implant system 102 electrically stimulates the recipient’s auditory nerve cells, bypassing absent or defective hair cells that normally transduce acoustic vibrations into neural activity, in a manner that causes the recipient to perceive one or more components of the received sound signals.
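
Purely as an illustration of the channelized conversion from sound to per-electrode stimulation described in the preceding paragraphs, the following Python sketch splits an audio frame into logarithmically spaced frequency bands, extracts a mean envelope per band, and compressively maps each envelope to a stimulation level. The band edges, the 22-channel count, and the mapping are illustrative assumptions, not the coding strategy of any particular device or of this disclosure.

```python
# Minimal sketch of channelized sound processing: one band per electrode,
# band envelope extracted, envelope compressed to a stimulation level.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

FS = 16000          # sample rate in Hz (assumed)
NUM_CHANNELS = 22   # one band per electrode in the example array

def band_edges(num_channels, f_lo=200.0, f_hi=7000.0):
    # Logarithmic spacing loosely mimics the cochlear frequency map.
    return np.geomspace(f_lo, f_hi, num_channels + 1)

def process_frame(audio, fs=FS):
    """Return one stimulation level (0..255) per electrode for this frame."""
    edges = band_edges(NUM_CHANNELS)
    levels = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfilt(sos, audio)
        envelope = np.abs(hilbert(band)).mean()   # mean band envelope
        # Compressive acoustic-to-electric map, standing in for the
        # recipient's fitted threshold/comfort levels (assumed).
        level = int(np.clip(255 * np.log1p(50 * envelope) / np.log1p(50), 0, 255))
        levels.append(level)
    return levels
```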

[0036] As detailed above, in the external hearing mode the cochlear implant 112 receives processed sound signals from the sound processing unit 106. However, in the invisible hearing mode, the cochlear implant 112 is configured to capture and process sound signals for use in electrically stimulating the recipient’s auditory nerve cells. In particular, as shown in FIG. 1D, the cochlear implant 112 includes a plurality of implantable sound sensors 153 and an implantable sound processing module 158. Similar to the external sound processing module 124, the implantable sound processing module 158 may comprise, for example, one or more processors and a memory device (memory) that includes sound processing logic. The memory device may comprise any one or more of: Non-Volatile Memory (NVM), Ferroelectric Random Access Memory (FRAM), read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. The one or more processors are, for example, microprocessors or microcontrollers that execute instructions for the sound processing logic stored in the memory device.

[0037] In the invisible hearing mode, the implantable sound sensors 153 are configured to detect/capture signals (e.g., acoustic sound signals, vibrations, etc.), which are provided to the implantable sound processing module 158. The implantable sound processing module 158 is configured to convert received input signals (received at one or more of the implantable sound sensors 153) into output signals for use in stimulating the first ear of a recipient (i.e., the processing module 158 is configured to perform sound processing operations). Stated differently, the one or more processors in implantable sound processing module 158 are configured to execute sound processing logic in memory to convert the received input signals into output signals 155 that are provided to the stimulator unit 142. The stimulator unit 142 is configured to utilize the output signals 155 to generate electrical stimulation signals (e.g., current signals) for delivery to the recipient’s cochlea, thereby bypassing the absent or defective hair cells that normally transduce acoustic vibrations into neural activity.

[0038] It is to be appreciated that the above description of the so-called external hearing mode and the so-called invisible hearing mode is merely illustrative and that the cochlear implant system 102 could operate differently in different embodiments. For example, in one alternative implementation of the external hearing mode, the cochlear implant 112 could use signals captured by the sound input devices 118 and the implantable sound sensors 153 in generating stimulation signals for delivery to the recipient.

[0039] The present inventors have recognized that specific aspects of the auditory rehabilitation pathway for a recipient of a hearing device, and in particular the recipient of a cochlear implant, can be difficult with different technologies. In particular, independent use of a telephone is characteristically difficult for many hearing device recipients, particularly those with sensorineural hearing loss treated with a cochlear implant or other electrically-stimulating hearing device. However, the ability to use the telephone independently correlates strongly with perceived quality of life and is one of the most widespread expectations held by cochlear implant candidates.

[0040] For many cochlear implant recipients, a telephone conversation is perceived as a stressful event, where the cochlear implant recipient often requires another person’s nearby presence in order to have quick assistance in case of, for example, communication failures, an inability to understand conversational context, etc. This inability to use a telephone independently can reduce social connectivity, which in turn can result in an associated decreased quality of life, a depression state, and/or cognitive decline. Challenges experienced with telephone usage may be due to factors such as stimulus type (degraded digital signal quality resulting from telephonic transmission, compared to high fidelity signals captured from live voice), unfamiliar topics and unknown speakers, previous negative experience on the telephone, absence of visual cues such as eye contact, facial expression, lip movement, and body language, and even background noise.

[0041] For individuals with normal hearing, listening is generally effortless when conditions are ideal. Even under challenging listening situations, a normal auditory system can routinely extract the meanings of signals that are degraded, embedded in noise or lacking visual cues. In contrast, for individuals with hearing loss, listening is generally effortful even when conditions are ideal. The challenge in situations where the auditory-only signal is degraded or embedded in noise is greatly magnified due, at least in part, to the reduction in signal precision/granularity associated with electrical hearing relative to normal hearing (e.g., thousands of hair cell receptors versus only ten to twenty-two electrodes). The absence of visual cues makes the listening task even more difficult (requires more effort). The use of linguistic and acoustic context can dramatically enhance the ability of a listener to recognize and understand what they have heard and respond appropriately. This, however, requires the engagement of executive processes that regulate, control, and manage other cognitive processes, such as attention and working memory. Increased cognitive resources are needed, elevating the cognitive load and requiring greater listening effort and motivation to engage meaningfully.

[0042] There are two general processes involved in auditory perception, referred to as “bottom-up processing” and “top-down processing.” Bottom-up processing relates to processing of information as it is received (i.e., in real-time), and refers to the way the cognitive resources are built up from small pieces of sensory information. Top-down processing, however, refers to perception that is driven by cognition and how the brain considers context and applies what the brain knows and what the brain is expecting to perceive and fills in any gaps.

[0043] When processing written information, an individual can control the rate of input by, for example, reading more slowly. However, in hearing and listening tasks, perceptual interpretation and judgement must be made in real-time. This increases the dependence on contextual information. Therefore, speech perception requires the rapid integration of bottom-up processing with top-down processing.

[0044] In addition, meaningful participation in a conversation further requires the recipient to respond appropriately and in real-time. Such communication involves the working memory system, which becomes increasingly engaged in understanding speech with degraded input. In general, working memory refers to the ability to simultaneously store and process received information.

[0045] Furthermore, the more resources that are allocated to recovering a degraded input signal, the fewer resources, or cognitive spare capacity (CSC), remain for higher-level processing (e.g., in the sense of processing the meaning rather than just peripheral detection) of the speech input. A lack of cognitive spare capacity can make listening very tiring and effortful. This listening fatigue further impacts the cognitive load experienced by the individual.

[0046] A major challenge for adult aural rehabilitation is that the processes involved in listening are primarily cognitive, yet listening is perceived behaviorally. This has caused intervention approaches to look at cognition and behavior associated with listening as separate phenomena that are therefore managed in separate silos. However, listening and communication are not experienced as different things by the individual in daily life.

[0047] As such, the auditory rehabilitation techniques presented herein utilize a combined auditory-cognitive training approach, where cognitive enhancement is embedded within auditory tasks, which is most likely to offer generalized benefits to the real-world listening and communication abilities of adults with hearing loss. The auditory rehabilitation techniques presented herein therefore provide a framework for recipients to build and practice real-time listening comprehension and spoken response skills directly related to meaningful telephone usage in a safe practice environment, such as on their mobile device.

[0048] In other words, presented herein are techniques to increase the cognitive resources specifically associated with telephone usage (i.e., the cognitive resources needed by a hearing device recipient to successfully use a telephone independently). As a matter of convenience, the specific set of cognitive resources that a recipient needs to successfully use a telephone independently is sometimes referred to herein as “telephone cognitive resources.” These telephone cognitive resources, once developed, allow a recipient to successfully process “telephone data” in real-time, while simultaneously forming spoken responses thereto. Telephone data generally comprises signals delivered to the recipient via a hearing device, where the data has a degraded quality associated with telephone transmission/communication (e.g., received at the hearing device from a telephone). As noted elsewhere herein, due to the fact that the telephone data is received from a telephone, the telephone data is associated with a degraded digital signal quality (in contrast to higher fidelity live voice), an absence of visual cues, and is often associated with unfamiliar topics, unknown speakers, and/or background noise.
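
As a side note, the degraded, narrowband character of telephone data described above can be emulated when preparing training audio. The Python sketch below band-limits a signal to the traditional 300-3400 Hz telephone passband, resamples it to the classic 8 kHz telephone rate, and optionally mixes in background noise at a target signal-to-noise ratio; these parameter values reflect standard telephony conventions and are illustrative assumptions, not values taken from this disclosure.

```python
# Sketch: degrade wideband audio to emulate "telephone data" for training.
import numpy as np
from scipy.signal import butter, sosfilt, resample_poly

def telephonize(audio, fs, snr_db=None):
    """Return audio band-limited to 300-3400 Hz and resampled to 8 kHz.

    `fs` is the input sample rate in Hz (integer, >= 8000). If `snr_db`
    is given, white background noise is added at that SNR, as used in
    the later, higher-complexity training phases.
    """
    sos = butter(6, [300.0, 3400.0], btype="band", fs=fs, output="sos")
    narrow = sosfilt(sos, audio)          # telephone passband
    narrow = resample_poly(narrow, 8000, fs)  # classic telephone rate
    if snr_db is not None:
        noise = np.random.randn(len(narrow))
        sig_pow = np.mean(narrow ** 2)
        noise_pow = np.mean(noise ** 2)
        noise *= np.sqrt(sig_pow / (noise_pow * 10 ** (snr_db / 10)))
        narrow = narrow + noise
    return narrow
```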

[0049] It is to be appreciated that the “telephone cognitive resources” include cognitive resources that can be used in other contexts and for other cognitive processing. However, as noted, the techniques presented herein are configured to specifically improve the recipient’s cognitive abilities in a manner that allows the recipient to successfully process telephone data in real-time and formulate appropriate responses in real-time, as needed, thereby enabling independent telephone usage by the recipient. As described further below, the techniques presented herein utilize an interactive training method, including virtual agents and/or conversational artificial intelligence (AI), along with services supported by advanced wireless streaming and connectivity, to increase the cognitive resources of a recipient associated with telephone usage (telephone cognitive resources). As a result, the techniques presented herein enable a hearing device recipient to, over time, use a telephone with reduced cognitive effort, which in turn facilitates independent and regular telephone usage for the recipient. As noted, independent and regular telephone usage for everyday communication and social interaction allows the hearing device recipient to, for example, increase their independence and self-confidence.
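
To make the interactive flow concrete, the following minimal Python sketch shows one way the turn-by-turn loop of an interactive virtual conversation (compare claims 16 and 17) could be organized. The callables passed in (`synthesize`, `play`, `record`, `recognize`, `next_turn`) are hypothetical placeholders for a text-to-speech engine, the audio path to the hearing device, microphone capture, speech recognition, and a conversational AI backend; none of them is defined by this disclosure.

```python
# Sketch: one interactive virtual conversation, turn by turn.
def run_virtual_conversation(opening_line, synthesize, play, record,
                             recognize, next_turn, max_turns=10):
    """Conduct one virtual conversation and return its transcript.

    `synthesize`: text -> audio; `play`: audio -> None (hearing-device
    path); `record`: () -> audio; `recognize`: audio -> text or None;
    `next_turn`: transcript -> text or None (conversational AI).
    """
    transcript = []
    agent_line = opening_line
    for _ in range(max_turns):
        play(synthesize(agent_line))        # deliver this portion
        reply = recognize(record())         # recipient's spoken response
        transcript.append((agent_line, reply))
        if reply is None:                   # no usable response captured
            return transcript
        # Dynamically generate the next portion of the conversation from
        # the immediately preceding spoken response.
        agent_line = next_turn(transcript)
        if agent_line is None:              # conversation complete
            return transcript
    return transcript
```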

[0050] FIG. 2 is a general flow diagram illustrating training stages/phases in accordance with an example method 265 presented herein. It is to be appreciated that the specific training stages and training stage structures, as well as the order thereof, shown in FIG. 2 are merely illustrative and that the techniques presented herein can be implemented with different numbers of stages in various combinations.

[0051] In general, the content modules shown in FIG. 2 are designed to complement training to increase bottom-up processing of the signal with more ecologically valid training to boost top-down information processing based on knowledge. Training in the use of various types of context builds on linguistic and world knowledge with the aim being to focus attention to facilitate comprehension, memory, and speed of information processing.

[0052] As described further below, the training method 265 is delivered across several levels/phases, starting from introductory warm-up phases based on bottom-up processing and identification of basic auditory targets. The conversational levels start with passive listening comprehension and progress through to advanced interactive conversations requiring both bottom-up and top-down information processing and spoken language responses in real time. The method 265 can be implemented on a computing device, such as a mobile phone (smartphone), tablet computer, laptop computer, or other device capable of replicating telephone call quality. For ease of description, method 265 is described with reference to cochlear implant system 102 and external device 105, where the external device 105 is a mobile phone.
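
For illustration only, the phase sequencing and gating just described might be organized as in the Python sketch below. The phase names follow FIG. 2 as described in the following paragraphs, while the 80% pass criterion is an illustrative assumption rather than a value specified herein.

```python
# Sketch: ordered phases of method 265 with a simple advancement gate.
from dataclasses import dataclass

@dataclass
class Phase:
    name: str
    exercises: list             # auditory targets or conversation scripts
    pass_fraction: float = 0.8  # assumed competence threshold, not from the patent

PHASES = [
    Phase("phase 266: introductory auditory targets", exercises=[]),
    Phase("phase 267: simple pre-recorded conversations", exercises=[]),
    Phase("phase 268: more conversational turns", exercises=[]),
    Phase("phase 269: negotiation/clarification scenarios", exercises=[]),
    Phase("interactive virtual conversations", exercises=[]),
]

def next_phase(current_index, fraction_correct):
    """Advance only when the recipient meets the phase's criterion."""
    if fraction_correct >= PHASES[current_index].pass_fraction:
        return min(current_index + 1, len(PHASES) - 1)
    return current_index  # otherwise repeat the current phase
```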

[0053] As shown in FIG. 2, method 265 begins at a first or “introductory” level/phase 266 where the recipient listens to a closed set of predetermined auditory targets for telephone usage. The predetermined auditory targets, which are sometimes referred to as “auditory targets,” relate to information/data that is likely to be encountered by the recipient while using a telephone. The auditory targets can include, for example, transactional information (e.g., numbers, etc.), information related to social planning (e.g., dates, names, locations, times, etc.), and/or other types of information. That is, the predetermined auditory targets are not a selection of random words, but instead are predetermined lists designed to introduce the recipient to specific types of information likely to be encountered when using a telephone.

[0054] At 266, the mobile phone 105 emits at least one (e.g., one or a plurality of) acoustic sound signal representing at least one auditory target via an integrated receiver (not shown in FIGs. 1A-1D) and the at least one acoustic sound signal is received at the cochlear implant system 102 (e.g., via the sound input devices 118 and/or the implantable sensors 153). The at least one acoustic sound signal is processed by the cochlear implant system 102 (e.g., external sound processing module 124 and/or implantable sound processing module 158) and then delivered to the recipient as electrical signals via the electrodes 144, to evoke perception of the at least one auditory target. Following delivery of the at least one auditory target to the recipient (as electrical signals), the recipient uses a user interface (e.g., touch screen) of the mobile phone 105 to identify what she heard. This process may be repeated for each of the predetermined auditory targets or groups of predetermined auditory targets.

[0055] In certain embodiments, following delivery of each of the at least one auditory target, the user interface of the mobile phone 105 can be configured to provide the recipient with a series of text or pictorial responses from which the recipient can select in order to identify what she heard. The mobile phone 105 can provide feedback to the recipient indicating whether or not she correctly perceived the predetermined auditory targets. In other embodiments, the user can enter text at the mobile phone 105 to identify what she heard, while in still other embodiments the mobile phone 105 can be configured to receive a spoken/verbalized indication from the recipient (e.g., use a speech-to-text function). It is to be appreciated that these manners in which the recipient indication is received are merely illustrative and that other techniques can be used in alternative embodiments.
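
As one concrete, purely illustrative realization of such a closed-set trial, the Python sketch below plays a target, collects a selection from displayed alternatives, gives immediate feedback, and reports whether the recipient was correct. The `play`, `choose`, and `feedback` callables are hypothetical stand-ins for the mobile phone's audio output, its touch-screen multiple-choice prompt, and its correct/incorrect indication.

```python
# Sketch: one closed-set trial from the introductory phase 266.
import random

def run_target_trial(target, distractors, play, choose, feedback):
    """Play one auditory target and score the recipient's selection."""
    choices = [target] + list(distractors)
    random.shuffle(choices)          # randomize display order
    play(target)                     # emit the acoustic target
    selection = choose(choices)      # recipient picks what she heard
    correct = (selection == target)
    feedback(correct)                # immediate right/wrong feedback
    return correct
```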

[0056] In certain embodiments, the method 265 is self-guided and the recipient can proceed to the next stage, or repeat the introductory or first phase 266, or portions thereof, at her discretion. In other embodiments, a monitoring engine (e.g., artificial intelligence (AI) engine) can monitor the method 265 and recommend to the recipient when she should proceed to the next stage and/or whether the recipient should repeat all or portions of the introductory phase 266.

[0057] Once the recipient is comfortable with the auditory targets (and/or the monitoring engine recommends that the recipient proceed), the method 265 proceeds to a second phase 267. During the second phase 267, the recipient listens to pre-recorded conversations between two voices, the content of which is relatively simple and easy to comprehend. The conversations are provided to the recipient via the cochlear implant system 102, in the manner described above. Following each conversation, the recipient is asked to respond to a question about the content of the conversation by selecting from a list of possible answers displayed at a user interface of the mobile phone 105. In general, the conversations provided during the second phase 267 are configured to develop a “relevant threshold level of competence” in the recipient as related to passively listening for meaning and responding to questions of elementary comprehension. That is, as used herein, a “relevant threshold level of competence” refers to the development of a minimum ability in the recipient to passively listen to, and comprehend, elementary or fundamental aspects of a short conversation.

[0058] Again, as noted, method 265 may be self-guided, where the recipient can progress to a next stage at her discretion, and/or method 265 can be controlled with a monitoring engine that recommends to the recipient when she should proceed to the next stage and/or whether the recipient should repeat all or portions of a present stage.

[0059] In certain embodiments, the recipient cannot progress from the second phase 267 until the recipient has demonstrated the relevant threshold level of competence. Development of the relevant threshold level of competence could be evidenced, for example, by the recipient correctly answering a certain number or percentage of context questions.

[0060] Upon completion of the second phase, the method 265 proceeds to a third phase 268. During the third phase 268, the recipient again listens to pre-recorded conversations between two voices, but the conversations are more difficult for the recipient to follow (e.g., increased listening complexity). In particular, the conversations in the third phase 268 include more conversational turns, relative to the conversations in the second phase 267, and are designed to extend the recipient’s auditory memory.

[0061] Again, the pre-recorded conversations are provided to the recipient via the cochlear implant system 102, in the manner described above. Following each conversation, the recipient is asked to respond to one or more questions about the content of the conversation by, for example, selecting from a list of possible answers displayed at a user interface of the mobile phone 105. As above, method 265 may be self-guided, where the recipient can progress to a next phase at her discretion, and/or method 265 can be controlled with a monitoring engine that recommends to the recipient when she should proceed to the next stage and/or whether the recipient should repeat all or portions of a present stage.

[0062] In certain embodiments, the recipient cannot progress from the third phase 268 until the recipient has demonstrated an acceptable level of competence with the increased conversation turns. The acceptable level of competence with the increased conversation turns could be evidenced, for example, by the recipient correctly answering a certain number or percentage of context questions.
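
By way of illustration only, the progression gates described in paragraphs [0059] and [0062] could be expressed as a simple check over the recipient's answers to context questions; the 80% threshold and ten-question minimum below are assumptions rather than values taken from this disclosure.

def has_threshold_competence(answers, required_fraction=0.8, minimum_questions=10):
    """answers: list of booleans, one per context question answered
    during the current phase. Returns True once enough questions have
    been answered and the required share of them were correct."""
    if len(answers) < minimum_questions:
        return False
    return sum(answers) / len(answers) >= required_fraction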

[0063] Upon completion of the third phase, the method 265 proceeds to a fourth phase 269. During the fourth phase 269, the recipient again listens to pre-recorded conversations between two voices, but the conversations are more difficult for the recipient to follow (e.g., even greater listening complexity). In particular, the conversations in the fourth phase 269 include even more conversational turns, relative to the conversations in the third phase 268, but also add negotiation and clarification scenarios to the content. As used herein, clarification scenarios are conversation portions in which one of the two voices clarifies or corrects statements made by the other voice (e.g., “Not five guests, only four guests,” and so on). As used herein, negotiation scenarios are conversation portions in which the two voices perform a real-time negotiation (e.g., First Voice: “We don’t have tables available at 6:00 PM. I can give you a table at 5:30 PM.”; Second Voice: “No, are there tables available at 6:30 or 7:00 PM?” and so on).

[0064] Again, the pre-recorded conversations in the fourth phase 269 are provided to the recipient via the cochlear implant system 102, in the manner described above. Following each conversation, the recipient is asked to respond to one or more questions about the content of the conversation by, for example, selecting from a list of possible answers displayed at a user interface of the mobile phone 105. Relative to the second and third phases, the questions in the fourth phase 269 are more difficult and can relate to, for example, the negotiation and clarification scenarios presented during the conversations. In certain embodiments, the recipient cannot progress from the fourth phase 269 until the recipient has demonstrated an acceptable level of competence with the negotiation and clarification scenarios. The acceptable level of competence with negotiation and clarification scenarios could be evidenced, for example, by the recipient correctly answering a certain number or percentage of context questions.
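
By way of illustration only, a pre-recorded fourth-phase conversation with its clarification and negotiation turns, together with the follow-up context questions, could be encoded as the data structure below; the structure and field names are assumptions made for the sake of the example.

FOURTH_PHASE_CONVERSATION = {
    "turns": [
        ("Voice 1", "I'd like to book a table for five guests at 6:00 PM."),
        ("Voice 2", "A table for five at 6:00 PM, certainly."),
        ("Voice 1", "Not five guests, only four guests."),              # clarification
        ("Voice 2", "We don't have tables available at 6:00 PM. "
                    "I can give you a table at 5:30 PM."),
        ("Voice 1", "No, are there tables available at 6:30 or 7:00 PM?"),  # negotiation
        ("Voice 2", "Yes, we can seat four guests at 6:30 PM."),
    ],
    "questions": [
        {"prompt": "How many guests are in the final booking?",
         "choices": ["Four", "Five", "Six"], "answer": "Four"},
        {"prompt": "What time was the table finally booked for?",
         "choices": ["5:30 PM", "6:00 PM", "6:30 PM"], "answer": "6:30 PM"},
    ],
}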

[0065] Upon completion of the fourth phase 269, the method 265 proceeds to a fifth phase 270. The fifth phase 270 is different from prior phases in that it is the first phase to require real-time interaction between the recipient and a virtual assistant (bot). That is, the fifth phase 270 newly introduces the need for the recipient to actively respond by providing her own spoken answers, in a manner that is similar to a real-world interactive conversation. In this fifth phase 270, the content of the conversation is relatively straightforward and may be less complex than the conversations in, for example, the third phase 268 and/or the fourth phase 269. However, the increased cognitive difficulty associated with the fifth phase 270 arises due to the recipient having to interactively respond, instead of passively listening as in the earlier phases. In general, the fifth phase 270 requires the recipient to answer common questions encountered in typical telephone conversations using known contexts.

[0066] In the fifth phase 270, and subsequent phases described below, the techniques presented herein can use speech recognition with natural language processing/understanding to receive spoken/verbalized responses from the recipient during the virtual conversation. In addition, voice and/or text interfaces and/or conversational artificial intelligence can be used to create bots or virtual agents that dynamically generate conversation portions/segments that are delivered to the recipient (as acoustic sound signals received via the cochlear implant system 102). That is, the external device 105 is configured (e.g., via conversational artificial intelligence) to tailor the conversation to the responses of the recipient, thereby allowing interactive telephone experiences that do not feel overly scripted. Features like error handling can ensure conversations remain on track, while integration with a content management system (CMS) backend allows for personalization for a given recipient.
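
By way of illustration only, one turn of such an interactive virtual conversation could be organized as in the sketch below, in which speech recognition produces a transcript, a natural language understanding step extracts an intent, and an error-handling branch re-prompts the recipient to keep the conversation on track. The callables recognize_speech, classify_intent, next_bot_line, and speak are hypothetical stand-ins for a speech-recognition service and a conversational-AI backend, not part of this disclosure.

MAX_REPROMPTS = 2

def virtual_conversation_turn(bot_line, recognize_speech, classify_intent,
                              next_bot_line, speak):
    speak(bot_line)                           # delivered acoustically via the hearing device
    for attempt in range(MAX_REPROMPTS + 1):
        transcript = recognize_speech()       # recipient's spoken response, as text
        intent = classify_intent(transcript)  # NLU over the transcript; None if unclear
        if intent is not None:
            return next_bot_line(intent)      # tailor the next line to the recipient's reply
        # Error handling: re-prompt rather than letting the dialog derail.
        speak("Sorry, I didn't catch that. Could you say it again?")
    return next_bot_line("fallback")          # hand over to a recovery path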

[0067] The dynamic generation of conversation portions enables the recipient to develop the cognitive resources needed to listen to a conversation, as well as the cognitive resources associated with the real-time communication skills needed to deal independently with everyday transactional telephone tasks, such as booking tickets or restaurant tables, ordering take-out food, and making appointments and social plans.

[0068] Returning to the example of FIG. 2, upon completion of the fifth phase 270, the method 265 proceeds to a sixth phase 271. Similar to the fifth phase 270, the sixth phase 271 also requires real-time interaction between the recipient and a virtual assistant (bot). However, the sixth phase 271 again increases the listening complexity by asking the recipient to listen to longer and/or more complex sentences, with additional conversation turns, in the virtual conversation. The sixth phase 271 can use known contexts, but also introduces clarification scenarios (e.g., increased response complexity), which are conversation portions that require the recipient to employ/perform clarification strategies in real-time. That is, the virtual conversations in the sixth phase 271 require the recipient to add clarifications as part of her response (e.g., “Not five guests, only four guests,” and so on).

[0069] Upon completion of the sixth phase 271, the method 265 proceeds to a seventh phase 272. Similar to the sixth phase 271, the seventh phase 272 also requires real-time interaction between the recipient and a virtual assistant (bot). However, the seventh phase 272 again increases the complexity by asking the recipient to listen to still longer and/or more complex sentences, with additional conversation turns, in the virtual conversation. The seventh phase 272 also adds in “sabotage” or “negotiation” scenarios (e.g., even greater response complexity), which are conversation portions that require the recipient to employ negotiation strategies in real-time. That is, the virtual conversation in the seventh phase 272 requires the recipient to perform negotiation as part of her spoken response (e.g., Virtual Assistant: “We don’t have tables available at 6:00 PM. I can give you a table at 5:30 PM.”; Recipient: “No, are there tables available at 6:30 or 7:00 PM?” and so on).

[0070] As noted above, the pre-recorded conversations and virtual conversations are provided to the recipient via the cochlear implant system 102, in the manner described above. In the virtual conversations, the recipient speaks her responses, which are captured by the mobile phone 105. Also as noted above, method 265 may be self-guided, where the recipient can progress to a next phase at her discretion, and/or method 265 can be controlled with a monitoring engine that recommends to the recipient when she should proceed to the next stage and/or whether the recipient should repeat all or portions of a present stage.

[0071] In the example of FIG. 2, the software at the external device 105 is configured to control volume, word rate, and pitch, and to modify a variety of spoken language characteristics, such as pauses and the degree of emphasis on given words. Such flexibility allows the techniques presented herein to build increasing levels of listening and response complexity, including emotional context of speech, for the hearing device recipient. Other features that may be used to add complexity or functionality, in any or all of the different phases, include the ability to add sound files that play during the conversation, such as environmental background noise or simulated telephone signal degradation/distortion.
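
By way of illustration only, one way to realize these spoken-language controls is standard Speech Synthesis Markup Language (SSML), which many text-to-speech engines accept; the sketch below generates SSML with placeholder values for rate, pitch, volume, pause duration, and word emphasis. Background noise or simulated signal degradation would be mixed into the synthesized audio in a separate post-processing step, which is not shown.

def to_ssml(text, rate="90%", pitch="+0st", volume="medium",
            pause_ms=400, emphasized=()):
    """Wrap plain text in SSML prosody markup, optionally emphasizing
    selected words and appending a pause."""
    words = []
    for word in text.split():
        if word in emphasized:
            word = f"<emphasis level='strong'>{word}</emphasis>"
        words.append(word)
    body = " ".join(words)
    return (f"<speak><prosody rate='{rate}' pitch='{pitch}' volume='{volume}'>"
            f"{body}<break time='{pause_ms}ms'/></prosody></speak>")

# Example: slow the word rate and emphasize the time being negotiated.
ssml = to_ssml("I can give you a table at 5:30 PM.",
               rate="80%", emphasized=("5:30",))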

[0072] FIG. 3 illustrates an example arrangement for a suitable external device (computing device) configured to implement aspects of the techniques presented herein. Computing devices, environments, or configurations that can be suitable for use with examples described herein include, but are not limited to, personal computers, server computers, hand-held devices, laptop devices, multiprocessor systems, microprocessor-based systems, programmable consumer electronics (e.g., smart phones), network PCs, minicomputers, mainframe computers, tablet computers (tablets), distributed computing environments that include any of the above systems or devices, and the like. The external devices presented herein can be a single virtual or physical device operating in a networked environment over communication links to one or more remote devices. The remote device can be a hearing device or hearing device system (e.g., cochlear implant 112 or cochlear implant system 102 of FIG. 1D, an auditory prosthesis), a personal computer, a server, a router, a network personal computer, a peer device, or other common network node. For ease of description, the external device shown in FIG. 3 is referred to as external device 305, and can represent a basic arrangement for external device 105 of FIGs. 1A-1D.

[0073] In its most basic configuration, the external device 305 includes at least one processing unit 375 and memory 376. The processing unit 375 includes one or more hardware or software processors (e.g., Central Processing Units) that can obtain and execute instructions. The processing unit 375 can communicate with and control the performance of other components of the external device 305.

[0074] The memory 376 is one or more software or hardware-based computer-readable storage media operable to store information accessible by the processing unit 375. The memory 376 can store, among other things, instructions executable by the processing unit 375 to implement applications or cause performance of operations described herein, as well as other data. The memory 376 can be volatile memory (e.g., RAM), non-volatile memory (e.g., ROM), or combinations thereof. The memory 376 can include transitory memory or non-transitory memory. The memory 376 can also include one or more removable or non-removable storage devices. In examples, the memory 376 can include RAM, ROM, EEPROM (Electronically-Erasable Programmable Read-Only Memory), flash memory, optical disc storage, magnetic storage, solid state storage, or any other memory media usable to store information for later access. In examples, the memory 376 encompasses a modulated data signal (e.g., a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal), such as a carrier wave or other transport mechanism, and includes any information delivery media. By way of example, and not limitation, the memory 376 can include wired media, such as a wired network or direct-wired connection, and wireless media, such as acoustic, RF, infrared, and other wireless media, or combinations thereof. In certain embodiments, the memory 376 comprises telephone enhancement logic 377 that, when executed, enables the processing unit 375 to perform aspects of the techniques presented.

[0075] In the illustrated example, the external device 305 further includes a network adapter 378, one or more input devices 379, and one or more output devices 380. The one or more input devices 379 and the one or more output devices 380 are sometimes collectively referred to herein as a user interface 382 and can comprise the same or different components. The external device 305 can include other components, such as a system bus, component interfaces, a graphics system, a power source (e.g., a battery), among other components.

[0076] The network adapter 378 is a component of the external device 305 that provides network access (e.g., access to at least one network 381). The network adapter 378 can provide wired or wireless network access and can support one or more of a variety of communication technologies and protocols, such as ETHERNET, cellular, BLUETOOTH, near-field communication, Radio Frequency (RF), infrared (IR), among others. The network adapter 378 can include one or more antennas and associated components configured for wireless communication according to one or more wireless communication technologies and protocols.

[0077] The one or more input devices 379 are devices over which the external device 305 receives input from a user, such as a recipient during method 265 described above. The one or more input devices 379 can include physically-actuatable user-interface elements (e.g., buttons, switches, or dials), touch screens, keyboards, mice, pens, and voice/sound input devices, among other input devices.

[0078] The one or more output devices 380 are devices by which the external device 305 is able to provide output to a user. The output devices 380 can include displays, receivers, and/or speakers, among other output devices.

[0079] It is to be appreciated that the arrangement for external device 305 shown in FIG. 3 is merely illustrative and that aspects of the techniques presented herein may be implemented at a number of different types of systems/devices. For example, the external device 305 could be a laptop computer, tablet computer, mobile phone, surgical system, etc.

[0080] FIG. 4 is a flowchart of an example method 490, in accordance with certain embodiments presented herein. Method 490 begins at 492 where an external device and a hearing device deliver one or more predetermined auditory targets to a recipient of the hearing device, wherein the auditory targets comprise information typically processed via a telephone. At 494, the external device and the hearing device deliver one or more pre-recorded conversations to the recipient. At 496, the external device conducts, via the hearing device, one or more interactive virtual conversations with the recipient.

[0081] FIG. 5 is a flowchart of an example method 590, in accordance with certain embodiments presented herein. Method 590 begins at 592 where a recipient of a hearing device is introduced to a set of predetermined auditory targets. At 594, an external device delivers a plurality of pre-recorded conversations to the recipient with increasing listening complexity. At 596, the external device conducts, with a virtual telephone assistant, a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing listening or response complexity.

[0082] FIG. 6 is a flowchart of an example method 690, in accordance with certain embodiments presented herein. Method 690 begins at 692 where an external device trains a recipient of a hearing device to extract, in real-time, content of received stimulation signals associated with a degraded signal quality associated with telephones, and in the absence of visual cues. At 694, the external device conducts a plurality of virtual conversations with the recipient, wherein the plurality of virtual conversations are conducted with increasing levels of listening and response complexity.

[0083] As should be appreciated, while particular uses of the technology have been illustrated and discussed above, the disclosed technology can be used with a variety of devices in accordance with many examples of the technology. The above discussion is not meant to suggest that the disclosed technology is only suitable for implementation within systems akin to that illustrated in the figures. In general, additional configurations can be used to practice the processes and systems herein and/or some aspects described can be excluded without departing from the processes and systems disclosed herein.

[0084] This disclosure described some aspects of the present technology with reference to the accompanying drawings, in which only some of the possible aspects were shown. Other aspects can, however, be embodied in many different forms and should not be construed as limited to the aspects set forth herein. Rather, these aspects were provided so that this disclosure is thorough and complete and fully conveys the scope of the possible aspects to those skilled in the art.

[0085] As should be appreciated, the various aspects (e.g., portions, components, etc.) described with respect to the figures herein are not intended to limit the systems and processes to the particular aspects described. Accordingly, additional configurations can be used to practice the methods and systems herein and/or some aspects described can be excluded without departing from the methods and systems disclosed herein.

[0086] Similarly, where steps of a process are disclosed, those steps are described for purposes of illustrating the present methods and systems and are not intended to limit the disclosure to a particular sequence of steps. For example, the steps can be performed in differing order, two or more steps can be performed concurrently, additional steps can be performed, and disclosed steps can be excluded without departing from the present disclosure. Further, the disclosed processes can be repeated.

[0087] Although specific aspects were described herein, the scope of the technology is not limited to those specific aspects. One skilled in the art will recognize other aspects or improvements that are within the scope of the present technology. Therefore, the specific structure, acts, or media are disclosed only as illustrative aspects. The scope of the technology is defined by the following claims and any equivalents thereof.

[0088] It is also to be appreciated that the embodiments presented herein are not mutually exclusive and that the various embodiments may be combined with one another in any of a number of different manners.