

Title:
BLUETOOTH ENABLED INTERCOM WITH HEARING AID FUNCTIONALITY
Document Type and Number:
WIPO Patent Application WO/2023/130105
Kind Code:
A1
Abstract:
A hearing aid, comprising a microphone configured to produce a microphone output signal representing sounds transduced by the microphone; an earphone speaker configured to convert an equalized output electrical signal into acoustic waves; a Bluetooth wireless transceiver; and an automated processor configured to spot a plurality of different keywords; and selectively control a Bluetooth communication partner dependent on the spotted keyword.

Inventors:
POLTORAK ALEXANDER (US)
Application Number:
PCT/US2023/010014
Publication Date:
July 06, 2023
Filing Date:
January 03, 2023
Assignee:
POLTORAK ALEXANDER (US)
POLTORAK TECH LLC (US)
International Classes:
H04R25/00; G06F3/16; H04W4/80
Domestic Patent References:
WO2019199706A12019-10-17
Foreign References:
US20140023217A12014-01-23
US20100208924A12010-08-19
US20100086156A12010-04-08
US20120189140A12012-07-26
EP2755403B12020-10-14
Attorney, Agent or Firm:
HOFFBERG, Steven (US)
Claims:

CLAIMS

1. A hearing aid, comprising: an input port configured to receive a signal representing sounds; an output port configured to output an electrical signal representing acoustic waves; a wireless transceiver, configured to bidirectionally communicate audio signals; and a digital processor configured to: receive an audio signal from the wireless transceiver; define the output electrical signal based on the signal from the input port, the communicated audio signals, and an audio equalization profile; and implement a speech-controlled user interface, configured to select a counterparty wireless transceiver for communication through the wireless transceiver from a plurality of counterparty wireless transceivers based on a spoken command.

2. The hearing aid according to claim 1, further comprising a housing, wherein the wireless transceiver, digital processor and a self-contained power source are contained within the housing.

3. The hearing aid according to claim 2, further comprising a microphone configured to produce the signal representing sounds and a speaker configured to generate the acoustic waves, the microphone and the speaker being contained within the housing, wherein the housing is an intraaural housing, and the microphone comprises a bone conduction microphone.

4. The hearing aid according to any of claims 1 to 3, further comprising at least one sensor selected from the group consisting of an accelerometer, a gyroscope, an absolute position sensor, and a relative position sensor, wherein the digital processor is further configured to select the counterparty dependent on a signal from the sensor.

5. The hearing aid according to any of claims 1 to 4, wherein the wireless transceiver comprises at least one of a Bluetooth transceiver and a Bluetooth Low Energy transceiver configured to implement a mesh network.

6. The hearing aid according to any of claims 1 to 5, wherein the digital processor is further configured to perform keyword spotting from among a plurality of predetermined keywords.

7. The hearing aid according to any of claims 1 to 6, wherein the digital processor is further configured to perform keyword spotting using a convolutional neural network.

8. The hearing aid according to any of claims 1 to 7, wherein the spoken command comprises a human name.

9. The hearing aid according to any of claims 1 to 8, further comprising at least one mediator network hub device communicatively coupled with the wireless transceiver, configured to facilitate communications between the wireless transceiver and a respective wireless transceiver of another hearing aid.

10. The hearing aid according to claim 9, wherein the wireless transceiver is configured to communicate through a Bluetooth mesh network.

11. A Bluetooth enabled intercom system, comprising: at least one mediator network hub device communicatively coupled with a Bluetooth or BLE network interface; a plurality of Bluetooth enabled hearing aid devices, each comprising a Bluetooth module, a microphone with a data conversion module, an amplifier and a speaker.

12. The Bluetooth enabled intercom system according to claim 11, wherein the microphone comprises a bone conduction microphone.

13. The Bluetooth enabled intercom system according to any of claims 11 to 12, wherein at least two Bluetooth enabled hearing aid devices are communicatively coupled through the Bluetooth or BLE network.

14. The Bluetooth enabled intercom system according to any of claims 11 to 13, wherein at least two Bluetooth enabled hearing aid devices are communicatively coupled through the at least one mediator network hub device.

15. A method of intercommunication, comprising: receiving and converting into an electrical signal, by a microphone of a first Bluetooth enabled hearing aid device, a sound signal to be transmitted from a first user to at least one second Bluetooth enabled hearing aid device of at least one second user; converting, using a first data conversion module, the electrical signal into digital data capable of being transmitted over a Bluetooth network; transmitting, by a first Bluetooth module, the digital data to at least one second Bluetooth enabled hearing aid device communicatively coupled within the Bluetooth network; receiving, by a second Bluetooth module of the at least one second Bluetooth enabled hearing aid device, the digital data transmitted by the first Bluetooth enabled hearing aid device; re-converting, by a second data conversion module of the at least one second Bluetooth enabled hearing aid device, the received digital data into an analog sound signal; amplifying, by a second amplifier of the at least one second Bluetooth enabled hearing aid device, the reconverted analog sound signal; and emitting, by a second speaker of the at least one second Bluetooth enabled hearing aid device, the amplified analog sound signal into an ear of the at least one second user.

16. The method of intercommunication of claim 15, wherein the first and the at least one second Bluetooth enabled hearing aid device is communicatively coupled over the Bluetooth network directly using the first Bluetooth module and the second Bluetooth module in a mesh network.

17. The method of intercommunication of any of claims 15 and 16, wherein the first and the at least one second Bluetooth enabled hearing aid device communicatively couples over the Bluetooth network through at least one mediator network hub device in a star network.

18. The method of intercommunication of any of claims 15 to 17, wherein the at least one mediator network hub device pairs with the first and the at least one second Bluetooth enabled hearing aid device to communicatively couple them over the Bluetooth network.

19. The method of intercommunication of any of claims 15 to 18, wherein the at least one mediator network hub device is selected from the group consisting of a smartphone, a laptop, a wearable smart device, a dedicated network hub device, a personal smart assistant, a Wi-Fi router or Wi-Fi card, a switch, an internet dongle, any third-party domestic smart assistant, and any other portable wireless network device.

20. The method of intercommunication of any of claims 15 to 19, wherein the microphone of the first Bluetooth enabled hearing aid device comprises a bone conduction microphone, and the first Bluetooth enabled hearing aid device is contained within an intraaural housing with a self-contained power supply.

21. A hearing aid, comprising: an input port configured to receive a signal representing sounds; an output port configured to output an electrical signal representing acoustic waves; a wireless transceiver, configured to communicate audio signals; and a digital processor configured to: receive an audio signal from the wireless transceiver; define the output electrical signal based on the signal representing sounds, the communicated audio signals, and an audio equalization profile; and implement a speech-controlled user interface, configured to select a counterparty wireless transceiver for communication through the wireless transceiver from a plurality of counterparty wireless transceivers based on a spoken command.

22. The hearing aid according to claim 21, further comprising a housing, wherein the wireless transceiver, the digital processor, and a battery powering the wireless transceiver and the digital processor are contained within the housing.

23. The hearing aid according to claim 22, further comprising a microphone within the housing, configured to produce the signal representing sounds, and a speaker configured to generate the acoustic waves, wherein the housing is an intraaural housing, and the microphone comprises a bone conduction microphone.

24. The hearing aid according to any of claims 21 to 23, further comprising at least one sensor selected from the group consisting of an accelerometer, a gyroscope, an absolute position sensor, and a relative position sensor, wherein the digital processor is further configured to select the counterparty dependent on a signal from the sensor.

25. The hearing aid according to any of claims 21 to 24, wherein the wireless transceiver comprises at least one of a Bluetooth transceiver and a Bluetooth Low Energy transceiver configured to implement a mesh network between a plurality of the hearing aids.

26. The hearing aid according to any of claims 21 to 25, wherein the digital processor is integrated and further configured to perform keyword spotting from among a plurality of predetermined keywords.

27. The hearing aid according to any of claims 21 to 26, wherein the digital processor comprises a convolutional neural network.

28. The hearing aid according to any of claims 21 to 27, wherein the spoken command comprises a human name, and wherein the human name uniquely corresponds to a human associated with the counterparty wireless transceiver.

29. The hearing aid according to any of claims 21 to 28, further comprising at least one router communicatively coupled with the wireless transceiver, configured to facilitate communications between the wireless transceiver and the counterparty wireless transceiver.

30. The hearing aid according to claim 29, wherein the wireless transceiver is configured to communicate through a Bluetooth mesh network.

31. A Bluetooth enabled intercom system, comprising: at least one communication router communicatively coupled with a Bluetooth or BLE network interface; a plurality of Bluetooth enabled hearing aid devices, each comprising a Bluetooth module, a microphone, an amplifier, a speaker, and a processor configured to control a point to point communication between the Bluetooth enabled hearing aid device and a respective other Bluetooth enabled hearing aid device.

32. The Bluetooth enabled intercom system according to claim 31, wherein the microphone comprises a bone conduction microphone.

33. The Bluetooth enabled intercom system according to any of claims 31 and 32, wherein at least two Bluetooth enabled hearing aid devices are directly communicatively coupled through the Bluetooth or BLE network.

34. The Bluetooth enabled intercom system of any of claims 31 and 32, wherein at least two Bluetooth enabled hearing aid devices are communicatively coupled through the at least one communication router.

35. A method of intercommunication, comprising: receiving and converting into a digitized electrical signal, by a microphone of a first hearing aid device, a sound signal to be transmitted from a first user; determining an identifier of a second hearing aid device by speech received through the first hearing aid device; transmitting, by a first personal area network transceiver module, the digitized electrical signal to at least one second personal area network transceiver module of a second hearing aid device communicatively coupled within a personal area network; receiving, by the second personal area network transceiver module, the digitized electrical signal transmitted by the first personal area network transceiver module; and equalizing and reproducing the sound signal by an amplifier of the second hearing aid device.

36. The method of intercommunication of claim 35, wherein the first personal area network transceiver module and the second personal area network transceiver module communicate through a mesh network.

37. The method of intercommunication of claim 35, wherein the first personal area network transceiver module and the second personal area network transceiver module communicate through at least one communication router in a star network.

38. The method of intercommunication of claim 37, wherein the at least one communication router pairs with the first and the second personal area network transceiver modules to communicatively couple them over the personal area network.

39. The method of intercommunication of any of claims 37 to 38, wherein the at least one communication router is selected from the group consisting of a smartphone, a laptop, a wearable smart device, a dedicated network hub device, a personal smart assistant, a Wi-Fi router or Wi-Fi card, a switch, an internet dongle, any third-party domestic smart assistant, and any other portable wireless network device.

40. The method of intercommunication of any of claims 35 to 39, wherein the microphone of the first hearing aid device comprises a bone conduction microphone, and the first hearing aid device is contained within an intraaural housing with a self-contained power supply.

Description:
BLUETOOTH ENABLED INTERCOM WITH HEARING AID FUNCTIONALITY

TECHNICAL FIELD OF THE INVENTION

[0001] The present invention relates generally to intercom devices, and more particularly to hearing aids with voice intercommunication (“intercom”) functionality.

BACKGROUND OF THE INVENTION

[0002] About 18 percent of adults aged 20-69 who report 5 or more years of exposure to very loud noise at work have speech-frequency hearing loss in both ears, as compared to 5.5 percent of adults with speech-frequency hearing loss in both ears who report no occupational noise exposure. One in eight people in the United States (13 percent, or 30 million) aged 12 years or older has hearing loss in both ears, based on standard hearing examinations. By another calculation, about 2 percent of adults aged 45 to 54 have disabling hearing loss. The rate increases to 8.5 percent for adults aged 55 to 64. Nearly 25 percent of those aged 65 to 74 and 50 percent of those who are 75 and older have disabling hearing loss. According to one estimate, about 28.8 million U.S. adults could benefit from using hearing aids, yet only 28.5 percent of hearing-impaired Americans use hearing aids. Among adults aged 70 and older with hearing loss who could benefit from hearing aids, fewer than one in three (30 percent) has ever used them.

[0003] Hearing aid devices normally include at least one microphone to transduce sound signals surrounding a user. However, the sound intensity in such conventional hearing aid devices decreases significantly with increasing distance. It has been demonstrated that listeners wearing a hearing aid should be no more than 1.8 meters away from the signal of interest for optimal speech intelligibility. Therefore, when hearing impaired people are in a domestic environment and wearing hearing aids, communication difficulties may persist.
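The 1.8-meter limit cited above is consistent with free-field attenuation, under which sound pressure level falls by roughly 6 dB for each doubling of distance from the source. A minimal sketch of that relation (the function and numbers below are illustrative, not part of the claimed invention):

```python
import math

def spl_drop_db(d1: float, d2: float) -> float:
    """Free-field SPL drop, in dB, when listener distance grows from d1 to d2.

    Applies the inverse-square law: level falls ~6 dB per doubling of distance.
    """
    return 20.0 * math.log10(d2 / d1)

# Moving from the cited 1.8 m intelligibility limit out to 3.6 m
# doubles the distance and costs about 6 dB of signal level.
drop = spl_drop_db(1.8, 3.6)
```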

[0004] The intelligibility of a signal of interest may further diminish when environmental noise and reverberation are present in the background of the hearing-impaired listener. A hearing-impaired couple living together must be within a specific range to hear each other, which makes communication challenging even when they are in close proximity.

[0005] Further, conventional hearing aids are not effective for communication when a couple or a group of people are travelling and need to be in continuous communication with each other.

[0006] Hearing aids with Bluetooth connectivity are known, and can pair with a Bluetooth telephone. Hearing aids have limited or absent user interfaces; therefore, from a user perspective, the Bluetooth-compatible hearing aid must simply pair with the phone and automatically answer and terminate calls.

[0007] A similar problem may be faced by a group of bikers travelling together who communicate with each other while riding their motorcycles and wearing helmets. The group of bikers may require continuous communication to give directions to each other or provide essential information about their well-being in order to avoid accidents. The helmets may prevent the bikers from communicating easily with each other. Conventional intercoms on the market that allow communication while wearing a helmet provide both short-range intercom and connectivity to Bluetooth host devices such as smartphones. See, www.fodsports.com/fx6-intercom/

[0008] US 20180035207 describes a hearing aid headphone intercom system which includes one primary headphone, at least one secondary headphone and a terminal device. The primary headphone is disposed with a first Bluetooth module, a first ISM (Industrial Scientific Medical) module, a first microphone set, and a first wired connection module. The secondary headphone is arranged with a second Bluetooth module, a second ISM (Industrial Scientific Medical) module, and a second microphone set. The secondary headphone is connected to the primary headphone by the second ISM module, while the terminal device is connected to either the primary headphone or the secondary headphone by a third Bluetooth module or an input cable. Audio signals received by the primary headphone are shared with the secondary headphone. Users of the primary headphone and the secondary headphone can interact with each other.

[0009] US 9,712,662 discloses a method of extending an intercom communication range in which, during the pairing process between a first headset communication device and a second headset communication device, cellular-related parameters are provided and stored in a memory module of each headset communication device. The parameters include common required Bluetooth parameters, with the cellular-related parameters of the first headset including the cellular phone number associated with the second headset, and vice versa. An intercom communication between both headsets is established via a Bluetooth channel, and in case of intercom communication loss at said Bluetooth channel or an unavailable Bluetooth link during said intercom communication, the intercom communication is temporarily routed to an alternate cellular communication channel by initiating a cellular call using the stored cellular phone number of the second headset.

[0010] US 20150201060 describes a wireless Bluetooth apparatus with intercom and broadcasting functions, and an associated operating method, in which the wireless Bluetooth apparatus receives a voice message from a first user, divides it into a first partial voice message and a second partial voice message, and transmits them to a first electronic apparatus corresponding to the wireless Bluetooth apparatus through wireless Bluetooth technology. The first electronic apparatus uses a communication application program to transmit the second partial voice message to a second electronic apparatus corresponding to a second user corresponding to the first partial voice message. The second electronic apparatus uses the communication application program to receive the second partial voice message and transmits the second partial voice message to another wireless Bluetooth apparatus corresponding to the second electronic apparatus through wireless Bluetooth technology. Voice recognition is discussed. See also US 9,363,358; and TW 1519087.

[0011] US 20150148100 describes an intercom handset for cellular phones and smartphones with an e.m.f. shield having a ferritic layer for shielding against the electromagnetic field (e.m.f.) emitted by mobile phone devices. The intercom handset can be rapidly inserted onto the cellular phone and used directly connected to the latter by means of an extensible and retractable cable contained inside its body, or by means of a Bluetooth connection. The intercom handset can also be used separated from the phone, by means of said extensible and retractable cable, or by means of a wireless connection using the built-in Bluetooth module, whose battery is charged by an inductive load system using the radio transmission energy of the cellular phone or smartphone, normally dispersed in the air during the periodic search for the cellular transmission network and during the telephone conversation.

[0012] US 2018/0054683 discusses a body-worn hearing system comprising a hearing device, e.g. a hearing aid, and a separate microphone unit for picking up a voice of the user. The hearing device comprises a forward path comprising an input unit for providing an electric input signal representative of sound in the environment, a signal processing unit for providing a processed signal, and an output unit for generating stimuli perceivable as sound when presented to the user based on said processed signal. The microphone unit comprises a multitude M of microphones, and a multi-input noise reduction system for providing an estimate S of a target signal s comprising the user's voice, and comprising a multi-input beamformer filtering unit operationally coupled to said multitude of microphones. The hearing device and the microphone unit are configured to receive and transmit an audio signal from/to a communication device, respectively, and for establishing a communication link between them for exchanging information. The hearing system comprises a control unit configured to estimate a current distance between the user's mouth and the microphone unit, and to control the multi-input noise reduction system in dependence of said distance.

[0013] WO2019/114531 provides a Bluetooth chip having a hearing aid function, and a Bluetooth headset. The Bluetooth chip comprises a chip main body, an audio interface module being integrated in the chip main body, the audio interface module comprising an audio input unit and an audio output unit, the audio input unit being connected to a microphone, and the audio output unit being connected to a receiver, characterized in that the chip main body is further internally integrated with a hearing aid module connected to the audio interface module, the hearing aid module receiving an audio signal inputted by the audio input unit, performing hearing aid processing, and then returning same to the audio output unit. The present invention solves the drawbacks in the prior art that a Bluetooth chip and a hearing-aid chip of a Bluetooth hearing-aid headset need to work independently or to work by means of an app on a mobile phone.

[0014] US 9,520,042 describes a smoke detector with enhanced audio and communications capabilities that allow audio content to be provided at each smoke detector location. This audio content may be music, intercom, doorbell actuation and radio programs. The smoke detector may also include a microphone for monitoring and two-way communications between two or more smoke detectors, an intercom panel at a doorbell location, controlling lights in an area of the smoke detector with voice commands, and further providing speakerphone answering and communications capabilities. Audio content and control may be provided to the smoke detector with a software program application running on a personal computer, tablet computer or smart cell phone. A smoke detector may further be controlled with a Bluetooth or infrared handheld controller located in an area proximate to the smoke detector.

[0015] US 10,242,565 describes examples of systems and methods of wireless remote control of appliances using a hearing device, for example upon manual activation of a switch placed in the concha cavity behind the tragus. In some examples, the hearing device includes one or more manually activated switches, a wireless antenna, and a battery cell. In some examples, the wireless electronics include low energy Bluetooth capability. The appliance may be any device with wireless control capability, for example an electronic lock, a thermostat, electronic lighting, a telephone, a kitchen appliance, a medical alert system, a television, a medical device, and smart glass.

[0016] US 20100285750 describes a wireless audio stereo and intercom system in which standard Bluetooth wireless audio features are extended to provide full wireless stereo headset capabilities while maintaining backward compatibility with standard Bluetooth devices. The system is a full duplex, high fidelity, low latency, two-way digital wireless audio headset with microphone intercom communication system that deploys custom programmed Bluetooth radio transceiver devices.

[0017] US 20060244825 describes a multi-function communication and navigation system including a GPS and MCU module, a memory module, a sensor module, a power module, a HUD module, an audio input/output module and a Bluetooth module. The system can be a recorder for recording relevant mobile information in the memory module. The HUD module can project relevant information such as velocity, direction, time, date etc. onto a display area. Moreover, the communication message can be transformed into audible sounds for broadcasting, so that the system can be a wireless intercom device for communicating with other systems. Moreover, when a sensor module of the system receives a vibrating message such as a collision message, the system can automatically dial an emergency number and provide relevant location information via the Bluetooth module.

[0018] US 10,284,703 discloses a portable full duplex intercom system using Bluetooth, and in particular a portable full duplex intercom system using Bluetooth that may be used in a vehicle to allow the people in the vehicle to communicate with each other.

[0019] US 6,405,027 discloses a mobile communication handset which is configured for communicating by a cellular or PCS or cordless call over a first wireless link to a base station, and is also configured for communicating with one or more other communication devices over other respective wireless direct device-to-device second links implemented by the Bluetooth Intercom Profile, is provided with functionality to carry on a group call by appropriately combining speech signals carried by the various links and by the handset. The first wireless link may be a cordless link which is implemented by the Bluetooth Telephony Profile. Alternatively, the first link may also be configured as wireless direct device-to-device link, in which case all wireless links involved in the group call are implemented by the Bluetooth Intercom Profile.

[0020] CN 110708615 discloses an intercom system and an intercom method realized based on TWS earphones. The system comprises a mobile terminal and the TWS earphones; the TWS earphones comprise a left earphone and a right earphone, each provided with a first Bluetooth module, a loudspeaker, a microphone and a key; the mobile terminal is provided with a wireless communication module, an intercom object setting module, a noise reduction module and a main control module, and the wireless communication module comprises a second Bluetooth module. The method comprises the following steps: 1) the TWS earphone is connected with the mobile terminal through Bluetooth; 2) a target earphone is set through the talkback object setting module; 3) a key is pressed, and recording request information is sent to the mobile terminal; 4) a main control module of the mobile terminal determines the sending earphone; 5) a microphone records to obtain a first audio; 6) the noise reduction module performs noise reduction processing on the first audio to obtain a second audio; 7) the target earpiece plays the first audio and the second audio. The invention can reduce the cost of talkback and improve the quality of talkback. Voice recognition at a server is disclosed.

[0021] US 10,028,357 discloses a networked light for illumination and an intercom for communications in a single housing, with hands-free voice command and control. The system is in a housing configured as a conventional-looking lamp, bulb, fixture, or lighting device, suitable for direct replacement of the conventional illuminating devices typically found in homes or buildings. A network of such voice command and control systems may be further monitored and controlled from a base station that facilitates programming, communications, and higher functionality therebetween. The system provides speech recognition for powering on and off, dimming, brightening, and adjusting the lighting to preset, night and emergency settings. The voice recognition command controls the intercom to be active and attentive to requests, connecting two or more locations within a home or building structure, for speech exchanges in communications, via radio frequency transmitting and receiving of signal messages between the individual light and intercom system devices within a network of devices.

[0022] EP-3095252 and WO 2015109002 disclose a universal wearable computing device relating to a hearing assistance system, device, method, and apparatus that provide a discreet approach to user hearing assistance, without relying on a conventional hearing aid. The hearing assistance system and the requisite electronics may be incorporated into frames that also function as eyeglasses, with earphone(s) that may be connected to the frame to assist user hearing. An earphone may be configured with minimal electronics, such that a power source enabling sound transmission to the ear is provided by a connection to the frame of the eyeglasses. In another example, the earphone is configured without any electronics and sound is transmitted to the user/listener's ear(s) via a psychoacoustic system. The sound quality of the transmissions to the earphones may be optimized using a tuning/equalizer application operating from a computing device, such as an app on a mobile device. The tuning/equalizer application can be used by the user/listener to optimize volume input levels to the earphone(s).

[0023] The Dialog Semiconductor DA14585 provides a Bluetooth® low energy 5.0 SoC with an audio interface for use, e.g., in remote controls.

[0024] The Qualcomm QCS403, QCS404, QCS405 and QCS407 provide Bluetooth processors with audio functions (and, in advanced versions, video), including key-phrase detection.

[0025] The TI CC13x0 and CC26x0 SimpleLink™ Wireless MCUs, e.g., the CC2640R2L, provide an ARM Cortex-M processor with a Bluetooth radio.

[0026] The Ambiq Apollo 2, 3, and 4 (and plus versions), e.g., the Apollo 3 Blue SoC, provide Bluetooth functionality with always-on keyword spotting, or “Voice-on-SPOT” (VOS). The Vesper VM3011 is a MEMS microphone which draws 10 µA when in “wake on sound” mode. The processor is an ARM Cortex M4 with FPU and BLE 5. The core hardware for a voice command interface consists of a microphone array and a processor that can receive and interpret the audio signals from the microphones. Depending on the type of device, various other components may be needed, such as a wireless interface for Bluetooth Low Energy or Wi-Fi, plus speakers, amplifiers, LEDs, and displays to provide user feedback. See, Paul Beckmann, Aaron Grassian, Matt Crowley, White Paper “Advancing Always-On Voice Command Systems with Ultra-Low Power Solutions,” Ambiq (2020).
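The keyword spotting referenced above can be illustrated with a deliberately simplified sketch: normalized cross-correlation against stored per-keyword feature templates stands in for the learned filters of a small always-on convolutional network. All names and templates below are hypothetical illustrations, not any vendor's API:

```python
import numpy as np

def ncc_peak(x: np.ndarray, w: np.ndarray) -> float:
    """Peak normalized cross-correlation of template w slid over signal x.

    Returns a score in [-1, 1]; 1.0 means an exact (scaled) match somewhere in x.
    """
    w = w - w.mean()
    w = w / np.linalg.norm(w)
    best = -1.0
    for i in range(len(x) - len(w) + 1):
        seg = x[i:i + len(w)]
        seg = seg - seg.mean()
        n = np.linalg.norm(seg)
        if n > 0:  # skip silent (all-equal) windows
            best = max(best, float(np.dot(seg, w)) / n)
    return best

def spot_keyword(features, templates, threshold=0.9):
    """Return the best-matching keyword name, or None if nothing clears threshold.

    features:  1-D array of extracted audio features for a window of speech
    templates: dict mapping keyword name -> 1-D feature template (a stand-in
               for the learned filters of a small keyword-spotting CNN)
    """
    best_name, best_score = None, threshold
    for name, w in templates.items():
        score = ncc_peak(features, w)
        if score > best_score:
            best_name, best_score = name, score
    return best_name
```

In the device described here, the spotted name would then select the Bluetooth counterparty for the intercom link; a production implementation would replace the templates with a quantized CNN running on the always-on core.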

[0027] Although it is possible to use a single microphone in a voice command product, most such products use a beamforming array of two to seven microphones to better isolate the speaker from ambient noise. The array allows the audio processor to focus the pickup pattern of the microphones on the user's voice, which improves the signal-to-noise ratio of the user's voice relative to the surrounding environmental noise. However, the demands placed by the form factors of portable and battery-powered products present many challenges not found in products designed to be plugged in.

[0028] The DSP Concepts white paper, "Designing Optimized Microphone Beamformers," found that achieving the best possible signal-to-noise ratio is critical to the accuracy and reliability of a voice command product. The white paper also found that performance could be helped by using microphones with a tighter sensitivity tolerance of ±1 dB, rather than the more typical ±3 dB. Since each microphone in an array may be in a different acoustical environment due to the product's physical design, it is better to match microphone sensitivity in the processor rather than in the microphones themselves. DSP Concepts' research demonstrated that increasing the number of microphones improves voice user interface (UI) reliability, and that the more closely matched the sensitivity of the microphones, the better the performance of the beamformer. The most practical way to achieve this is to balance the microphone sensitivity in the hardware after the microphones are installed. This way, the sensitivity adjustment compensates not only for the differing gain of the microphones, typically specified to a precision of ±3 dB, but also for the acoustical effects of the enclosure on the microphones. However, few portable products, and almost no wearables, have the space for such an array.
True wireless earphones, for example, typically have room for only two microphones in each earpiece, with available spacing of only 10 to 20 mm between the microphone pair. Also, the processing power required for such an array may be beyond the capabilities of the relatively small processors used in most portable devices. Therefore, software algorithms that perform beamforming and other voice UI optimization functions must be capable of being optimized for two, or at most three, microphones.

[0029] A piezoelectric microphone element in the VM3011 is monitored by a very low-power analog circuit, which monitors and tracks the ambient sound level to activate the system only when a sound is detected above the background noise. A WoS (Wake On Sound) threshold allows the VM3011 to be optimized for the best performance in a variety of noise environments. A passband filter helps to narrow the selection of specific sounds of interest, between 200 Hz and 8 kHz, to better pick up human voices and reject environmental noises such as machinery rumble and wind noise. This circuit is known as "Adaptive ZPL". A single piezoelectric microphone can trigger the microphone array, audio processing circuitry, and Internet connection (if applicable) of a voice command product.
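The per-microphone sensitivity matching described in paragraph [0028], in which gain is balanced in hardware after the microphones are installed, can be sketched as follows. This is an illustrative Python sketch only; the function names and the reference level are assumptions, not part of any cited product.

```python
def calibration_gains_db(measured_levels_db, reference_db=-26.0):
    """Per-microphone gain trims (dB) aligning each installed mic's
    measured sensitivity to a common reference, compensating both the
    part tolerance (typically +/-3 dB) and enclosure acoustics."""
    return [reference_db - m for m in measured_levels_db]

def apply_gain(samples, gain_db):
    """Scale a block of samples by a gain expressed in dB."""
    g = 10.0 ** (gain_db / 20.0)
    return [s * g for s in samples]

# Example: three mics measured in the final enclosure (dBFS at a test tone).
measured = [-24.1, -26.9, -25.0]
trims = calibration_gains_db(measured)  # per-channel trims in dB
```

Applying each trim to its channel equalizes the array before beamforming, which is the point of matching sensitivity in the processor rather than in the microphones.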

[0030] In any voice-command product, the audio processor, whether a dedicated Digital Signal Processor (DSP) or a processing core within an SoC, must have the necessary computational capability to process the signals from all of the microphones in an array, and to run all of the algorithms necessary for voice recognition. The more advanced the algorithms and the more microphones the chip can accommodate, the better the signal-to-noise ratio, and the more accurate the voice recognition. In portable and battery-powered products, however, the processor must also consume as little power as possible to maintain adequate battery life in the product.

[0031] One processor line explicitly designed to handle substantial audio processing tasks in battery-powered products with small form factors is Ambiq's Apollo line. These microcontroller units (MCUs) and systems-on-a-chip (SoCs) are designed using Ambiq's SPOT (Subthreshold Power Optimized Technology) platform, which allows them to run on less than one-tenth of the current of a typical audio processor.

[0032] Ambiq's SPOT-based Apollo2 is a 48 MHz Arm® Cortex®-M4F-based MCU focused on sensor and voice processing that consumes only 10 µA/MHz. Apollo2 Blue is available with a Bluetooth Low Energy channel for voice assistants. Apollo3 Blue further lowers power to 6 µA/MHz and increases frequency to 96 MHz. Its Bluetooth Low Energy radio is 5.0 compliant. The Apollo2, Apollo2 Blue, Apollo3 Blue, and Apollo3 Blue Plus processors are capable of handling signals from multi-microphone arrays using DSP Concepts' Voice UI algorithms, making them appropriate for ultra-low-power hearable, wearable, remote control, and other mobile applications. All of these processors have the compact size needed for products such as bands, smartwatches, and earbuds, and they measure from 2.5 mm to 5.3 mm square depending on the package.

[0033] Beyond the microphone array and audio processor, a voice command product includes additional components. Specific components depend on the application and form factor, but there are a few that almost every voice command product uses. As with the microphones and processors, these components must be chosen not only for their functions and performance, but also for their small size and low power consumption.

[0034] Voice command products typically send and receive data from external servers (by accessing the Internet) to offer more than the most basic capabilities. Smart speakers designed for home use connect through Wi-Fi to a LAN. With portable voice-command products, the connection occurs through Bluetooth or BLE (Bluetooth Low Energy) to a smartphone or tablet, which then connects to the Internet through a cellular data network or Wi-Fi.

[0035] Most voice command products incorporate some form of user feedback to confirm that the device is active, that it heard and understood the user's command correctly, and that it carried out the desired action. These indicators can be LEDs, such as the flashing lights atop the Amazon Echo and Google Home smart speakers. They can also be alphanumeric or graphical displays, which may be found on many remotes and home automation wall panels.

[0036] These devices typically have audio feedback as well, which may confirm the user's command through alert tones or voice synthesis, yet another load placed on the processor. The unit employs an amplifier and a speaker of some sort to reproduce the voice, and sometimes alert tones. Some products may even use multiple drivers with a beamforming algorithm to direct the response back at the listener.

[0037] There are many different algorithms at work in always-on voice command products, all of which must be tuned to suit the product's design and application. These algorithms must listen for the wake word 24/7/365, isolate the user's voice from the surrounding noise when a voice is detected, and then produce a clean signal for the wake-word detection engine to recognize the wake word reliably. Typically, a Vesper Adaptive ZPL circuit monitors the signal from a single microphone. When the signal level exceeds a certain threshold, such as when a user speaks the wake word, a comparator sends a command to power up the rest of the system. This function is critical in portable products because it allows the other components to be shut down to save power. The wake-up must also occur quickly so that the system can receive the wake word. For example, the Vesper microphone wakes up within 200 µs, much less than the time it takes to utter the first letter in any keyword. Therefore, no audio buffer is needed.
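The threshold-based wake-up behaviour described above can be illustrated with a minimal sketch. The block size, RMS detector, and function names here are illustrative assumptions, not the Vesper implementation.

```python
def rms(block):
    """Root-mean-square level of one block of audio samples."""
    return (sum(s * s for s in block) / len(block)) ** 0.5

def wake_on_sound(blocks, threshold):
    """Return the index of the first block whose RMS level crosses the
    threshold (the moment the rest of the system is powered up), or
    None if the device stays asleep."""
    for i, block in enumerate(blocks):
        if rms(block) > threshold:
            return i
    return None

quiet = [0.01] * 64   # background noise
loud = [0.5] * 64     # user begins speaking
```

Only this detector runs while the system sleeps; everything downstream (array, processor, radio) is powered on when it fires.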

[0038] To implement voice detection in noisy environments, such as a typical household, Vesper has designed "Adaptive ZPL." The user can program this ultra-low-power analog circuit through I2C. The Adaptive ZPL configuration can be changed on the fly, and it is easy to use, integrating with any application processor with available PDM and I2C interfaces. The Adaptive ZPL circuit filters out unwanted sounds or noise outside of the user-programmed audio band filter. Meanwhile, it latches a DOUT pin only when a sound is detected above the user-programmed WoS threshold. In the Adaptive ZPL, the WoS threshold continuously tracks the RMS background noise with a refresh rate of 0.5 s/1 s (also user programmable), which drastically reduces false rejects.
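The adaptive behaviour, in which the WoS threshold follows the RMS background noise, can be approximated in a few lines. This is an illustrative sketch; the smoothing constant and margin are assumed parameters, not Vesper's.

```python
def adaptive_threshold(noise_rms_history, margin=2.0, alpha=0.5):
    """Track the RMS background noise with an exponential moving
    average (the periodic 'refresh') and place the wake threshold a
    fixed margin above the tracked floor, so steady background noise
    does not trigger the device while speech above the floor does."""
    floor = noise_rms_history[0]
    for r in noise_rms_history[1:]:
        floor = (1.0 - alpha) * floor + alpha * r
    return floor * margin
```

Because the threshold falls when the room gets quieter, quiet speech still triggers the device, which is how tracking the noise floor reduces false rejects.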

[0039] Because voice command products use multiple microphones, the primary factors in microphone selection for these products are usually size, cost, and quality. However, in portable and battery-powered products, lowering the system power consumption becomes essential. For a microphone array to focus on a user's voice, it must first determine where the user is relative to the product. The microphone array must also include precedence logic that rejects reflections of the user's voice from nearby objects. It must also adjust its operating threshold to compensate for ambient noise level, so environmental noise does not create false directional cues. Determining the direction of arrival may not be necessary for products such as earphones, in which the physical position of the user's mouth relative to the microphone array is already known.

[0040] A microphone array can process the signals from multiple microphones so that the array becomes directional. It accepts sounds coming from the determined direction of arrival while rejecting sounds coming from other directions. With some products, such as earphones and automotive audio systems, the direction of the user's voice relative to the microphone array is known, so the beamformer's direction may be permanently fixed. In devices such as smart speakers, remote controls, and home automation wall panels, the beamformer's desired direction of focus has to be determined, and the response of the array adjusts to focus in the direction of the user.
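A fixed-direction beamformer of the kind usable in earphones, where the direction of arrival is known, can be sketched as a simple delay-and-sum over whole-sample delays. This is illustrative only; production beamformers use fractional delays and adaptive weighting.

```python
def delay_and_sum(channels, delays):
    """Steer a microphone array by delaying each channel (in whole
    samples) so a wavefront from the look direction adds coherently,
    then averaging; off-axis sounds add incoherently and are
    attenuated."""
    n = min(len(ch) - d for ch, d in zip(channels, delays))
    out = []
    for i in range(n):
        out.append(sum(ch[i + d] for ch, d in zip(channels, delays))
                   / len(channels))
    return out

# Example: an impulse reaching the second mic one sample later than
# the first; delays [0, 1] align the two channels.
aligned = delay_and_sum([[0, 1, 0, 0], [0, 0, 1, 0]], [0, 1])
```

Fixing the delays corresponds to the permanently fixed look direction mentioned above; devices such as smart speakers instead recompute the delays as the user moves.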

[0041] Acoustic echo canceling (AEC) rejects the sounds (such as music or announcements) coming from the device itself so that the microphone array can pick up the user's voice more clearly. Because the original signal and the response of the device's internal speaker are known, the device knows to reject that signal when it comes back through the microphone. Selecting microphones with a high overload point, and minimizing the speaker distortion in the playback path, is crucial to achieving excellent AEC performance. This, in turn, results in better music barge-in performance, particularly when playing low-frequency content at loud playback levels. DSP Concepts' stereo AEC algorithms cancel out 35 dB of echo during music barge-in, which results in high wake word detection accuracy and improved user experience.
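One common way to realize acoustic echo cancellation is an adaptive filter such as normalized LMS, which models the speaker-to-microphone path from the known playback signal and subtracts the estimate. This is a minimal sketch under stated assumptions, not the DSP Concepts algorithm; the tap count and step size are illustrative.

```python
def nlms_echo_cancel(reference, mic, taps=4, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the echo of the known
    playback (reference) signal in the microphone signal and return
    the residual, which is what remains of the near-end speech."""
    w = [0.0] * taps
    out = []
    for n in range(len(mic)):
        x = [reference[n - k] if n - k >= 0 else 0.0 for k in range(taps)]
        echo_est = sum(wi * xi for wi, xi in zip(w, x))
        e = mic[n] - echo_est                      # residual after echo removal
        norm = sum(xi * xi for xi in x) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x)]
        out.append(e)
    return out
```

When the microphone picks up only a scaled copy of the playback, the residual converges toward zero, which is the echo being cancelled.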

[0042] AEC is typically not necessary for products such as headphones and earphones because the sound coming from the product's speakers is confined, and typically not enough of it leaks out to affect the performance of the product's microphones. However, in a room, there may be natural echoes from a sound emission that can distort or make intelligibility difficult.

[0043] An Adaptive Interference Canceller (AIC) rejects interfering sounds, such as a TV playing in the living room or microwave noise in the kitchen, that are hard to cancel out with a traditional beamformer as described above. Unlike other adaptive cancellation techniques, DSP Concepts' AIC algorithm does not require a reference signal to cancel out the interfering noises. Instead, it uses a combination of beamforming, adaptive signal processing, and machine learning to cancel out 30 dB of interference noise, while also preserving the desired speech signal. AIC is necessary for products, such as remote controls and smart speakers, that are typically operated in living room environments, where there are interfering noises and moderate to high reverb conditions. For example, when a user is operating a TV remote control, AIC will be able to cancel out the TV sound from the audio stream, to present the speech signal to the wake-word detection engine as if there were no interfering noise present.

[0044] Once the system detects sound and powers up, it must record the incoming audio and compare it to a stored digital file of the wake word (such as "Alexa" for the Amazon Echo). If the waveform of the incoming audio is sufficiently close to the stored file, the device becomes receptive to voice commands. In contrast to portable products, smart home speakers only need to recognize their wake word, as they offload other voice recognition tasks to an external, Internet-connected server. The wake-word detection model typically runs locally on the device, while some service providers, such as Amazon, can enable additional wake word checks in the cloud.
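The comparison of incoming audio against a stored wake-word file can be caricatured as a simple template distance with a decision threshold. This is purely illustrative; real detectors use trained acoustic models rather than raw waveform distance, and the names and threshold here are assumptions.

```python
def template_distance(incoming, template):
    """Mean squared distance between an incoming feature sequence and
    the stored wake-word template (a stand-in for a real model)."""
    n = min(len(incoming), len(template))
    return sum((incoming[i] - template[i]) ** 2 for i in range(n)) / n

def is_wake_word(incoming, template, threshold=0.1):
    """The device becomes receptive to commands when the incoming
    audio is sufficiently close to the stored wake-word file."""
    return template_distance(incoming, template) < threshold
```

The threshold plays the same role as the tuning discussed later: too tight and valid utterances are rejected, too loose and the device false-triggers.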

[0045] The hearing aid may interface with a standard set of voice services, such as Amazon Alexa, OK Google, Microsoft Cortana, or Samsung Bixby, or implement its own proprietary speech assistant. When interfacing with a standard system, a customized skill is provided or employed to permit specific control over the hearing aid and its environment.

[0046] Because portable products cannot rely on an Internet connection as today's smart speakers do, they need to recognize a certain number of basic function commands on their own without the help of external servers. These commands are typically limited to basic functions such as play, pause, skip tracks, repeat, and answer calls. Recognition of these commands works in the same way as wake-word detection does. However, even though the command set is limited, the need for local command set recognition increases the load on the processor compared with a smart home speaker. Portable devices, such as wearable headsets, can communicate over a Bluetooth or Wi-Fi link with a mobile phone, which then performs the command processing.

[0047] The function of each of the above algorithms is complex. Each must be adjusted to suit the application, especially in portable products, where the environment and use patterns are likely to differ from those of home products. The following algorithm functions need tuning for optimum voice recognition accuracy.

[0048] The threshold levels for sound detection and wake-word detection must be set high enough to minimize false triggering of the device, but low enough that the user can address the device at an average speaking level. These wake-up thresholds also depend on the use case. For example, a remote control that is 2-3 feet from the user should be set to a lower threshold, whereas a wearable device has to be set to a higher threshold to reduce false positives. In portable products especially, it may be desirable for these levels to be adjusted dynamically so that the performance compensates for varying levels of ambient sound. The function of the dynamic compensation will itself have to be tuned.

[0049] Devices can be tuned to reject different types of noises depending on their application. For example, manufacturers know the spectrum of any given car's road and engine noise at different speeds, so the voice recognition system can be tuned to reject these sounds. The noise reduction or canceling algorithms can also function dynamically by adapting to the changing environment. However, this dynamic function also requires tuning.

[0050] The tighter the beamwidth of the beamformer, the better it rejects environmental sounds and reflections of the user's voice from other objects. However, setting the beamwidth too tight causes the unit to reject the user's voice if the user moves slightly. In products such as earphones and headphones, where the direction of arrival of the user's voice does not vary, a tight beamwidth can be set. However, in products such as remote controls and home automation panels, the beamwidth must be set wider to accommodate the movement of the user while the user is speaking.

[0051] A key goal when minimizing power consumption is to put the device to sleep as often as possible, and to keep it asleep for as long as possible. However, this goal requires trade-offs. If the device is put to sleep too quickly after use, it may miss commands that follow the wake word, and require the user to speak the wake word again, which usually leads to frustrated users. However, if the device stays awake longer than necessary, it consumes more power than it needs to.
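The sleep-timing trade-off can be captured by a small inactivity timer. This is an illustrative sketch; the class name and timeout value are assumptions.

```python
class WakeTimer:
    """Keep the device awake for a grace period after the last speech
    activity. Too short a period drops follow-on commands and forces
    the user to repeat the wake word; too long wastes battery, so the
    timeout is a tuning parameter."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.last_activity = None

    def on_activity(self, t):
        """Record the time of the most recent detected speech."""
        self.last_activity = t

    def awake(self, t):
        """True while the grace period since the last activity has
        not yet elapsed."""
        return (self.last_activity is not None
                and (t - self.last_activity) < self.timeout_s)
```

Tuning then reduces to choosing `timeout_s` for the product's use pattern, e.g. longer for a remote control that receives chained commands.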

[0052] The Voice-on-SPOT reference design combines the Vesper microphone's Wake-on-Sound capability with the Ambiq Apollo MCU running DSP Concepts' TalkTo algorithms.

[0054] Therefore, there exists a requirement for an intercom system for short-range domestic use to enable members of a family or a group of people, more specifically people with hearing impairments, to communicate effectively with each other even when present in different rooms or on different floors. Further, there exists a need for hearing aid devices providing effective intercommunication.

[0055] Bluetooth (IEEE-802.15.1) is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances using UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz, and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to 10 meters (30 feet). en.wikipedia.org/wiki/Bluetooth.

[0056] Bluetooth operates at frequencies between 2.402 and 2.480 GHz, or 2.400 and 2.4835 GHz including guard bands 2 MHz wide at the bottom end and 3.5 MHz wide at the top. This is in the globally unlicensed (but not unregulated) industrial, scientific and medical (ISM) 2.4 GHz short-range radio frequency band. Bluetooth uses a radio technology called frequency-hopping spread spectrum. Bluetooth divides transmitted data into packets, and transmits each packet on one of 79 designated Bluetooth channels. Each channel has a bandwidth of 1 MHz. It usually performs 1600 hops per second, with adaptive frequency-hopping (AFH) enabled. Bluetooth Low Energy uses 2 MHz spacing, which accommodates 40 channels.
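The channel plans just described (79 channels of 1 MHz for BR/EDR, 40 channels at 2 MHz spacing for Bluetooth Low Energy, both starting at 2402 MHz) correspond to simple frequency mappings, sketched here for illustration:

```python
def classic_channel_mhz(k):
    """Centre frequency of BR/EDR channel k: 79 channels of 1 MHz
    bandwidth, f = 2402 + k MHz for k = 0..78."""
    assert 0 <= k <= 78
    return 2402 + k

def ble_channel_mhz(k):
    """Centre frequency of a BLE channel: 40 channels at 2 MHz
    spacing, f = 2402 + 2k MHz for k = 0..39."""
    assert 0 <= k <= 39
    return 2402 + 2 * k
```

The frequency-hopping sequence then selects one of these channels for each packet, up to 1600 times per second, with AFH excluding channels that suffer interference.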

[0057] Originally, Gaussian frequency-shift keying (GFSK) modulation was the only modulation scheme available. Since the introduction of Bluetooth 2.0+EDR, π/4-DQPSK (differential quadrature phase-shift keying) and 8-DPSK modulation may also be used between compatible devices. Devices functioning with GFSK are said to be operating in basic rate (BR) mode, where an instantaneous bit rate of 1 Mbit/s is possible. The term Enhanced Data Rate (EDR) is used to describe the π/4-DQPSK (EDR2) and 8-DPSK (EDR3) schemes, giving 2 and 3 Mbit/s respectively. The combination of these (BR and EDR) modes in Bluetooth radio technology is classified as a BR/EDR radio. In 2019, Apple published an extension called HDR which supports data rates of 4 (HDR4) and 8 (HDR8) Mbit/s using π/4-DQPSK modulation on 4 MHz channels with forward error correction (FEC).

[0058] Bluetooth is a packet-based protocol with a master/slave architecture. One master may communicate with up to seven slaves in a piconet. All devices within a given piconet use the clock provided by the master as the base for packet exchange. The master clock ticks with a period of 312.5 µs, two clock ticks then make up a slot of 625 µs, and two slots make up a slot pair of 1250 µs. In the simple case of single-slot packets, the master transmits in even slots and receives in odd slots. The slave, conversely, receives in even slots and transmits in odd slots. Packets may be 1, 3, or 5 slots long, but in all cases, the master's transmission begins in even slots and the slave's in odd slots.
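The slot arithmetic described above can be expressed directly. This is an illustrative sketch assuming single-slot packets:

```python
def slot_index_at(us):
    """Slot number at a given time in microseconds: each slot is two
    312.5 us master-clock ticks, i.e. 625 us."""
    return int(us // 625)

def slot_owner(slot_index):
    """With single-slot packets, the master transmits in
    even-numbered slots and the slave in odd-numbered slots."""
    return "master" if slot_index % 2 == 0 else "slave"
```

Multi-slot (3- or 5-slot) packets simply occupy consecutive slots, but still start on the owner's parity: even for the master, odd for the slave.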

[0059] The above excludes Bluetooth Low Energy, introduced in the 4.0 specification, which uses the same spectrum but somewhat differently.

[0060] The master BR/EDR Bluetooth device can communicate with a maximum of seven devices in a piconet (an ad hoc computer network using Bluetooth technology), though not all devices reach this maximum. The devices can switch roles, by agreement, and the slave can become the master (for example, a headset initiating a connection to a phone necessarily begins as master, as the initiator of the connection, but may subsequently operate as the slave).

[0061] The Bluetooth Core Specification provides for the connection of two or more piconets to form a scatternet, in which certain devices simultaneously play the master role in one piconet and the slave role in another.

[0062] At any given time, data can be transferred between the master and one other device (except for the little-used broadcast mode). The master chooses which slave device to address; typically, it switches rapidly from one device to another in a round-robin fashion. Since it is the master that chooses which slave to address, whereas a slave is (in theory) supposed to listen in each receive slot, being a master is a lighter burden than being a slave. Being a master of seven slaves is possible; being a slave of more than one master is possible. The specification is vague as to required behavior in scatternets.

[0063] Officially Class 3 radios have a range of up to 1 meter (3 ft), Class 2, most commonly found in mobile devices, 10 meters (33 ft), and Class 1, primarily for industrial use cases, 100 meters (300 ft). Bluetooth Marketing qualifies that Class 1 range is in most cases 20-30 meters (66-98 ft), and Class 2 range 5-10 meters (16-33 ft). The actual range achieved by a given link will depend on the qualities of the devices at both ends of the link, as well as the air conditions in between, and other factors.

[0064] Most Bluetooth applications are battery-powered Class 2 devices, with little difference in range whether the other end of the link is a Class 1 or Class 2 device, as the lower-powered device tends to set the range limit. In some cases, the effective range of the data link can be extended when a Class 2 device is connecting to a Class 1 transceiver with both higher sensitivity and transmission power than a typical Class 2 device. Mostly, however, Class 1 devices have a similar sensitivity to Class 2 devices. Connecting two Class 1 devices with both high sensitivity and high power can allow ranges far in excess of the typical 100 m, depending on the throughput required by the application. Some such devices allow open field ranges of up to 1 km and beyond between two similar devices without exceeding legal emission limits.

[0065] Bluetooth and Wi-Fi (IEEE 802.11 standards) have some similar applications: setting up networks, printing, or transferring files. Wi-Fi is intended as a replacement for high-speed cabling for general local area network access in work areas or the home. This category of applications is sometimes called wireless local area networks (WLAN). Bluetooth was intended for portable equipment and its applications. The category of applications is outlined as the wireless personal area network (WPAN). Bluetooth is a replacement for cabling in various personally carried applications in any setting, and also works for fixed location applications such as smart energy functionality in the home (thermostats, etc.).

[0066] Wi-Fi and Bluetooth are to some extent complementary in their applications and usage. Wi-Fi is usually access point-centered, with an asymmetrical client-server connection with all traffic routed through the access point, while Bluetooth is usually symmetrical, between two Bluetooth devices. Bluetooth serves well in simple applications where two devices need to connect with a minimal configuration like a button press, as in headsets and speakers.

[0067] The Bluetooth SIG completed the Bluetooth Core Specification version 4.0 (called Bluetooth Smart), which was adopted as of 30 June 2010. It includes Classic Bluetooth, Bluetooth high speed and Bluetooth Low Energy (BLE) protocols. Bluetooth high speed is based on Wi-Fi, and Classic Bluetooth consists of legacy Bluetooth protocols.

[0068] Bluetooth Low Energy, previously known as Wibree, is a subset of Bluetooth v4.0 with an entirely new protocol stack for rapid build-up of simple links. As an alternative to the Bluetooth standard protocols that were introduced in Bluetooth v1.0 to v3.0, it is aimed at very low power applications powered by a coin cell. Chip designs allow for two types of implementation: dual-mode and single-mode. Compared to Classic Bluetooth, Bluetooth Low Energy is intended to provide considerably reduced power consumption and cost while maintaining a similar communication range. In terms of lengthening the battery life of Bluetooth devices, BLE represents a significant progression. In a single-mode implementation, only the low energy protocol stack is implemented. In a dual-mode implementation, Bluetooth Smart functionality is integrated into an existing Classic Bluetooth controller.

[0069] Bluetooth 5 provides, for BLE, options that can double the speed (2 Mbit/s burst) at the expense of range, or provide up to four times the range at the expense of data rate.

[0070] Bluetooth 5.3 provides high-level protocols such as SDP (used to find other Bluetooth devices within communication range, and also responsible for detecting the function of devices in range), RFCOMM (used to emulate serial port connections) and TCS (the telephony control protocol), which interact with the baseband controller through L2CAP (the Logical Link Control and Adaptation Protocol). The L2CAP protocol is responsible for the segmentation and reassembly of packets.

[0071] The hardware that makes up the Bluetooth device is logically made up of two parts, which may or may not be physically separate: a radio device, responsible for modulating and transmitting the signal, and a digital controller. The digital controller is likely a CPU, one of whose functions is to run a Link Controller, and which interfaces with the host device; but some functions may be delegated to hardware. The Link Controller is responsible for the processing of the baseband and the management of ARQ and physical layer FEC protocols. In addition, it handles the transfer functions (both asynchronous and synchronous), audio coding (e.g., SBC) and data encryption. The CPU of the device is responsible for attending to the instructions related to Bluetooth from the host device, in order to simplify its operation. To do this, the CPU runs software called the Link Manager that has the function of communicating with other devices through the LMP protocol.

[0072] Bluetooth is defined as a layer protocol architecture consisting of core protocols, cable replacement protocols, telephony control protocols, and adopted protocols. Mandatory protocols for all Bluetooth stacks are LMP, L2CAP and SDP. In addition, devices that communicate with Bluetooth almost universally can use these protocols: HCI and RFCOMM.

[0073] The Link Manager (LM) is the system that manages establishing the connection between devices. It is responsible for the establishment, authentication and configuration of the link. The Link Manager locates other managers and communicates with them via the LMP link management protocol. To perform its function as a service provider, the LM uses the services included in the Link Controller (LC). The Link Manager Protocol basically consists of several PDUs (Protocol Data Units) that are sent from one device to another.

[0074] The Host Controller Interface (HCI) provides a command interface for the controller and for the link manager, which allows access to the hardware status and control registers. This interface provides an access layer for all Bluetooth devices. The HCI layer of the machine exchanges commands and data with the HCI firmware present in the Bluetooth device. One of the most important HCI tasks is the automatic discovery of other Bluetooth devices that are within the coverage radius.

[0075] The Logical Link Control and Adaptation Protocol (L2CAP) is used to multiplex multiple logical connections between two devices using different higher level protocols. It provides segmentation and reassembly of on-air packets. In Basic mode, L2CAP provides packets with a payload configurable up to 64 kB, with 672 bytes as the default MTU, and 48 bytes as the minimum mandatory supported MTU. In Retransmission and Flow Control modes, L2CAP can be configured either for isochronous data or reliable data per channel by performing retransmissions and CRC checks. Bluetooth Core Specification Addendum 1 adds two additional L2CAP modes to the core specification. Streaming Mode (SM) is a very simple mode, with no retransmission or flow control; this mode provides an unreliable L2CAP channel. Reliability in any of these modes is optionally and/or additionally guaranteed by the lower layer Bluetooth BR/EDR air interface by configuring the number of retransmissions and flush timeout (time after which the radio flushes packets). In-order sequencing is guaranteed by the lower layer. Only L2CAP channels configured in ERTM or SM may be operated over AMP logical links.
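L2CAP's segmentation and reassembly of an SDU against a negotiated MTU can be sketched as follows. This is illustrative only; the real protocol adds headers and mode-specific retransmission and CRC behaviour, which are omitted here.

```python
def segment(sdu, mtu=672):
    """Split an L2CAP SDU into fragments no larger than the
    negotiated MTU (672 bytes is the default; 48 bytes is the minimum
    a device must support)."""
    return [sdu[i:i + mtu] for i in range(0, len(sdu), mtu)]

def reassemble(fragments):
    """Reverse operation: since in-order delivery is guaranteed by
    the lower layer, concatenation recovers the original SDU."""
    return b"".join(fragments)
```

A 2560-byte SDU, for example, travels as three full 672-byte fragments plus one 544-byte remainder.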

[0076] The Service Discovery Protocol (SDP) allows a device to discover services offered by other devices, and their associated parameters. For example, when you use a mobile phone with a Bluetooth headset, the phone uses SDP to determine which Bluetooth profiles the headset can use (Headset Profile, Hands Free Profile (HFP), Advanced Audio Distribution Profile (A2DP) etc.) and the protocol multiplexer settings needed for the phone to connect to the headset using each of them. Each service is identified by a Universally Unique Identifier (UUID), with official services (Bluetooth profiles) assigned a short form UUID (16 bits rather than the full 128).
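The short-form convention follows from substituting the 16-bit assigned number into the Bluetooth Base UUID, as the following sketch illustrates:

```python
# Bluetooth Base UUID: 00000000-0000-1000-8000-00805F9B34FB.
# A 16-bit assigned UUID occupies the xxxx position of
# 0000xxxx-0000-1000-8000-00805F9B34FB.

def expand_uuid16(uuid16):
    """Expand a 16-bit assigned service UUID to its full 128-bit
    string form by substituting it into the Bluetooth Base UUID."""
    return "0000{:04X}-0000-1000-8000-00805F9B34FB".format(uuid16)
```

For example, the A2DP Audio Sink service class (assigned number 0x110B) expands to 0000110B-0000-1000-8000-00805F9B34FB.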

[0077] Radio Frequency Communications (RFCOMM) is a cable replacement protocol used for generating a virtual serial data stream. RFCOMM provides for binary data transport and emulates EIA-232 (formerly RS-232) control signals over the Bluetooth baseband layer, i.e., it is a serial port emulation. RFCOMM provides a simple, reliable data stream to the user, similar to TCP. It is used directly by many telephony related profiles as a carrier for AT commands, as well as being a transport layer for OBEX over Bluetooth. Many Bluetooth applications use RFCOMM because of its widespread support and publicly available API on most operating systems. Additionally, applications that used a serial port to communicate can be quickly ported to use RFCOMM.

[0078] The Bluetooth Network Encapsulation Protocol (BNEP) is used for transferring another protocol stack's data via an L2CAP channel. Its main purpose is the transmission of IP packets in the Personal Area Networking Profile. BNEP performs a similar function to SNAP in Wireless LAN.

[0079] The Audio/Video Control Transport Protocol (AVCTP) is used by the remote control profile to transfer AV/C commands over an L2CAP channel. The music control buttons on a stereo headset use this protocol to control the music player.

[0080] The Audio/Video Distribution Transport Protocol (AVDTP) is used by the advanced audio distribution (A2DP) profile to stream music to stereo headsets over an L2CAP channel, and is also intended for the video distribution profile in Bluetooth transmission.

[0081] The Telephony Control Protocol - Binary (TCS BIN) is the bit-oriented protocol that defines the call control signaling for the establishment of voice and data calls between Bluetooth devices. Additionally, "TCS BIN defines mobility management procedures for handling groups of Bluetooth TCS devices."

[0082] Depending on packet type, individual packets may be protected by error correction, either 1/3 rate forward error correction (FEC) or 2/3 rate. In addition, packets with CRC will be retransmitted until acknowledged by automatic repeat request (ARQ).

[0083] Any Bluetooth device in discoverable mode transmits the following information on demand: Device name, Device class, List of services, and Technical information (for example: device features, manufacturer, Bluetooth specification used, clock offset). Any device may perform an inquiry to find other devices to connect to, and any device can be configured to respond to such inquiries. However, if the device trying to connect knows the address of the device, it always responds to direct connection requests and transmits the information shown in the list above if requested. Use of a device's services may require pairing or acceptance by its owner, but the connection itself can be initiated by any device and held until it goes out of range. Some devices can be connected to only one device at a time, and connecting to them prevents them from connecting to other devices and appearing in inquiries until they disconnect from the other device. Every device has a unique 48-bit address. However, these addresses are generally not shown in inquiries. Instead, friendly Bluetooth names are used, which can be set by the user. This name appears when another user scans for devices and in lists of paired devices.

[0084] Many services offered over Bluetooth can expose private data or let a connecting party control the Bluetooth device. Security reasons make it necessary to recognize specific devices, and thus enable control over which devices can connect to a given Bluetooth device. At the same time, it is useful for Bluetooth devices to be able to establish a connection without user intervention (for example, as soon as in range). To resolve this conflict, Bluetooth uses a process called bonding, and a bond is generated through a process called pairing. The pairing process is triggered either by a specific request from a user to generate a bond (for example, the user explicitly requests to "Add a Bluetooth device"), or it is triggered automatically when connecting to a service where (for the first time) the identity of a device is required for security purposes. These two cases are referred to as dedicated bonding and general bonding respectively.

[0085] Pairing often involves some level of user interaction. This user interaction confirms the identity of the devices. When pairing completes, a bond forms between the two devices, enabling those two devices to connect in the future without repeating the pairing process to confirm device identities. When desired, the user can remove the bonding relationship. During pairing, the two devices establish a relationship by creating a shared secret known as a link key. If both devices store the same link key, they are said to be paired or bonded. A device that wants to communicate only with a bonded device can cryptographically authenticate the identity of the other device, ensuring it is the same device it previously paired with. Once a link key is generated, an authenticated Asynchronous Connection-Less (ACL) link between the devices may be encrypted to protect exchanged data against eavesdropping. Users can delete link keys from either device, which removes the bond between the devices, so it is possible for one device to have a stored link key for a device it is no longer paired with. Bluetooth services generally require either encryption or authentication and as such require pairing before they let a remote device connect. Some services, such as the Object Push Profile, elect not to explicitly require authentication or encryption so that pairing does not interfere with the user experience associated with the service use-cases. Pairing mechanisms changed significantly with the introduction of Secure Simple Pairing in Bluetooth v2.1. The following summarizes the pairing mechanisms:

[0086] Legacy pairing is the only method available in Bluetooth v2.0 and before. Each device must enter a PIN code; pairing is only successful if both devices enter the same PIN code. Any 16-byte UTF-8 string may be used as a PIN code; however, not all devices may be capable of entering all possible PIN codes. Limited input devices may have a fixed PIN, for example "0000" or "1234", that are hard-coded into the device.

[0087] Secure Simple Pairing (SSP) is required by Bluetooth v2.1, although a Bluetooth v2.1 device may only use legacy pairing to interoperate with a v2.0 or earlier device. Secure Simple Pairing uses a form of public-key cryptography, and some types can help protect against man-in-the-middle (MITM) attacks. SSP has the following authentication mechanisms:

[0088] “Just works” provides pairing with no user interaction. However, a device may prompt the user to confirm the pairing process. This method is typically used by headsets with minimal IO capabilities, and is more secure than the fixed PIN mechanism this limited set of devices uses for legacy pairing. This method provides no man-in-the-middle (MITM) protection.

[0089] If both devices have a display, and at least one can accept a binary yes/no user input, they may use Numeric Comparison. This method displays a 6-digit numeric code on each device. The user should compare the numbers to ensure they are identical. If the comparison succeeds, the user(s) should confirm pairing on the device(s) that can accept an input. This method provides MITM protection, assuming the user confirms on both devices and actually performs the comparison properly.

[0090] Passkey Entry may be used between a device with a display and a device with numeric keypad entry (such as a keyboard), or two devices with numeric keypad entry. In the first case, the display presents a 6-digit numeric code to the user, who then enters the code on the keypad. In the second case, the user of each device enters the same 6-digit number. Both of these cases provide MITM protection.

[0091] The Out of band (OOB) method uses an external means of communication, such as near-field communication (NFC), to exchange some information used in the pairing process. Pairing is completed using the Bluetooth radio, but requires information from the OOB mechanism. This provides only the level of MITM protection that is present in the OOB mechanism.

[0092] SSP is considered simple for the following reasons: In most cases, it does not require a user to generate a passkey. For use cases not requiring MITM protection, user interaction can be eliminated. For numeric comparison, MITM protection can be achieved with a simple equality comparison by the user.

[0093] Using OOB with NFC enables pairing when devices simply get close, rather than requiring a lengthy discovery process.

[0094] Bluetooth uses the radio frequency spectrum in the 2.402 GHz to 2.480 GHz range, which is non-ionizing radiation, of similar bandwidth to the one used by wireless and mobile phones. No specific harm has been demonstrated, even though wireless transmission has been included by IARC in the possible carcinogen list. Maximum power output from a Bluetooth radio is 100 mW for class 1, 2.5 mW for class 2, and 1 mW for class 3 devices.

[0095] Bluetooth Mesh is a computer mesh networking standard based on Bluetooth Low Energy that allows for many- to-many communication over Bluetooth radio. The Bluetooth Mesh specifications were defined in the Mesh Profile and Mesh Model specifications by the Bluetooth Special Interest Group (Bluetooth SIG). Bluetooth Mesh was conceived in 2014 and adopted on July 13, 2017.

[0096] Bluetooth Mesh is a mesh networking standard that operates on a flood network principle. It is based on nodes relaying messages: every relay node that receives a network packet that authenticates against a known network key, that is not in the message cache, and that has a TTL >= 2 may retransmit it with TTL = TTL - 1. Message caching is used to prevent relaying messages recently seen. en.wikipedia.org/wiki/Bluetooth_mesh_networking.
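The relay rule above can be sketched as a simple decision function. This is a simplified illustration, not the specification's packet format; the packet fields, cache type, and function names are assumptions:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class MeshPacket:
    msg_id: int       # identifier checked against the message cache (illustrative)
    ttl: int          # time-to-live carried in the network PDU
    net_key_ok: bool  # stands in for authentication against a known network key

def maybe_relay(pkt: MeshPacket, cache: set):
    """Return the packet to retransmit, or None if it must not be relayed."""
    if not pkt.net_key_ok:      # does not authenticate: drop
        return None
    if pkt.msg_id in cache:     # recently seen: drop to avoid re-flooding
        return None
    cache.add(pkt.msg_id)
    if pkt.ttl < 2:             # TTL exhausted: deliver locally only, do not relay
        return None
    return replace(pkt, ttl=pkt.ttl - 1)  # retransmit with decremented TTL
```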

[0097] Communication is carried in messages that may be up to 384 bytes long when using the Segmentation and Reassembly (SAR) mechanism, but most messages fit in a single 11-byte segment. Each message starts with an opcode, which may be a single byte (for special messages), 2 bytes (for standard messages), or 3 bytes (for vendor-specific messages).
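The Mesh Profile encodes the opcode length in the most significant bits of the first octet, so a decoder can determine the opcode size before parsing the rest of the message. A minimal sketch (the function name is illustrative):

```python
def opcode_length(first_octet: int) -> int:
    """Opcode size from the high bits of the first octet, per the Mesh Profile:
    0b0xxxxxxx -> 1-byte opcode (special messages)
    0b10xxxxxx -> 2-byte opcode (standard messages)
    0b11xxxxxx -> 3-byte opcode (vendor-specific messages)"""
    if first_octet & 0x80 == 0:
        return 1
    return 2 if (first_octet & 0x40) == 0 else 3
```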

[0098] Every message has a source and a destination address, determining which devices process messages. Devices publish messages to destinations, which can be single things / groups of things / everything. Each message has a sequence number that protects the network against replay attacks. Each message is encrypted and authenticated. Two keys are used to secure messages: (1) network keys, allocated to a single mesh network, and (2) application keys, specific to a given application functionality, e.g., turning the light on vs. reconfiguring the light. Messages have a time to live (TTL). Each time a message is received and retransmitted, the TTL is decremented, which limits the number of "hops" and eliminates endless loops.
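The sequence-number replay protection can be sketched as a per-source monotonicity check. This is a simplification of the mesh replay-protection list; the class and method names are illustrative:

```python
class ReplayGuard:
    """Reject messages whose sequence number does not exceed the highest
    sequence number already accepted from the same source address."""
    def __init__(self):
        self._last_seq = {}  # source address -> highest accepted sequence number

    def accept(self, src: int, seq: int) -> bool:
        if seq <= self._last_seq.get(src, -1):
            return False  # replayed or stale message: reject
        self._last_seq[src] = seq
        return True
```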

[0099] Bluetooth Mesh has a layered architecture, with multiple layers as discussed below.

[0100] The Model Layer defines a standard way to exchange application specific messages. For example, a Light Lightness Model defines an interoperable way to control lightness. There are mandatory models, called Foundation Models, defining states and messages needed to manage a mesh network.

[0101] The Access Layer defines a mechanism to ensure that data is transmitted and received in the right context of a model and its associated application keys.

[0102] The Upper Transport Layer defines authenticated encryption of access layer packets using an application (or device-specific) key. It also defines some control messages to manage Friendship or to report the behavior of a node using Heartbeat messages.

[0103] The Lower Transport Layer defines reliable (through a Block Acknowledgement) segmented transmission of upper-layer packets, used when a complete upper-layer packet cannot be carried in a single network-layer packet. It also defines a mechanism to reassemble segments on the receiver.

[0104] The Network Layer defines how transport packets are addressed over the network to one or more nodes. It defines relay functionality for forwarding messages by a relay node to extend the range. It handles network-layer authenticated encryption using the network key.

[0105] The Bearer Layer defines how the network packets are exchanged between nodes. The Mesh Profile Specification defines the BLE advertising bearer and the BLE GATT bearer. The Mesh Profile also defines the Proxy Protocol, through which mesh packets can be exchanged via other bearers like TCP/IP.

[0106] Bluetooth mesh, officially launched in July 2017, is a highly anticipated addition to the Internet of Things (IoT) connectivity space. Bluetooth is a widely used short-range technology found in smartphones, tablets and consumer electronics, and the Bluetooth Special Interest Group (SIG) has a strong reputation for delivering specifications and tools that guarantee global, multi-vendor interoperability. www.ericsson.com/en/reports-and-papers/white-papers/bluetooth-mesh-networking.

[0107] Relaying in a Bluetooth mesh network is based on a managed flooding communication model. With managed flooding, a message injected in the mesh network can potentially be forwarded by multiple relay nodes. This approach offers flexibility in deployment and operation, but has the drawback of high congestion, resulting in packet loss for contention-based access in the unlicensed spectrum. It is therefore important to determine the supported traffic and QoS of Bluetooth mesh.

[0108] Bluetooth mesh provides several ways to configure the network based on the characteristics of the deployment and the requirements of the application, and the impact of such network configurations scales with the network size and throughput. Examples of configuration options include configuration of the relay feature, use of acknowledged or unacknowledged transmissions, message repetition schemes and transmission randomization.

[0109] To support standardization, validate implementation recommendations and assess the performance of a Bluetooth mesh network comprising hundreds of devices, we carried out a full stack implementation of the Bluetooth Mesh Profile in a system-level simulator.

[0110] A capillary network is a LAN that uses short-range radio-access technologies to provide groups of devices with wide area connectivity. Capillary networks therefore extend the range of the wide area mobile networks to constrained devices. Figure 1 illustrates the Bluetooth capillary gateway concept. As a capillary radio, Bluetooth standardizes the messages and behaviors of a variety of user scenarios that require sensing and/or actuation commands for constrained nodes. The relaying of these commands over multiple hops in the mesh also enables communication between nodes that are not within direct radio reach of each other. The presence of capillary gateways such as smartphones and/or proxy nodes that support both Bluetooth and cellular connectivity in the mesh area network extends the accessibility of extremely low-power, storage and memory constrained devices into the core network and up to the cloud.

[0111] The Bluetooth Mesh Profile builds on the broadcasting of data over the Bluetooth low-energy advertising channels, as specified in the Bluetooth core specification [2]. Since the Bluetooth Mesh Profile is based on Bluetooth Core v4.0 and later versions, it works on existing devices after a simple firmware update.

[0112] All nodes are asynchronously deployed and can talk to each other directly. After provisioning them, the network simply starts working and does not require any centralized operation - no coordination is required and there is no single point of failure. A group of nodes can be efficiently addressed with a single command, making dissemination and collection of information fast and reliable.

[0113] Bluetooth mesh has several characteristics, including:

[0114] The publish/subscribe model: The exchange of data within the mesh network is described as using a publish/subscribe paradigm. Nodes that generate messages publish the messages to an address, and nodes that are interested in receiving the messages will subscribe to such an address. This allows for flexible address assignment and group casting.

[0115] Two-layer security: Messages are authenticated and encrypted using two types of security keys. A network layer key provides security for all communication within a mesh network, and an application key is used to provide confidentiality and authentication of application data sent between the intended devices. The application key makes it possible to use intermediary devices to transmit data. Messages can be authenticated for relay without enabling the intermediary devices to read or change the application data. For example, a light bulb should not be able to unlock doors, even if the unlock command needs to be routed through the light bulb to reach the lock.

[0116] Flooding with restricted relaying: Flooding is the simplest and most straightforward way to propagate messages in a network using broadcast. When a device transmits a message, that message may be received by multiple relays that in turn forward it further. Bluetooth mesh includes rules to restrict devices from re-relaying messages that they have recently seen and to prevent messages from being relayed through many hops.

[0117] Power saving with "friendship": Devices that need low-power support can associate themselves with an always-on device that stores and relays messages on their behalf, using the concept known as friendship. Friendship is a special relationship between a low-power node and one neighboring "friend" node. Friendship is first established by the low-power node; once established, the friend node performs actions that help reduce the power consumption on the low-power node. The friend node maintains a cache that stores all incoming messages addressed to the low-power node and delivers those messages to the low-power node when requested. In addition, the friend node delivers security updates to the low-power node.

[0118] Bluetooth Low Energy Proxy: Some Bluetooth devices such as smartphones may not support the advertising bearer defined by Bluetooth mesh natively. To enable those devices within the mesh network, Bluetooth Mesh Profile specifies a proxy protocol using legacy Bluetooth connectivity, over which mesh messages can be exchanged.

[0119] Seven different protocol layers have been defined, known as the bearer layer, the network layer, the lower transport layer, the upper transport layer, the access layer, the foundation model layer and the model layer.

[0120] Each layer has its own functions and responsibilities, and it provides certain services to the layer above it. Full details on the layers and their functionalities can be found in the Bluetooth Mesh specifications.

[0121] The network and transport layers are essential for network design and deployment strategies. The network layer handles aspects such as the addressing and relaying of messages, as well as network layer encryption and authentication. The lower transport layer handles segmentation and reassembly, and provides acknowledged or unacknowledged transport of messages to the peer device at the receiving end. The upper transport layer encrypts and authenticates access messages, and defines transport control procedures and messages. The latter is used to set up and manage the friendship feature, for example.

[0122] The choice of utilizing acknowledged or unacknowledged transport at the transport layer, and the selection of repetition scheme at the network layer, both represent means of controlling message delivery reliability and should be considered jointly. The reliability of the mesh network also depends on the performance of the lower layers and the utilization of the frequency resources used by the advertising channels.

[0123] All Bluetooth mesh nodes can transmit and receive messages. In addition, the nodes in a Bluetooth mesh network may support a set of optional features known as the relay, proxy, low-power and friend features. Nodes that support the relay feature can forward messages over the advertising bearer, facilitating communication between mesh nodes that are not within direct radio range. The proxy feature is used to forward messages between the advertising bearer and the GATT bearer (legacy Bluetooth Low Energy connections), so that even nodes with no support for the advertising bearer can be included in the mesh network.

[0124] The low-power feature facilitates low duty cycle operation of constrained mesh nodes. A node that makes use of the low-power feature must always be supported by a friend node, which stores incoming messages to the low-power node. Thanks to the assistance of the friend node, the low-power node can spend most of the time in sleep mode to save the battery.

[0125] With the flooding option, a message injected in the mesh network is potentially forwarded by every relay node that receives it. To avoid loops with infinite retransmissions, the Bluetooth mesh network layer introduces restrictions to the relaying of messages. The network cache method of doing this checks a received message against a cache of previously received messages, and relaying is restricted to messages that are not present in the cache. Time to live (TTL) can also be used to limit the number of times a message can be forwarded. The initial TTL value is determined by the source node, and decremented by one every time the message is relayed. Relay nodes only forward messages with a TTL value greater than one.

[0126] The Bluetooth Mesh Profile specification also allows for message repetitions at the network layer as a way of providing an appropriate level of reliability in the network. For example, when relaying a message, the network layer can be configured to send every network layer packet several times to the bearer layer below. The time between such message repetitions is configurable and should typically include a random component. As a recommended option to enhance the performance of mesh, the source repeats each message three times, whereas relays only retransmit each message once. The intuition behind this enhanced scheme is that the bottleneck of Bluetooth mesh in the considered scenario is often the first hop to inject the packet in the network. Once the neighboring relays receive the packet, there is a high chance that the packet propagates further to the intended destination due to the plurality of alternative paths in the mesh. Repetitions, however, have a negative impact in terms of advertising channel congestion for coexisting Bluetooth devices utilizing the same channels. As a general indication, the density of neighboring relays and the number of retransmissions at each relay need to be considered together to maximize network performance.
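The recommended repetition scheme (source repeats each message three times, relays retransmit once, with randomized spacing) might be sketched as follows. The interval and jitter values are illustrative assumptions, not values from the specification:

```python
import random

def repetition_schedule(is_source: bool, base_interval_ms: float = 20.0,
                        jitter_ms: float = 10.0) -> list:
    """Return randomized transmit times (ms) for one network-layer message.

    Sources repeat 3x to get past the congested first hop; relays send once.
    A random component is added to each repetition to avoid synchronized
    collisions on the shared advertising channels."""
    count = 3 if is_source else 1
    t, times = 0.0, []
    for _ in range(count):
        times.append(t + random.uniform(0.0, jitter_ms))
        t += base_interval_ms
    return times
```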

[0127] Randomization is the third option. By default, relay nodes scan the advertising channels for messages from neighboring nodes. The initial advertising channel to scan on is selected randomly (among the three advertising channels) and all channels are scanned periodically. When transmitting a message, nodes utilize advertising packets transmitted over all three advertising channels (channels 37, 38 and 39). It is recommended to add a random delay component within an advertising event. The use of a random time within the same advertising event is not explicitly defined by the Bluetooth Core v4.x specifications, and therefore it may not be implemented by some existing chipsets. However, adding a random delay is allowed if the total interval between advertising packets is shorter than 10ms. Adding a random delay decreases the probability of collisions on all channels simultaneously. For mesh, assuming that multiple receiving relays are scanning on different channels, it increases the probability that a packet is propagated in the network.

[0128] Relays should be selected only among the powered nodes. As a best practice, relay nodes can be selected among nodes deployed in open areas, with preference given to corridors, so that every node is well within coverage of at least one relay node and that the network of relay nodes is connected. It is therefore possible to find a path between an arbitrary pair of nodes. Due to the uncertain propagation conditions, it is not always possible to optimize the deployment of the relays, and some redundant designs often need to be considered.

[0129] "Bluetooth Core Specification", versions 5.0, 5.1, 5.2, 5.3 are related documentation, www.bluetooth.org.

SUMMARY OF THE INVENTION

[0130] The present invention relates generally to a hearing aid or hearing aid-like structure adapted to remedy hearing loss in humans, which in addition to performing standard hearing aid functions (e.g., amplification, equalization, voice discrimination/background suppression), supports a wireless intercom communication mode for wirelessly communicating voice between respective hearing aids, e.g., for domestic use between multiple people wearing hearing aids in close proximity. Preferably, the devices automatically communicate between members of a pair, and support various telephony operations, such as conferencing, hold, transfer, forwarding, automated answering, interactive voice response, and the like. The devices may be strictly peers, or have an asymmetric disposition. For example, one hearing aid may be a master, and perform external communication functions, while the other may act as a slave to the master and have more limited functionality (and therefore longer battery life, lower cost, lighter weight, etc.). The hearing aids may form an ad hoc network, and operate in a centralized, decentralized, or hybrid manner.

[0131] The hearing aid preferably also interfaces with typical consumer electronics, such as smart phones, to provide the benefits of hearing aid technology (digital equalization, directional microphone or microphone arrays, noise reduction, feedback suppression, and T-coil).

[0132] In one embodiment, the hearing aid device has a camera, and is configured to acquire lip images of a person speaking. Isolation of the lips may be performed in conjunction with a direction-finding microphone array or Bluetooth direction finding. By synchronizing the sounds in the environment with lip movements of the speaker, background sounds may be suppressed, and artificial intelligence employed to improve comprehensibility. Lipreading may also be of benefit when communicating with a mute participant in a conversation, when watching television with the sound unavailable (such as in a sports bar), or even in a noisy environment where the speaker cannot be heard over ambient noise.

[0133] Further, the hearing aid may include real time speech translation for multilingual conversations. In that case, the hearing aid preferably communicates with a linked smartphone which provides translation services. The smartphone may further offload the translation to a cloud service through WiFi, 4G, 5G, 6G, etc.

[0134] The hearing aid may also provide speech to text conversion, which, for example, can display on a smartphone, electronic display glasses, VR goggles, etc., or simply serve archival purposes. The transcript may be used for translation of speech, closed captioning of a video of the conversation, etc. In some cases, the transcript may be transmitted to another person, along with the spoken audio, to assist in comprehension, or simply to provide a communication alternative. Consistent with this mixed mode communication, a text-to-speech channel may be provided, to permit one participant to communicate using mainly speech and another participant to participate through mainly text.

[0135] The hearing aid may include inertial sensors, such as accelerometers and gyroscopes, in addition to magnetometers and GPS, directional Bluetooth, etc., which, especially in an environment with other similarly equipped intercommunicating hearing aids, can permit automatic switching between potential communication partners depending on orientation, distance, and activity, while isolating ambient sounds and conversations between unlikely communication partners. For example, if a number of people form groups in a room, the groups may be automatically detected, based on people facing each other within a distance range of 0-2 meters. On the other hand, persons facing away from the wearer would not typically be considered part of the conversation, even if within the distance criteria. Similarly, if a communication group is formed, a person moving in proximity would not be joined to the conversation unless they decelerate or otherwise appear to insinuate themselves into the group. Likewise, a participant in a conversation may leave the conversation by turning away from the other participant(s) and accelerating away.

[0136] In another embodiment, the hearing aid is used in a walkie talkie style radio communication system. In that case, frequency equalization to account for hearing loss is less critical, though generally, if the equalization feature is available, it may advantageously be used regardless of degree of hearing loss. Assuming that the devices are power constrained, Bluetooth is a preferred communication protocol, and class 1, 2, or 3 modes may be employed depending on desired range. Alternately, WiFi, UWB, or other protocols may be employed. A multihop mesh network may be implemented to extend effective range. An auxiliary transceiver may also be used to extend range, e.g., a goTenna, LTE, 4G, 5G, 6G, GMRS, FRS, 900 MHz ISM, DECT radio, or the like. When using an infrastructure-based communication, information routing may be according to IP address, with local address translation (LAT) as needed. For example, if each set of hearing aids is paired with a smartphone that relays communications through the cellular phone network, then in regions with good cellular coverage, no ad hoc/peer-to-peer communications are required. Further, the hearing aid may include an LTE/4G/5G/6G transceiver and directly communicate with a cellular network, alleviating the need for a companion device.

[0137] The hearing aids may support a Bluetooth communication profile linked to a cellphone, though the functions may be implemented without requiring a cellphone or other Bluetooth host device. When a Bluetooth host device is provided, the user interface for the system may reside on the cellphone, except perhaps volume controls and simple buttons. When a Bluetooth host is not provided, the hearing aids may employ a user interface based on control buttons, a touch pad interface, voice commands, or gestures (e.g., received through an optical sensor such as a camera, lidar, radar, capacitive sensor, or the like). When worn, the preferred user feedback is audible and/or tactile/proprioceptive.

[0138] According to one embodiment, a low power implementation is provided, with a sound trigger activation, which wakes the microcontroller from a sleep state in which functionality other than hearing aid sound processing is limited or absent, and upon hearing a sound, enters a wake mode able to process speech to perform keyword spotting. While the processor is awakened, the radio transceiver is turned on, and the initial communication protocol steps are performed. At the same time, the radio transceiver determines whether it is the target of an addressed communication from another radio. For example, the radio awakens every 2.5-10 seconds to determine whether there is a packet destined for it. A mesh radio architecture is preferably employed, to limit maximum transmission distances, especially in a congested environment. If a keyword is detected, the processor determines what command is requested to be performed, and takes appropriate steps. The microprocessor and radio may reenter a sleep state if no further communication is received for 2.5 seconds. The channel otherwise remains open to avoid communication latencies during a conversation.
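The wake/sleep behavior described above can be sketched as a small state machine. The 2.5-second idle timeout follows the figure in the text; the injectable clock and the class and method names are implementation conveniences assumed for illustration:

```python
import time
from enum import Enum, auto

class State(Enum):
    SLEEP = auto()   # only hearing aid sound processing active
    AWAKE = auto()   # keyword spotting and radio active

IDLE_TIMEOUT_S = 2.5  # per the text: re-enter sleep after 2.5 s of inactivity

class PowerController:
    def __init__(self, now=time.monotonic):
        self.now = now                 # clock injected for testability
        self.state = State.SLEEP
        self.last_activity = self.now()

    def on_sound_trigger(self):
        """Sound trigger wakes the microcontroller from sleep."""
        self.state = State.AWAKE
        self.last_activity = self.now()

    def on_packet(self):
        """Any received communication keeps the channel open."""
        self.last_activity = self.now()

    def tick(self) -> State:
        """Periodic check: fall back to sleep after the idle timeout."""
        if self.state is State.AWAKE and self.now() - self.last_activity >= IDLE_TIMEOUT_S:
            self.state = State.SLEEP
        return self.state
```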

[0139] The microcontroller preferably buffers sounds, so that if there is a deferred decision on a command, or for other reasons, the recorded sound may be later processed or transmitted. Therefore, a buffer, e.g., a circular buffer, may be maintained with, e.g., 30 seconds of speech stored. For example, using an adaptive vocoder sampling at 8 kHz, and 8-bit resolution, the 30 second buffer requires 240,000 bytes. Practically, the available memory may be 512 kB. This same memory space may be used for mesh network packet storage before forwarding, and other purposes. Therefore, the microcontroller performs memory management, and prioritization, to ensure that the requirements for highest priority uses are met.
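The circular speech buffer can be sketched as follows; at 8 kHz sampling and 8-bit resolution the default capacity works out to the 240,000 bytes computed above. The class and method names are illustrative:

```python
class AudioRingBuffer:
    """Circular buffer retaining the most recent audio samples."""
    def __init__(self, seconds=30, rate_hz=8000, bytes_per_sample=1):
        # Default: 30 s * 8000 samples/s * 1 byte/sample = 240,000 bytes
        self.size = seconds * rate_hz * bytes_per_sample
        self.buf = bytearray(self.size)
        self.pos = 0      # next write position
        self.filled = 0   # bytes of valid data stored so far

    def write(self, samples: bytes):
        """Append samples, overwriting the oldest data once full."""
        for b in samples:
            self.buf[self.pos] = b
            self.pos = (self.pos + 1) % self.size
        self.filled = min(self.size, self.filled + len(samples))

    def snapshot(self) -> bytes:
        """Return the buffered audio, oldest sample first."""
        if self.filled < self.size:
            return bytes(self.buf[:self.filled])
        return bytes(self.buf[self.pos:] + self.buf[:self.pos])
```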

[0140] The present system preferably operates using a hands-free, display-free user interface, and preferably employs an interactive voice interface. The interactive voice interface preferably includes a spoken keyword spotting processor within the hearing aids, which may have, e.g., a vocabulary of 8-255 spoken words. The processor preferably employs a micropower microprocessor, e.g., an ARM M0, M3, or M33 core (e.g., nRF5340; see Rajan, B., Bhavana, B., Anusha, K. R., Kusumanjali, G., & Pavithra, G. S. (2020). IoT based Smart and Efficient Hearing Aid using ARM Cortex Microcontroller. 2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE). doi:10.1109/ICSTCEE49637.2020.9277110), optionally with a neural processing accelerator, to implement a convolutional neural network. In general, each hearing aid is symmetric, and therefore each contains a processor. A pair of hearing aids includes a communication channel between the pair, though a single hearing aid may be used. This opens the possibility of shared processing or parallel processing between the two processors, and also the possibility of alternation between the processors in order to balance power consumption. Preferably, the neural network may be trained, i.e., the neural network is not fixed, to allow updating of the keywords.

[0141] For example, the library of keywords may include a set of names or labels, which can be updated depending on the user's address book. In a population with hearing loss, pronunciation may change over time and become poor, and the keyword recognition system is preferably tuned to that possibility and adaptive to changes. Typically, reprogramming the neural network weights is performed on a smartphone or cloud processing system.

[0142] The system preferably also includes a cognitive assessment mode, to determine when the user is lucid or has diminished cognitive capacity. In the former case, direct control over system operation is preferred. In the latter case, the system preferably imposes filters in order to avoid objectively undesirable or unintended actuation of system functions. The system preferably also has an emergency mode which automatically issues an emergency alert as appropriate.

[0143] The emergency alert may be issued to nearby communication partners, as well as through communications over WiFi (internet) and Bluetooth. For example, a paired cellphone may automatically call 911, with an automated message or voice communication to the user or another caregiver or responsible party.

[0144] The system may operate similarly to an Amazon Alexa voice assistant, though with some important differences. An Amazon Alexa has a single “wake word”, and therefore provides only a single concurrent option for local speech processing. While this may be a technological implementation choice, the issue is deeper. According to the design, after awakening, digitized voice signals are communicated remotely, and basically no activity of an Alexa voice assistant is possible without control by a cloud server. In contrast, the present technology provides a limited vocabulary, and permits local processing and action, in most cases not requiring any remote processing of speech. While remote speech recognition processing in the cloud is not precluded, a basic set of intercom functionality is available in its absence.

[0145] The system operates by processing sounds produced by the wearer, listening for keywords. The keyword may be a recognized name of a person with whom audible communications are frequent. Upon hearing the keyword/name, the system may request confirmation, or proceed directly to opening a communication channel with a device associated with that person. In the simplest case, the other person also wears hearing aids, and therefore the process is symmetric. In other cases, the system initiates contact through another modality, such as through a linked cellphone, DECT, voice over WiFi, VOIP, or the like.

[0146] The communication channel may be dropped after a timeout period of no voice communication, or in response to a communication termination command, such as “Bye”. Communication may be reestablished by speaking the name of the person. Similarly, a conference call or group conversation may be established using simple commands, such as “add <name>”.

[0147] When the wireless communication and audible communication are both concurrently available, the system preferably performs echo suppression, for example by subtracting the audible channel from the wirelessly communicated information. In general, the sonic delay of an acoustic transmission will be lower than the packet delay of the wireless communication. However, if the packet delay is longer, then the audible channel may dominate.
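The subtraction described above can be sketched with a fixed-delay alignment step. This is a deliberate simplification: a practical implementation would use an adaptive echo-cancellation filter, since the acoustic path gain and delay vary. The function name and the fixed-gain model are illustrative only.

```python
# Simplified sketch of the echo suppression of paragraph [0147]:
# the acoustic copy of the talker's voice is time-aligned to the
# wirelessly received stream and subtracted. A real system would use
# an adaptive filter; this fixed-delay, fixed-gain version only
# illustrates the alignment-and-subtract idea. Names are illustrative.

def suppress_echo(wireless, acoustic, delay_samples, gain=1.0):
    """Subtract the delayed acoustic channel from the wireless stream."""
    out = list(wireless)
    for i, s in enumerate(acoustic):
        j = i + delay_samples        # acoustic copy offset vs. packet stream
        if 0 <= j < len(out):
            out[j] -= gain * s
    return out
```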

[0148] The system has a number of modes of operation, for example, acoustic communication without intercom, direct hearing aid-to-hearing aid wireless communication, communications mediated by Bluetooth hosts (e.g., cellphone), Voice over WiFi, peer-to-peer conferencing; central node intermediated conferencing, etc.

[0149] For example, when two people are adjacent to each other in the same room, an inferred communication mode may be preferred in which all nearby persons are engaged in a group chat, though with an override option to provide private communications. On the other hand, when distances are greater or physical or visual barriers are present, such as people in different rooms, an individually addressed radio frequency communication mode is preferred.

[0150] When line of sight is present between intended targets, acoustic, ultrasonic, optical, infrared, or microwave communications may be employed.

[0151] When a person wearing the device speaks, the sounds are broadcast over a short-range channel to all nearby receivers. A wearer of a receiver may speak a decline command to shut down the channel. The wearer may also speak a command to open a private channel rather than a public one.

[0152] The opening of a private communication channel may be activated by a targeted wake word, representing the name of the person with whom the communication is to be conducted. When the word is spoken by the wearer, a digital processor monitoring the local microphone of the hearing aid recognizes the word (from a limited recognized vocabulary) and determines a target communication partner.

[0153] The distance to the targeted hearing aid may be determined by received signal strength, proximity to a beacon or transmitter, and/or triangulation. If the distance is short, for example where each hearing aid is within acoustic communication range as determined by the ability of the microphone of one to sense sounds from the speaker of the other, or based on a Bluetooth received signal strength indicator (RSSI), then direct acoustic communication may suffice. If the distance is longer, then no direct acoustic communication is available, but the Bluetooth radio frequency communications are available. If no direct Bluetooth is available, then communications may be possible through pairing with respective cellular phones. Automatic handover between modes is preferred.
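The fallback order described above (acoustic, then direct Bluetooth, then cellphone relay) can be expressed as a small handover policy. The RSSI threshold and all names here are invented for illustration; the disclosure does not specify numeric thresholds.

```python
# Hypothetical handover policy sketch for paragraph [0153]: choose the
# nearest-range available link, falling back from acoustic to direct
# Bluetooth to a cellphone relay. The -50 dBm threshold is an invented
# placeholder, not a value from the disclosure.

ACOUSTIC_RSSI_DBM = -50   # assumed proxy for "within acoustic range"

def select_channel(rssi_dbm, bluetooth_ok, cell_relay_ok):
    if rssi_dbm is not None and rssi_dbm >= ACOUSTIC_RSSI_DBM:
        return "acoustic"          # partner close enough to hear directly
    if bluetooth_ok:
        return "bluetooth-direct"  # RF reachable, beyond acoustic range
    if cell_relay_ok:
        return "cell-relay"        # mediated via paired cellular phones
    return "unreachable"
```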

[0154] The system preferably distinguishes between a wake word pronunciation, and a conversational pronunciation and/or context. Therefore, casual use of the wake word during conversation will not trigger unwanted activation of the system. On the other hand, intentional use of the wake word in proper pronunciation will open communications if the communication partner is not within earshot, or even conference in multiple parties to form a group conversation.

[0155] Cessation of the communication may be triggered by silence for an extended period, or a stop word, such as “goodbye”. Further, to save power, the communication may be intermittent, with the Bluetooth channel reestablished after short breaks without requiring the wake word to trigger commencement of communication.

[0156] The hearing aids may be binaural, with each ear having a separate but coordinated Bluetooth channel.

[0157] According to one aspect, an intercom system made of at least two sets of hearing aids is provided, each set of hearing aids being associated with one user having one or two hearing aids, wherein each hearing aid includes a wireless communication module that facilitates direct communication between persons having hearing loss, e.g., an elderly couple or domestic partners, both with some hearing loss and living in the same home, using the wireless communication modules within the hearing aid devices of the system. The wireless communication module may be a Bluetooth module which facilitates pairing and communication between two hearing aid devices using a Bluetooth or Bluetooth Low Energy (BLE) communication protocol over a Bluetooth network.

[0158] The intercom system may include a plurality of hearing aid devices comprising a wireless communication module such as, for example, Bluetooth or a BLE module, to conduct communication between at least two users of the hearing aids directly through the worn hearing aid device.

[0159] The radio transceiver may also be other than Bluetooth (IEEE-802.15.1), for example Zigbee (IEEE-802.15.4), WiFi (IEEE-802.11), mesh networking for personal area networks (IEEE-802.15.5), or body area networks (IEEE-802.15.6). While compliance with IEEE-802 standards is not required, it facilitates interoperability. The system may also be multimodal, for example including WiFi, Bluetooth and Zigbee. ldapwiki.com/wiki/IEEE%20802; www.rfwireless-world.com/Articles/IEEE_802.html; en.wikipedia.org/wiki/Bluetooth; en.wikipedia.org/wiki/Wi-Fi; en.wikipedia.org/wiki/Zigbee; en.wikipedia.org/wiki/IEEE_802.15

[0160] In some cases, where line of sight between participants is available, optical communications may be employed, for example IrDA (en.wikipedia.org/wiki/Infrared_Data_Association), etc.

[0161] Proximity of the devices may be determined based on received signal strength, triangulation or trilateration (or use of more or fewer than three signals with associated locational ambiguity or overspecification).

[0162] The typical system listens for incoming communications on a Bluetooth channel or another wakeup signaling channel, e.g., 15-150 kHz; see, e.g., the AS3933 LF receiver IC. This permits a standard Bluetooth transmitter, such as the Nordic Semiconductor nRF52832, to wake a receiver without requiring the Bluetooth receiver to be on before reception. An nRF5340 or nRF52833 SoC may also be employed. Preferably, BLE direction finding and/or Bluetooth Mesh protocols are supported.

[0163] The architecture therefore provides that a local sound emission from a person triggers a wakeup of a microphone signal processing circuit, and associated downstream processing. In some cases, the microphone signal processing circuit and keyword spotting processor are distinct from the microcontroller managing the radio transceiver, and may be a low power microcontroller such as a low voltage, low clock rate ARM Cortex M0, or an application-specific integrated circuit. The keyword spotter may have two stages: a low threshold, e.g., 75% probability of any valid keyword being present, to trigger the next phase of analysis. In the next phase, the main microcontroller wakes up, and the radio is initiated. The main microcontroller then completes the analysis, to determine presence of a specific keyword with high probability, e.g., 92-98% probability discrimination.
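The two-stage detection above can be sketched as follows. The 75% and 92% thresholds come from the paragraph; the scoring functions are stand-ins for the front-end detector and the main microcontroller's neural network, and all names are illustrative.

```python
# Two-stage keyword spotting sketch per paragraph [0163]: a micropower
# front end fires at a low threshold (any keyword, ~75%); only then does
# the main MCU wake and confirm a specific keyword at high confidence
# (92-98%). The probability inputs stand in for real classifier outputs.

STAGE1_THRESHOLD = 0.75   # low bar: "some valid keyword is probably present"
STAGE2_THRESHOLD = 0.92   # high bar: specific keyword, high confidence

def spot_keyword(stage1_score, stage2_scores):
    """stage2_scores: {keyword: probability}, computed only if stage 1 fires."""
    if stage1_score < STAGE1_THRESHOLD:
        return None                    # main MCU and radio remain asleep
    best = max(stage2_scores, key=stage2_scores.get)
    if stage2_scores[best] >= STAGE2_THRESHOLD:
        return best                    # confirmed keyword drives the command
    return None                        # false alarm; system goes back to sleep
```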

[0164] With the main microcontroller active, the system may provide ambiguity resolution for keywords or simple command structures/grammars.

[0165] The wakeup signal for the main microcontroller optionally sends out a beacon for hearing aids of other persons, to awaken them as well, on the presumption that a speech communication is imminent, or that processing or packet forwarding may be required imminently. The beacon is typically a short-range communication, since the Bluetooth radio is not yet awakened. After the main microcontroller is awakened, any beacon can likely wait until the processing routine is completed, and need not be proactive before the need for communication is determined. This is because the main microcontroller has significant processing capability. However, in some cases, preemptive beacons are desirable even before a processing result is available. By preemptively awakening the microcontroller early, latencies may be reduced, and coordinated action between hearing aids facilitated. For example, in natural conversation, a wake word or keyword may be part of a human conversation, and this preamble to the conversation needs to be communicated to the communication partner. If the system delays too long in waking up and transmitting the initial words, there will be an unnatural delay in the conversation thereafter. As a result, the system should be queued up to transmit the initial words (as required by system logic) immediately, and in some cases, the stream may be broadcast before a determination is made regarding the requirement to reproduce the speech.

[0166] The receiving system may therefore receive a beacon signal, which may be a broadcast or addressed communication, and then awaken, including full radio functionality. An audio stream may then be received, which is buffered, pending a confirmation command to replay the stream. Where a mesh network is required to pass the stream to the desired endpoint(s), the beacon and packets of the stream may be distributed as the further reaches of the mesh network come online.

[0167] Because of latencies and power consumption, a preferred architecture employs a set of heterogeneous devices, in which the hearing aids and other ultra-low power devices are spared unnecessary activation in favor of other devices, which may include intelligent switches, routers, hotspots and relays, infrastructure, and the like. Such other devices are likely to have “always-on” radio transceivers and microcontrollers, and therefore permit aggressive power saving protocols in the ultra-low power devices.

[0168] It is noted that in some cases, the “awakening” comprises an increase in clock rate and corresponding system processing capacity. In that case, the wakening latency may be short. In other cases, portions of the integrated circuit may be powered down, and require microseconds or milliseconds to power up, stabilize and initialize (as required). In other cases, power supply circuits may be started, which may require tens of milliseconds or more to stabilize. In the case of a Bluetooth radio, it must listen before broadcast, and take other steps defined by protocol before transmitting data.

[0169] Bluetooth radio devices typically support multiple low-power modes, some of which may be proprietary to the device itself. The Windows Bluetooth driver stack requires that a Bluetooth radio support the following three device power states: Active (D0), Sleep (D2), and Off (D3). Device power management for a Bluetooth radio is expected to operate in a consistent way across all system power states. The Bluetooth radio does not enter a special power management mode when the system enters modern standby. Instead, the Bluetooth radio is transitioned in and out of the Sleep (D2) state based on idle time-outs that are managed by BthPort. To support wake from modern standby on Bluetooth-attached HID input devices, the radio stays in the Sleep (D2) state and is armed for wake. Only paired Bluetooth HID devices are allowed to wake the system during modern standby. The Bluetooth radio is expected to have a very low power consumption (less than one milliwatt) in the Sleep (D2) state if no devices are connected through RF links. The power consumption can be expected to vary based on the number of associated devices, the types of those devices, and their activity patterns. See docs.microsoft.com/en-us/windows-hardware/design/device-experiences/bluetooth-power-management-for-modern-standby-platforms

[0170] The Bluetooth radio must also support the capability to turn off the radio through the radio management user interface. After the Bluetooth radio is turned off through this user interface, the radio is transitioned to the Off (D3) power state, in which it is expected to consume nearly zero watts.

[0171] In the D0 active state, the Bluetooth radio is actively communicating with an associated device on behalf of an application on the operating system.

[0172] In the Sleep (mostly idle with a low-rate duty cycle) D2 state, the Bluetooth radio is in a low-power state. The system has been paired with a remote Bluetooth device, but there is no connection between the two; that is, the device has been disconnected. The Bluetooth controller must be able to generate a wake signal (to the SoC if the radio is not integrated) when new data arrives from the paired device. Alternatively, the Bluetooth radio has no associations, or the Bluetooth radio has an active connection that is idle (no data being sent/received) with the link in sniff mode. In this state, the average power consumption is < 4 milliwatts, and the exit latency to active is < 100 milliseconds.

[0173] In the Off state D3, the Bluetooth radio is completely off (zero watts) or in a low-power state in which no radio state is preserved. The Bluetooth radio is not capable of generating a wake signal to the SoC in this state. The Bluetooth radio is also not able to emit or receive any radio signals; all RF components are powered off. Power consumption is zero or near zero, but the exit latency to active is < 2 seconds. Note that if the sending radio is “Off” and the receiving radio is also “Off”, the end-to-end turn-on latency will be < 4 seconds, plus the delay in communicating the turn-on command, which cannot be communicated directly through the Bluetooth radio.
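The end-to-end latency bound above follows from summing the per-radio exit latencies. The sketch below encodes only the upper bounds stated in paragraphs [0172] and [0173]; the function name and the table representation are illustrative.

```python
# Worst-case end-to-end turn-on latency per paragraphs [0172]-[0173]:
# D2 -> D0 exit is < 100 ms, D3 -> D0 exit is < 2 s, so two radios both
# in D3 give a bound of < 4 s, plus any out-of-band command delay.
# Figures are the document's upper bounds; names are illustrative.

EXIT_LATENCY_S = {"D0": 0.0, "D2": 0.1, "D3": 2.0}   # upper bounds, seconds

def worst_case_link_latency(sender_state, receiver_state, oob_delay_s=0.0):
    """Bound on time until both radios are active and able to exchange data."""
    return (EXIT_LATENCY_S[sender_state]
            + EXIT_LATENCY_S[receiver_state]
            + oob_delay_s)
```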

[0174] When the radio does not have any associated devices, the device is transitioned to D2 and persists in that state until the pairing process begins. When the radio is associated with one or more devices, the Bluetooth driver uses an idle time-out to decide when to transition the Bluetooth radio from D0 to D2. This algorithm may use the pattern of Bluetooth usage by the operating system and applications to determine when to transition the radio to the D2 state. For example, the radio transitions to D2 several seconds after the last key press on a Bluetooth keyboard if there is no other activity on the Bluetooth radio.

[0175] The state of the Bluetooth radio transmitter is tied directly to the device power state. The radio transmitter is expected to be on when the radio is in the Active (D0) or Sleep (D2) power state. The radio transmitter must be turned off when the radio transitions to the Off (D3) state.

[0176] When the Bluetooth radio is turned on, the radio is transitioned to the Active (D0) state, the radio is re-initialized, and then child protocol drivers are re-enumerated. When the radio transitions to Active (D0), any required GPIO lines must be toggled as part of the normal D0 sequence for the Bluetooth radio.

[0177] As discussed above, the invention provides various mechanisms outside the Bluetooth communication band for waking communication partners, either in a broadcast or addressed mode. In the addressed mode, an out-of-band communication may communicate an 8-32 bit address and header to other nodes of an ad hoc network. The recipient of such a communication extracts a mode indicator, address information, and optionally forwarding information, which is processed without turning on the main microcontroller CPU processor. For example, in a 32-bit, 15 kHz transmission, 8+2 bits may be address plus error coding, for each of two addressed targets, 8+2 bits may be protocol/action plus error coding, and the remaining two bits may be part of an emergency response protocol mechanism. The message consumes about 2 milliseconds to transmit. The AS3933 permits a 32-bit wakeup pattern.
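The 32-bit layout above (two 8+2 bit addressed targets, an 8+2 bit protocol/action field, and 2 emergency bits) can be sketched as a pack/unpack pair. The disclosure does not specify the error coding, so a toy 2-bit check value is used purely as a placeholder; all function names are illustrative.

```python
# Illustrative packing of the 32-bit wakeup word of paragraph [0177]:
# two 8-bit target addresses and an 8-bit protocol/action code, each
# followed by 2 check bits, plus 2 emergency bits (3*10 + 2 = 32 bits).
# check2() is an invented stand-in for the unspecified error coding.

def check2(byte):
    """Toy 2-bit check value; NOT from the disclosure."""
    return (byte + (byte >> 4)) & 0b11

def pack_wakeup(addr_a, addr_b, action, emergency=0):
    word = 0
    for field in (addr_a, addr_b, action):   # 10 bits each: byte + check
        word = (word << 10) | (field << 2) | check2(field)
    return (word << 2) | (emergency & 0b11)  # low 2 bits: emergency flags

def unpack_wakeup(word):
    emergency = word & 0b11
    fields = []
    w = word >> 2
    for _ in range(3):                       # unpack in reverse pack order
        chk, byte = w & 0b11, (w >> 2) & 0xFF
        assert chk == check2(byte), "wakeup word failed its check bits"
        fields.append(byte)
        w >>= 10
    action, addr_b, addr_a = fields
    return addr_a, addr_b, action, emergency
```

This would let a micropower front end validate the check bits and addressing without waking the main CPU, as the paragraph describes.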

[0178] Other out-of-band signals may include an ultrasonic beacon detectable by nearby microphones, an optical emission, e.g., an IrDA wakeup signal, or the like. Further, the system may remain in the D2 sleep state (and not move into the D3 off state), and periodically listen for other nodes on the network. For example, all of the nodes may turn on their Bluetooth radios, and listen to the network every ten seconds for synchronization messages, including which nodes are present, and either explicit (e.g., physical location reference) or implicit (e.g., RSSI, angle of arrival, time of arrival) location information. A node initializes by listening continuously for over ten seconds, to capture two synchronization periods, and thereafter calibrates its own clock to the actual period used by the other nodes in the network. The node then wakes up shortly before the synchronization messages are exchanged, and sequentially enumerates the nodes of the network within range. Various known mobile ad hoc network protocols may be used to arbitrate the message passing.
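The clock-calibration step above can be sketched as follows: a joining node measures the actual beacon period from at least two observed beacons, then schedules its next wakeup slightly ahead of the next expected beacon. The 50 ms guard interval and all names are invented for illustration.

```python
# Sketch of the periodic-sync listening of paragraph [0178]: a joining
# node listens for over ten seconds to catch two sync beacons, measures
# the actual period, and wakes up shortly before each expected beacon.
# The guard interval is an invented placeholder.

GUARD_S = 0.05   # wake this far ahead of the expected beacon (assumed)

def calibrate(beacon_times):
    """beacon_times: timestamps of at least two observed sync beacons."""
    assert len(beacon_times) >= 2
    period = (beacon_times[-1] - beacon_times[0]) / (len(beacon_times) - 1)
    next_wake = beacon_times[-1] + period - GUARD_S
    return period, next_wake
```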

[0179] After the messages are passed and if no messages require further action, the radios re-enter the D2 sleep state. If a message requires action by one of the nodes, that node remains active, and either processes the received command, which may, for example, indicate that the node is an intended recipient of a pending communication, or the node may be requested to forward a packet to a radio which is not in direct communication with a requestor.

[0180] While this arrangement imposes potentially significant latencies, the base usage may be about 1% of continuous active D0 power consumption. Further, this potentially avoids the need for out-of-band signaling.

[0181] In an embodiment, in direct connection of hearing aid devices over the Bluetooth network, the BLE or Bluetooth enabled hearing aid devices are paired together to form a mesh network enabling direct communication between the users of the connected hearing aid devices over the mesh network without the use of a network hub. In an embodiment, the mesh network has a partially connected or fully connected mesh topology.

[0182] In an embodiment, the wireless communication module, or specifically a Bluetooth module, configured within each of the hearing aid devices pairs with a mediator network hub device in a star network, where each of the hearing aid devices behaves as a peripheral node of the network and pairs with the mediator network hub device working as a central node. The central node, or mediator network hub device, creates a Bluetooth or BLE network that allows the Bluetooth enabled hearing aid devices to pair, so that the users of the hearing aid devices may communicate with each other through the mediator network hub device in the star network.

[0183] In an embodiment, each of the plurality of hearing aid devices may include its own separate Bluetooth enabled mediator network hub device paired with that hearing aid device. The mediator network hub device paired with the first hearing aid device connects with at least one other mediator network hub device paired with at least one second hearing aid device to facilitate communication between the first and the at least one second hearing aid device. In an embodiment, the mediator network hub device paired with the first hearing aid device connects with the at least one other mediator network hub device over a Bluetooth or a BLE network. The mediator network hub devices connect the hearing aid devices with the BLE network to facilitate wireless communication between at least two sets of hearing aid devices of the system.

[0184] In an embodiment, the intercommunication system includes a first hearing aid device and at least one second hearing aid device. The first hearing aid device comprises a first bone conducting microphone with a first data conversion module, a first Bluetooth or BLE module, a first amplifier and a first speaker. The at least one second hearing aid device comprises a second Bluetooth or BLE module, a second bone conducting microphone with a second data conversion module, a second amplifier and a second speaker. In an embodiment, the first data conversion module and the second data conversion module are each a combination of an Analog to Digital Converter (ADC) and a Digital to Analog Converter (DAC).

[0185] According to one aspect, in a method of operation of the present intercommunication system, the first bone conducting microphone with the first data conversion module captures the sound signal transmitted through the skull bones of the first user and digitizes said sound signal so that it can be transmitted over a BLE or Bluetooth network without distortion, where the bone conducting microphone eliminates background environmental noise and ambient sound. The first Bluetooth or BLE module configured within the first hearing aid device transmits said digitized sound signals to at least one second hearing aid device coupled with the first hearing aid device over a BLE or Bluetooth network. When the digitized sound data from the first hearing aid device is received by the second hearing aid device using the second Bluetooth or BLE module, the second data conversion module configured within the second hearing aid device re-converts said digitized sound data into the analog sound signal, which is then amplified by the second amplifier and emitted by the second speaker of the second hearing aid device into the ears of the second user.

[0186] In an embodiment, a hearing aid is provided, comprising: an input port configured to receive a signal representing sounds; an output port configured to output an electrical signal representing acoustic waves; a wireless transceiver, configured to bidirectionally communicate audio signals; and a digital processor configured to: receive an audio signal from the wireless transceiver; define the output electrical signal based on the signal from the input port, the communicated audio signals, and an audio equalization profile; and implement a speech controlled user interface, configured to select a counterparty wireless transceiver for communication through the wireless transceiver from a plurality of counterparty wireless transceivers based on a spoken command.

[0187] The hearing aid may further comprise a housing, wherein the wireless transceiver, digital processor and a self-contained power source are contained within the housing. The hearing aid may further comprise a microphone configured to produce the signal representing sounds and a speaker configured to generate the acoustic waves, the microphone and the speaker being contained within the housing, wherein the housing is an intraaural housing, and the microphone comprises a bone conduction microphone.

[0188] The hearing aid may further comprise at least one sensor selected from the group consisting of an accelerometer, a gyroscope, an absolute position sensor, and a relative position sensor, wherein the digital processor is further configured to select the counterparty dependent on a signal from the sensor.

[0189] The wireless transceiver may comprise at least one of a Bluetooth transceiver and a Bluetooth Low Energy transceiver configured to implement a mesh network.

[0190] The digital processor may be further configured to perform keyword spotting from among a plurality of predetermined keywords.

[0191] The digital processor may be further configured to perform keyword spotting using a convolutional neural network.

[0192] The spoken command may comprise a human name.

[0193] The hearing aid may further comprise at least one mediator network hub device communicatively coupled with the wireless transceiver, configured to facilitate communications between the wireless transceiver and a respective wireless transceiver of another hearing aid. The wireless transceiver may be configured to communicate through a Bluetooth mesh network.

[0194] It is another embodiment to provide a Bluetooth enabled intercom system, comprising: at least one mediator network hub device communicatively coupled with a Bluetooth or BLE network interface; and a plurality of Bluetooth enabled hearing aid devices, each comprising a Bluetooth module, a microphone with a data conversion module, an amplifier and a speaker.

[0195] The microphone may comprise a bone conduction microphone.

[0196] At least two Bluetooth enabled hearing aid devices may be communicatively coupled through the Bluetooth or BLE network. The at least two Bluetooth enabled hearing aid devices may be communicatively coupled through the at least one mediator network hub device.

[0197] In an embodiment, the first hearing aid device and the second hearing aid device are connected over a BLE network through a mediator network hub device that pairs with the first hearing aid device and the second hearing aid device. The first hearing aid device and a second hearing aid device are each associated with a separate mediator network hub device paired with the respective hearing aid device using a BLE network, where the mediator network hub device connects with other mediator network hub devices associated with at least one other hearing aid device over a Bluetooth, BLE network, 4G/LTE network, 5G network, 6G network, the internet, or Wi-Fi (n/ac/ax) network to facilitate communication between the hearing aid devices.

[0198] It is an embodiment to provide a Bluetooth hearing aid, comprising: a housing; a microphone configured to produce a microphone output signal representing sounds transduced by the microphone; an earphone speaker configured to convert an output electrical signal into acoustic waves; a Bluetooth wireless transceiver, configured to bidirectionally communicate digital information packets; and a digital processor, wherein the microphone, Bluetooth wireless transceiver, and digital processor are each contained within the housing, the digital processor being configured to: receive an audio signal from the Bluetooth wireless transceiver for conversion to the output electrical signal; control an amplitude of the output electrical signal in a frequency-specific manner for conversion to the output electrical signal; implement a keyword spotting process for recognition of at least eight different keywords comprising identification of a plurality of counterparty wireless transceivers; selectively dependent on a recognized keyword initiate a communication through the Bluetooth wireless transceiver with a particular counterparty wireless transceiver or group of counterparty wireless transceivers, and terminate the communication with a terminate spoken command.

[0199] It is also an object to provide a hearing aid, comprising: a microphone configured to produce a microphone output signal representing sounds transduced by the microphone; an earphone speaker configured to convert an output electrical signal into acoustic waves; a wakeup processor, configured to generate a wakeup signal dependent on a sound transduced by the microphone, and a radio frequency signal; and a Bluetooth wireless transceiver module, responsive to the wakeup signal; the Bluetooth wireless transceiver module having an automated processor configured to: bidirectionally stream audio signals; equalize the output electrical signal dependent on a hearing profile; spot a plurality of different keywords; and selectively control a pairing selected from a plurality of possible pairings of the Bluetooth wireless transceiver module dependent on a spotted keyword of the plurality of different keywords.

[0200] An objective of the present disclosure is to provide a hearing aid intercommunication system that may facilitate partially or completely hearing-impaired persons to communicate with each other, even when travelling or when away from each other, using the Bluetooth network enabled hearing aid devices of the present system, where the Bluetooth or BLE protocol facilitates an uninterrupted and stable connection and communication between the hearing aid devices.

[0201] Another objective of the present disclosure is to provide a bone conduction microphone within each of the hearing aid devices of the system to capture the sound of the user's voice as it passes through the skull bones of the user, preventing the introduction of noise or ambient sound into the communication.

[0202] Another object is to provide a Bluetooth or BLE enabled intercommunication system that may be used by a group of bikers riding together to communicate with each other while riding, even with helmets worn, where the bikers may be persons with or without a hearing disability. The communication supports groups, individual peer-to-peer communications, and subgroups.

[0203] Various objects, features, aspects and advantages of the inventive subject matter will become more apparent from the following detailed description of preferred embodiments, along with the accompanying drawing figures in which like numerals represent like components.

[0204] It is therefore an object to provide a hearing aid, comprising a housing; a microphone configured to produce a microphone output signal representing sounds transduced by the microphone; a speaker configured to convert an output electrical signal into acoustic waves; a wireless transceiver, configured to bidirectionally communicate audio signals; and a digital processor configured to: receive an audio signal from the wireless transceiver; define the output electrical signal based on the microphone output signal, the received audio signal, and an audio equalization profile; and implement a speech controlled user interface, configured to: select a counterparty wireless transceiver for communication through the wireless transceiver from a plurality of counterparty wireless transceivers based on a selection spoken command, and terminate the communication with a terminate spoken command, wherein the microphone, speaker, wireless transceiver, and digital processor are each within the housing.
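The processor's definition of the output electrical signal from the microphone output, the received audio, and the equalization profile can be sketched, purely for illustration, as a per-band mix-and-gain computation; the band split, mixing weight, and gain values below are assumptions, not part of the claimed subject matter.

```python
# Illustrative sketch: mix the microphone signal with the received audio
# stream, then apply a frequency-specific gain profile (e.g. derived from
# the user's audiogram). Band structure and weights are invented here.

def define_output(mic_bands, remote_bands, profile, mic_weight=0.5):
    """Mix two band-split signals and apply a per-band equalization gain.

    mic_bands, remote_bands: per-band amplitudes (e.g. low/mid/high).
    profile: per-band gains from the hearing (equalization) profile.
    """
    assert len(mic_bands) == len(remote_bands) == len(profile)
    mixed = [mic_weight * m + (1 - mic_weight) * r
             for m, r in zip(mic_bands, remote_bands)]
    return [gain * band for gain, band in zip(profile, mixed)]
```

A unity profile leaves the mix unchanged, while a hearing-loss profile would boost the bands where the user's sensitivity is reduced.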

[0205] The housing may be an intraaural housing.

[0206] The microphone may comprise a spatial array of microphones.

[0207] The microphone may comprise a bone conduction microphone.

[0208] The speaker may comprise an earphone.

[0209] The wireless transceiver may comprise a Bluetooth or BLE transceiver.

[0210] The wireless transceiver may be configured to implement a mesh network.

[0211] The digital processor may be further configured to perform keyword spotting.

[0212] The keyword spotting may recognize between 5 and 255 different keywords.

[0213] The digital processor may be further configured to perform keyword spotting using a convolutional neural network.
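A full convolutional neural network is beyond the scope of a short sketch; in the non-limiting example below, a single correlation template per keyword stands in for the trained model, purely to illustrate the slide-score-threshold flow of keyword spotting. The templates, feature values, and threshold are invented.

```python
# Hedged sketch of keyword spotting by sliding-window template matching.
# The specification contemplates a convolutional neural network; here a
# normalized cross-correlation against per-keyword templates stands in
# for the trained model, only to illustrate the spotting step.

def spot_keyword(features, templates, threshold=0.9):
    """Slide each keyword template over the feature stream; return the
    best-scoring keyword whose similarity exceeds threshold, else None."""
    best = (None, threshold)
    for word, tpl in templates.items():
        n = len(tpl)
        norm_t = sum(x * x for x in tpl) ** 0.5
        for i in range(len(features) - n + 1):
            win = features[i:i + n]
            norm_w = sum(x * x for x in win) ** 0.5 or 1.0
            score = sum(a * b for a, b in zip(tpl, win)) / (norm_t * norm_w)
            if score > best[1]:
                best = (word, score)
    return best[0]
```

In a deployed device the per-window score would instead come from the network's output layer, with one class per keyword plus a background class.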

[0214] The hearing aid may further comprise a trigger circuit responsive to the microphone, configured to turn on the digital processor.

[0215] The selection spoken command may comprise a human name.

[0216] It is another object to provide a Bluetooth enabled intercom system, comprising: at least one mediator network hub device communicatively coupled with a Bluetooth or BLE network; and a plurality of Bluetooth enabled hearing aid devices communicably coupled with the at least one mediator network hub device to facilitate intercommunication between the plurality of Bluetooth enabled hearing aid devices over the Bluetooth network.

[0217] Each of the plurality of hearing aid devices may further comprise a data conversion module, a bone conduction microphone, an amplifier and a speaker.

[0218] The data conversion module may comprise an Analog to Digital Converter (ADC) and a Digital to Analog Converter (DAC).

[0219] The speaker may be a bone conduction speaker.

[0220] The plurality of Bluetooth enabled hearing aid devices may be communicably coupled with each other through a Bluetooth module enabled in the Bluetooth network.

[0221] The at least one mediator network hub device may be selected from the group consisting of a smartphone, a laptop, a wearable smart device, a dedicated network hub device, a personal smart assistant or any other portable network device that may pair at least one of the plurality of hearing aid devices with the network.

[0222] The at least one mediator network hub device may be selected from the group consisting of a Wi-Fi router or Wi-Fi card, a switch, an internet dongle, any third party domestic smart assistant or any portable wireless network device.

[0223] It is another object to provide a Bluetooth enabled intercom system, comprising: a Bluetooth or BLE network; at least one mediator network hub device communicatively coupled with the Bluetooth or BLE network; a first Bluetooth enabled hearing aid device of a first user comprising a first Bluetooth module, a first bone conduction microphone with a first data conversion module, a first amplifier and a first speaker; and at least one second Bluetooth enabled hearing aid device of at least one second user comprising a second Bluetooth module, a second bone conduction microphone with a second data conversion module, a second amplifier and a second speaker.

[0224] The first Bluetooth enabled hearing aid device may be communicatively coupled with the at least one second Bluetooth enabled hearing aid device over the Bluetooth or BLE network.

[0225] The first Bluetooth enabled hearing aid device and at least one second Bluetooth enabled hearing aid device may be communicatively coupled through the at least one mediator network hub device.

[0226] A sound signal from the first user may be captured by the first bone conduction microphone through a skull bone of the first user and converted into an electrical signal.

[0227] The converted electrical signal may be digitized by the first data conversion module and transmitted to the at least one second Bluetooth enabled hearing aid device enabled within the Bluetooth network.

[0228] The at least one second Bluetooth enabled hearing aid device may be coupled with the network using the second Bluetooth module, receive and re-convert the digitized signal from the first hearing aid device into an analog audio signal using the second data conversion module, amplify the re-converted analog audio signal using the second amplifier, and emit the amplified analog audio signal into an ear of the at least one second user using the second speaker.

[0229] The first data conversion module and the second data conversion module may be an Analog to Digital Converter (ADC) and a Digital to Analog Converter (DAC) respectively.

[0230] The second speaker may be a bone conduction speaker.

[0231] It is a still further object to provide a method of intercommunication, comprising: receiving and converting into an electrical signal, by a first bone conduction microphone of a first Bluetooth enabled hearing aid device, a sound signal to be transmitted from a first user to at least one second Bluetooth enabled hearing aid device of at least one second user; converting, using a first data conversion module, the electrical signal into digital data capable of being transmitted over a Bluetooth network; transmitting, by a first Bluetooth module, the digital data to the at least one second Bluetooth enabled hearing aid device communicatively coupled within the Bluetooth network; receiving, by a second Bluetooth module of the at least one second Bluetooth enabled hearing aid device, the digital data transmitted by the first Bluetooth enabled hearing aid device; re-converting, by a second data conversion module of the at least one second Bluetooth enabled hearing aid device, the received digital data into an analog sound signal; amplifying, by a second amplifier of the at least one second Bluetooth enabled hearing aid device, the re-converted analog sound signal; and emitting, by a second speaker of the at least one second Bluetooth enabled hearing aid device, the amplified analog sound signal into an ear of the at least one second user.

[0232] The first and the at least one second Bluetooth enabled hearing aid device may be communicatively coupled over the Bluetooth network directly using the first Bluetooth module and the second Bluetooth module in a mesh network.
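The method steps above (capture, analog-to-digital conversion, transmission, reception, digital-to-analog re-conversion, amplification, emission) can be sketched end-to-end, with the Bluetooth link replaced by a plain function call; the bit depth, packet format, and gain are assumed values for the example.

```python
# Non-limiting sketch of the claimed signal path. The "transmit" step
# simply packs 16-bit PCM samples into bytes as a stand-in for a
# Bluetooth audio packet; a real link would frame and encode them.
import struct

def adc(analog, full_scale=1.0):
    """Quantize analog samples (floats in [-1, 1]) to 16-bit integers."""
    return [max(-32768, min(32767, int(s / full_scale * 32767))) for s in analog]

def transmit(pcm):
    """Pack samples into bytes, standing in for a Bluetooth packet."""
    return struct.pack("<%dh" % len(pcm), *pcm)

def receive_dac(packet):
    """Unpack the packet and re-convert to analog-domain floats."""
    pcm = struct.unpack("<%dh" % (len(packet) // 2), packet)
    return [s / 32767 for s in pcm]

def amplify(analog, gain=2.0):
    return [gain * s for s in analog]

def intercom_path(analog, gain=2.0):
    """First device: ADC + transmit; second device: receive, DAC, amplify."""
    return amplify(receive_dac(transmit(adc(analog))), gain)
```

The round trip reproduces the input to within one quantization step, illustrating why the disclosure digitizes before transmission to preserve signal integrity.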

[0233] The microphone of the first Bluetooth enabled hearing aid device may comprise a bone conduction microphone, and the first Bluetooth enabled hearing aid device is contained within an intraaural housing with a self-contained power supply.

[0235] The first and the at least one second Bluetooth enabled hearing aid device may be communicatively coupled over the Bluetooth network through at least one mediator network hub device in a star network.

[0236] The at least one mediator network hub device may pair with the first and the at least one second Bluetooth enabled hearing aid device to communicatively couple them over the Bluetooth network.

[0237] The at least one mediator network hub device may be selected from the group consisting of a smartphone, a laptop, wearable smart device, a dedicated network hub device, a personal smart assistant, a Wi-Fi router or Wi-Fi card, a switch, an internet dongle, any third party domestic smart assistant and any other portable wireless network device.

BRIEF DESCRIPTION OF THE DRAWINGS

[0238] In the figures, similar components and/or features may have the same reference label. Further, various components of the same type may be distinguished by following the reference label with a second label that distinguishes among the similar components. If only the first reference label is used in the specification, the description is applicable to any one of the similar components having the same first reference label irrespective of the second reference label.

[0239] FIG. 1 illustrates one exemplary embodiment of the present intercommunication system with smartphones serving as a mediator network hub to connect the hearing aid devices with the network.

[0240] FIG. 2 illustrates another exemplary embodiment of the present intercommunication system with a dedicated network hub device that connects the hearing aid devices over a Bluetooth/BLE network.

[0241] FIG. 3 illustrates another exemplary embodiment of the present intercommunication system where the hearing aid devices are directly connected with each other over a Bluetooth/BLE network, eliminating the need for intermediary hub devices.

[0242] FIG. 4 is a block diagram illustrating data transmission between the components of the hearing aid devices and the network to facilitate intercommunication over a Bluetooth or BLE network.

[0243] FIG. 5 is a flow diagram illustrating a method of hearing aid facilitation using the intercommunication system in accordance with various embodiments of the present disclosure.

[0244] FIG. 6 is a schematic drawing illustrating an embodiment of the invention providing a Bluetooth addressable intercom.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0245] In the following description, numerous specific details are set forth in order to provide a thorough understanding of embodiments of the present invention. It will be apparent to one skilled in the art that embodiments of the present invention may be practiced without some of these specific details.

[0246] Embodiments of the present invention include various steps, which will be described below. The steps may be performed by hardware components or may be embodied in machine-executable instructions, which may be used to cause a general-purpose or special-purpose processor programmed with the instructions to perform the steps. Alternatively, steps may be performed by a combination of hardware, software, and/or firmware.

[0247] If the specification states a component or feature “may”, “can”, “could”, or “might” be included or have a characteristic, that particular component or feature is not required to be included or have the characteristic.

[0248] As used in the description herein and throughout the claims that follow, the meaning of “a,” “an,” and “the” includes plural reference unless the context clearly dictates otherwise. Also, as used in the description herein, the meaning of “in” includes “in” and “on” unless the context clearly dictates otherwise.

[0249] Exemplary embodiments will now be described more fully hereinafter with reference to the accompanying drawings, in which exemplary embodiments are shown. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. These embodiments are provided so that this invention will be thorough and complete and will fully convey the scope of the invention to those of ordinary skill in the art. Moreover, all statements herein reciting embodiments of the invention, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future (i.e., any elements developed that perform the same function, regardless of structure).

[0250] While embodiments of the present invention have been illustrated and described, it will be clear that the invention is not limited to these embodiments only. Numerous modifications, changes, variations, substitutions, and equivalents will be apparent to those skilled in the art, without departing from the spirit and scope of the invention, as described in the claims.

[0251] According to one aspect, the present invention generally relates to a Bluetooth enabled intercom system; more particularly, the present disclosure provides a system of wireless communication enabled hearing aid devices with an inbuilt communication module to facilitate direct wireless communication between two or more users through the devices over a communication network. The communications may be addressed, and targeted based on the address. The address may correspond to a human name. The addressing may be extracted from a voice input, by a process of keyword spotting. The communication module may be a radio frequency communication, infrared or optical communication, ultrasonic communication, or the like. The communication channel is established by spoken commands, preferably continuous keyword spotting from a microphone input, to control establishment of communication channels, and dropping of those channels at the conclusion of a conversation or on timeout. The devices are preferably capable of performing hearing aid functions, though the devices may be wireless headphones.
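The spoken-command channel control described above can be illustrated by a non-limiting sketch: a spotted keyword naming a counterparty opens a channel, while a hypothetical "hang up" keyword or an inactivity timeout closes it. The directory entries, addresses, and timing values are invented for the example.

```python
# Toy controller mapping spotted keywords to channel establishment and
# teardown. In a real device the directory would be populated from paired
# counterparty transceivers; names and timeout here are illustrative.

class IntercomControl:
    def __init__(self, directory, timeout=30.0):
        self.directory = directory      # spoken name -> counterparty address
        self.timeout = timeout
        self.channel = None             # currently connected address, if any
        self.last_activity = 0.0

    def on_keyword(self, keyword, now):
        """React to a spotted keyword: a name connects, 'hang up' disconnects."""
        if keyword in self.directory:
            self.channel = self.directory[keyword]
            self.last_activity = now
        elif keyword == "hang up":
            self.channel = None
        return self.channel

    def tick(self, now):
        """Drop the channel after a period of inactivity (timeout)."""
        if self.channel and now - self.last_activity > self.timeout:
            self.channel = None
        return self.channel
```

This mirrors the establish-by-name and drop-on-conclusion-or-timeout behavior the paragraph describes, without modeling the underlying Bluetooth pairing.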

[0252] FIG. 1 depicts one exemplary embodiment of a Bluetooth enabled intercom system 100 that is provided to facilitate domestic partners with hearing disabilities, living in the same home and sharing living quarters, in communicating with each other directly through their worn hearing aid devices 104, wirelessly, by connecting with each other directly or through the mediator network hub device in various topologies, over a Bluetooth enabled network.

[0253] In an aspect, the Bluetooth enabled intercom system 100 is made of a plurality of Bluetooth enabled hearing aid devices 104-1, 104-2, ..., 104-N (which are collectively referred to as hearing aid device 104, hereinafter) wearable by multiple partially or completely hearing-impaired users, or by bikers, 102-1, 102-2, ..., 102-N (which are collectively referred to as users 102, hereinafter) riding in a group, to directly communicate with each other through the hearing aid devices 104 connected with each other through a BLE network. In an aspect, each of the plurality of hearing aid devices 104 is communicatively coupled with the other hearing aid devices 104 over a network 108 to facilitate transfer of data and communication between at least two of the plurality of hearing aid devices 104 of the present system 100.

[0254] In an aspect, the Bluetooth enabled intercom system 100 of the present disclosure may further include a plurality of user devices 106-1, 106-2, ..., 106-N (which are collectively referred to as user device 106, hereinafter) which are provided to work as a wireless intermediary hub, to be paired with at least one hearing aid device 104-1 using the Bluetooth or BLE network to connect it with at least one other hearing aid device (104-2, ..., 104-N) through at least one other intermediary hub device or user device (106-2, ..., 106-N) over the network 108. The devices are capable of pairing with a plurality of other devices over Bluetooth or another wireless communication interface. The communication protocol preferably permits opening a communication channel without security, though secure protocols with encryption and key exchange may also be supported. The at least one other user device 106 is further paired according to the communication protocol with at least one other hearing aid device 104 to facilitate communication. The one user device 106 paired with the at least one hearing aid device 104 communicatively couples with the at least one other user device 106 over a network 108, where the network 108 is a Bluetooth or a BLE network. In an embodiment, the network is the internet, a Wi-Fi, GSM or CDMA network. The user device 106 can include a variety of computing systems including, but not limited to, a smartphone, laptop computer, desktop computer, portable computer, personal digital assistant, handheld device and a wearable smart device such as a smart watch.

[0255] In an aspect, each Bluetooth enabled hearing aid device 104 of the present system 100 is paired with a separate user device 106 of its respective user 102 using a Bluetooth network that connects the Bluetooth enabled hearing aid device 104 of said user 102 with the network 108, allowing the user 102-1 to communicate with other users (102-2, ..., or 102-N) having hearing aid devices (104-2, ..., 104-N) communicatively coupled with the network 108 using the respective user devices (106-2, ..., 106-N) of the other users (102-2, ..., 102-N). In an embodiment, the network 108 is also a Bluetooth or BLE network. In another embodiment, the network 108 is a wireless network which can be implemented as one of the different types of networks, such as the internet, Wi-Fi, an LTE network, a CDMA network, and the like.

[0256] In another embodiment, each of the hearing aid devices 104 of the present system 100 includes a module compatible with the IEEE 802.11ax standard and Bluetooth 5.1, 5.2, or 5.3/BLE, and is configured to be paired with the respective user device 106 of the user 102 or a base station or hotspot. In an embodiment, the user device 106 is a Bluetooth enabled smartphone that pairs with the hearing aid device 104 over a Bluetooth Low Energy (BLE) network. In an embodiment, the user devices 106 may discover available hearing aid devices 104, verify communication connections, identify devices available or compatible for connection service, or send a broadcast service request to one or more user devices.

[0257] The communication module may help discover available hearing aid devices 104 and identify necessary software components, data, or any other device-dependent information or parameters, if any, that need to be uploaded from the Bluetooth module to the user device 106 to enable effective pairing or connection. Software components may be, for example, a device driver, an application, a special code or algorithm, an executable object or device-dependent data, parameters, information, etc. That is, the tethering device may be useful for establishing the communication protocol, but the communications themselves may be directly between the earpieces without an intermediary. The communications may also be between tethering devices, with the earpieces limited to communication with the tethering device. Hybrid techniques are possible.

[0258] Where the tethering device is reliably present, speech recognition or keyword spotting may be performed in the tethering device and not the hearing aid or earpiece. In that case, a micropower or nanopower implementation within the hearing aid or earpiece is not required, and rather a higher power implementation, such as on a smartphone host processor, may be employed. Further, if the smartphone is reliably connected to a cellular network, then the processing may be further shifted to the cloud or a cellular base station processor.

[0259] The user device 106 communicatively couples the hearing aid device 104 with the network 108 to facilitate communication between said hearing aid device 104 and the other hearing aid devices 104 of the system 100 over the network 108. The network 108 includes a Bluetooth Low Energy (BLE) network through which the user devices 106 connect with each other to allow communication between their paired hearing aid devices 104. The user device 106, using the BLE protocol, connects with other user devices 106 in a mesh topology, creating a mesh network, and facilitating each of the hearing aid devices 104 paired with the user devices 106 to communicate with one or more other hearing aid devices 104.
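The mesh coupling of user devices described above can be illustrated with a toy flood-forwarding relay bounded by a hop limit, loosely analogous to the managed flooding and TTL field used in BLE mesh; the topology, identifiers, and function name below are invented for the example.

```python
# Minimal sketch of mesh-style relay between paired devices: each node
# forwards a payload to neighbors it has a link with, up to ttl hops.
# This only illustrates the topology; it is not the BLE mesh protocol.

def mesh_deliver(links, source, payload, ttl=3):
    """Flood payload from source; return the set of nodes that received it."""
    received = {source: payload}
    frontier = [source]
    for _ in range(ttl):
        next_frontier = []
        for node in frontier:
            for neighbor in links.get(node, []):
                if neighbor not in received:
                    received[neighbor] = payload   # relay one hop onward
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return set(received)
```

With a chain topology, raising the hop limit extends how far a frame propagates, which is the property that lets distant earpieces communicate through intermediate devices.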

[0260] In an embodiment, each of the Bluetooth enabled hearing aid devices 104 of the present system 100 further includes a microphone, which may be, but is not limited to, a bone conduction microphone. The hearing aid devices 104 may further include a data conversion module, an amplifier and a speaker. In an embodiment, the data conversion module may upload necessary data from the Bluetooth module to the user device 106 so that the user 102 can output digital content to the speaker of the hearing aid device 104. The hearing aid device 104 may coordinate or manage the voice communication with a user device 106 to send or transmit the audio data to the hearing aid devices 104. In an embodiment, the bone conduction microphone configured within the first hearing aid device 104-1 receives or captures sound signals of the first user 102-1. In an embodiment, the bone conduction microphone is configured adjoining the surface of the hearing aid device, coupling with the ear bone when worn by the user 102, allowing the microphone to pick up the sound signals transmitted through the ear bone of the user 102 when he or she communicates, eliminating environmental or background noise from entering and admixing with the input sound data. In an embodiment, the captured input sound data, converted into electrical signals by the bone conduction microphone, is then converted into digital signals by the data conversion module of the first hearing aid device 104-1 to avoid distortion during transmission of said digital signals to one or more second hearing aid devices (104-2, ..., or 104-N). In an embodiment, the analog output electrical signal from the bone conduction microphone is converted into a digital signal using the data conversion module to prevent data loss or corruption by maintaining the integrity of the data.

[0261] In an embodiment, the data conversion module present within the at least one second hearing aid device (104-2, ..., or 104-N) re-converts said digitized sound signal into analog sound. The amplifier present within the at least one second hearing aid device (104-2, ..., or 104-N) of the present system 100 is configured to amplify said analog sound and emit it directly into the ear of the second user using the speaker provided within the at least one second hearing aid device (104-2, ..., or 104-N). In an embodiment, the speaker is a bone conduction speaker that transmits the sound signals as vibrations through the bones of the second user.

[0262] In an embodiment, the present Bluetooth enabled intercommunication system also allows a team of two or more hearing impaired users 102 to communicate with each other while travelling, when outside of the home, or even when they are at completely different locations. In an example, during adventure trips or when travelling together, the users 102 may communicate directly using their worn hearing aid devices 104, which work as a wireless intercom facilitating wireless communication between the users 102 over a Bluetooth network, where a smartphone or any other smart user device 106 pairs with the hearing aid devices 104 using a Bluetooth network to connect the hearing aid devices 104 with the network 108 for communication between two users 102 having hearing aid devices 104 connected over the network 108. In an embodiment, the network 108 connecting two smart user devices 106 is a Bluetooth or a BLE network, or a combination of both classic Bluetooth and BLE. In an embodiment, the network 108 is the internet or an IrDA, HomeRF, HiperLAN/2, Wi-Fi, GSM or CDMA network, wherein Wi-Fi is of any standard including 802.11ax, 802.11ac, 802.11n, 802.11g, 802.11b and 802.11a. In an embodiment, the hearing aid devices 104 may include a radio adapter implemented to enable data/voice transmission among devices through radio links. An RF transceiver coupled with an antenna is used to receive and transmit radio frequency signals. The RF transceiver may also convert radio signals into and from electronic signals. The RF transceiver is connected to an interface which may perform analog-to-digital conversion, digital-to-analog conversion, modulation/demodulation and other data conversion functions.

[0263] FIG. 2 depicts another exemplary embodiment of the present Bluetooth enabled intercom system 200. In an aspect, the intercom system 200 may include a Bluetooth enabled dedicated wireless network hub device 210 that creates a Bluetooth network for one or more hearing aid devices 204 to pair with and communicate over said created Bluetooth network. In an embodiment, the dedicated wireless network hub device uses the Bluetooth Low Energy (BLE) or Bluetooth protocol to create a Bluetooth network and to allow multiple Bluetooth enabled hearing aid devices 204 of different users 202 to couple and communicate with each other. In an embodiment, two or more Bluetooth enabled hearing aid devices 204 of the present system 200 are communicably coupled with the dedicated mediator network hub device 210 over the Bluetooth network created by the mediator network hub device 210 in a network topology such as, but not limited to, a star topology, a bus topology, etc. In a star network topology, the mediator network hub device 210 is the central node and each of the one or more hearing aid devices 204 is a peripheral node. In an embodiment, a communication from the transmitting node or transmitting hearing aid device 204 first reaches the central node or mediator network hub device 210, from where it reaches one or more receiving peripheral nodes or one or more other hearing aid devices 204 connected with the mediator network hub device 210 in the star topology network.
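The star-topology relay through the mediator network hub device can be sketched as follows: the hub is the central node and forwards a frame from the transmitting peripheral to every other paired peripheral. The class name, device identifiers, and method signatures are illustrative assumptions.

```python
# Toy model of the star topology: peripherals pair with a central hub,
# and all traffic between peripherals passes through the hub's relay.
# Identifiers follow the "204-N" labels used in the figure description.

class MediatorHub:
    def __init__(self):
        self.peripherals = {}           # node id -> inbox (list of frames)

    def pair(self, node_id):
        """Register a hearing aid device as a peripheral node."""
        self.peripherals[node_id] = []

    def relay(self, sender, frame):
        """Forward a frame from one peripheral to all other paired peripherals;
        return the list of recipients."""
        for node_id, inbox in self.peripherals.items():
            if node_id != sender:
                inbox.append(frame)
        return [n for n in self.peripherals if n != sender]
```

In contrast to the mesh arrangement, removing the hub here severs all peripheral-to-peripheral communication, which is the defining trade-off of the star topology.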

[0264] In one aspect, the dedicated wireless network hub device 210 may be a third party domestic smart assistant, such as Amazon Echo or Google Assistant, that may create a Bluetooth network to allow connection between the hearing aid devices (204-1 and 204-2) of the present system 200 with the created Bluetooth network, and hence with each other, when the users (202-1 and 202-2) are at home or when the system 200 is being used by domestic partners sharing the same living quarters. In an embodiment, the dedicated wireless network hub device 210 is a compact portable device that allows easy carrying by the users 202 while travelling or while out of the living quarters for any possible reason. In one aspect, the dedicated wireless network hub device 210 is any of, but not limited to, a Wi-Fi router or Wi-Fi card, a switch, a hotspot, or any other portable wireless network device.

[0265] FIG. 3 shows another exemplary embodiment of the present intercommunication system 300 where the hearing aid devices 304 are directly connected with each other, eliminating the need for intermediary hub devices.

[0266] In an aspect, at least one of a plurality of hearing aid devices 304 of the system 300 may create a wireless network, behaving as a network host device, to which the other nearby hearing aid devices (304-2, ..., 304-N) of the system 300 present within a specific range around the network host hearing aid device 304-1 may connect, to communicate with the network host hearing aid device 304-1 as well as with each other over the network created by the network host hearing aid device 304-1.

[0267] In an aspect, multiple hearing aid devices 304 are connected using Bluetooth modules configured within each of the plurality of hearing aid devices 304, where the Bluetooth module may use any of the subprotocols within the Bluetooth standard.

[0268] In an embodiment, one of the hearing aid devices 304-1 involved in the communication may create a network using the Bluetooth module, working as a network host device, thus eliminating the need for an intermediate hub device or any other smart user device. In an embodiment, the at least one other hearing aid device (304-2, ..., 304-N) may directly connect to the network created by the first hearing aid device 304-1 using the respective Bluetooth modules configured within each of the other hearing aid devices (304-2, ..., 304-N). In an embodiment, one hearing aid device 304-1 may communicate and exchange information with a Bluetooth module as part of negotiating the output services to be provided. As an example, in communication with a Bluetooth module, the hearing aid device 304-1 may provide the make, model, identification, version, type of input language, type of device driver software, type of services provided, type of components available for data communication, etc. for a selected hearing aid device 304, such as a bone conduction microphone and speaker. As another example, a Bluetooth module may send one or more messages to a hearing aid device 304 inquiring about what software component or data, if any, the Bluetooth module needs to upload to enable output to a specific type of hearing aid device 304.

[0269] The communication between the Bluetooth module and the hearing aid devices 304 is secured using authentication and data encryption. Authentication is used to prevent unwanted access to services, while encryption is used to prevent eavesdropping. Security procedures may be implemented by software, hardware or a combination of both, in various steps and stages of communication between the hearing aid devices 304 or Bluetooth modules and the network 108.
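The authentication step described above can be illustrated, under the assumption of a pre-shared link key, by an HMAC challenge-response sketch; the actual Bluetooth pairing and link-layer security procedures are defined by the Bluetooth specifications and differ from this toy example, which only shows why authentication gates service access.

```python
# Hedged sketch of challenge-response authentication over a pre-shared
# link key. Encryption of the subsequent audio stream is not shown; a
# real device would use the link-layer ciphers the Bluetooth stack provides.
import hashlib
import hmac
import os

def challenge():
    """Verifier issues a fresh random nonce."""
    return os.urandom(16)

def respond(link_key, nonce):
    """Prover returns an HMAC of the nonce under the shared link key."""
    return hmac.new(link_key, nonce, hashlib.sha256).digest()

def authenticate(link_key, nonce, response):
    """Verifier recomputes the HMAC and compares in constant time."""
    expected = hmac.new(link_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

A device that cannot produce the correct response (because it lacks the link key) is denied the service, which is the "prevent unwanted access" role authentication plays here.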

[0270] In an embodiment of the method of working of the present intercommunication system 300, the first hearing aid device 304-1 of the first user 302-1, working as a network host, creates a Bluetooth network and facilitates at least one second hearing aid device (304-2, ..., 304-N) of at least one second user (302-2, ..., 302-N) present within the Bluetooth network range of the first hearing aid device 304-1 to pair with the first hearing aid device 304-1 for communication over said created Bluetooth network. The first hearing aid device 304-1 and the at least one second hearing aid device (304-2, ..., 304-N) are connected directly with each other, forming a mesh network or mesh topology, to enable direct communication between the first hearing aid device 304-1 and the at least one second hearing aid device (304-2, ..., 304-N), or between two or more second hearing aid devices, over a BLE mesh network.

[0271] Although in various embodiments, the implementation of systems is explained with regards to different network devices, those skilled in the art would appreciate that, the system 300 can fully or partially be implemented using any other possible wireless network device and the method of working of the same can be achieved with minor modifications, without departing from the scope of present disclosure.

[0272] FIG. 4 shows a block diagram of the present Bluetooth enabled intercom system 400 illustrating the data transmission between components of the hearing aid devices 404 to facilitate wireless communication between the users over the network 408. In an embodiment, the users may have hearing impairments. In an embodiment, the Bluetooth module 412-1 of the first hearing aid device 404-1 pairs with the intermediate network hub device 410-1, which further connects the first hearing aid device 404-1 with the network 408. In an embodiment, the Bluetooth module 412-1 of the first hearing aid device 404-1 may send a pairing request to the network hub device 410-1 to connect to the network 408. In an alternate embodiment, the network hub device 410-1 may broadcast a connection request to discover available hearing aid devices 404. The network hub device 410-1 exchanges service information with the hearing aid devices 404 associated with the Bluetooth module 412-1 in a service negotiation process. The user may then select one or more available hearing aid devices 404 based on the service information provided, such as drivers corresponding to a particular hearing aid device model and make. Using a device-dependent or specific driver, the hearing aid device 404 may process input and output content. Various types of audio codecs and formats may be used to encode or decode the audio data to enable audio conferencing, such as, but not limited to, WebRTC, MP3, MP4, 3GP, RTP, etc., that a speaker or a microphone understands (herein referred to as audio data). For example, the audio data may include a hearing aid device specific input format, encoding, protocols or data that can be understood or used by a particular hearing aid device make and model.

[0273] In an embodiment, the hearing aid device 404 is synchronized with the network hub device 410. After the hearing aid device 404 is paired with the network hub device 410 and the network hub device 410 identifies the components (software component, data, information or parameters) necessary to enable output or input, the Bluetooth module 412-1 may coordinate with the network hub device 410 to upload to the Bluetooth module 412-1 the components stored in a memory or storage unit of the network hub device 410. In an embodiment, a software component is stored in the memory unit of the Bluetooth module 412-1 as an installation wizard or a user interface to capture a user's preferences for audio output operation. Examples of user preferences may include, without limitation, the type of noise cancellation required, the audio input method and audio quality parameters, default volume adjustments, security information, etc. Additional application software may be installed or upgraded to newer versions in order to, for example, provide additional functionalities or bug fixes.

[0274] The first microphone 416-1 is configured within the first hearing aid device 404-1 to capture the audio signals from the first user and send them to the first data conversion module 414-1, which converts the analog sound signals into digital signals for transmission to the second hearing aid device 404-2 over the network 408. In an embodiment, the microphone is a bone conduction microphone that converts sound waves of the first user into electrical signals without environmental noise, which are then converted into a digital signal by the first data conversion module 414-1.

[0275] In an embodiment, the second hearing aid device 404-2 is coupled with the network 408 using the second Bluetooth module 412-2 configured within it to receive the digital signals transmitted from the first hearing aid device 404-1. The second data conversion module 412-2 configured within the second hearing aid device 404-2 re-converts the digital signals back into analog sound signals, which are then amplified by the second amplifier 418-2 before being emitted into the ear of the second user through the second speaker 420-2. In an embodiment, during return communication, the second hearing aid device 404-2 switches responsibility by behaving as a transmitting hearing aid device to transmit the communication from the second user, while, on the other end, the first hearing aid device 404-1 behaves as a receiving hearing aid device to receive the communication and emit it into the ear of the first user, allowing two-way communication using the hearing aid devices 404 of the present system 400. In an aspect, the second speaker 420-2 is a bone conduction speaker that transmits the sound signals into the ear of the second user as vibration signals directly through the skull bones of the second user.

[0276] FIG.5 shows a flow diagram illustrating a method 500 of hearing aid facilitation in accordance with the various embodiments of the present disclosure explained above.

[0277] In an aspect, the method of working 500 of the present Bluetooth enabled intercom system 100 comprises the steps of: receiving a voice or sound signal from the first user by the first microphone of the first hearing aid device at step 502; digitizing said received sound signal by the first data conversion module and transmitting said digitized data using the first communication module of the first hearing aid device at step 504; receiving and re-converting said digitized data back into an analog voice signal, by the second communication module and the second data conversion module of the second hearing aid device, respectively, at step 506; amplifying said re-converted voice signals at step 508 to increase their intensity; and emitting said amplified voice signals into the ears of the second user at step 510, allowing wireless communication between the two users using their worn hearing aid devices.
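The five-step flow of method 500 may be sketched in software as follows. The 8-bit quantization depth, the gain factor, and the signal values are illustrative assumptions, not parameters taken from the disclosure:

```python
def digitize(analog_samples, levels=256):
    """Step 504: quantize analog samples in [-1.0, 1.0] to 8-bit codes (ADC)."""
    return [round((s + 1.0) / 2.0 * (levels - 1)) for s in analog_samples]

def reconvert(codes, levels=256):
    """Step 506: map digital codes back to analog values (DAC)."""
    return [c / (levels - 1) * 2.0 - 1.0 for c in codes]

def amplify(analog_samples, gain=4.0):
    """Step 508: increase signal intensity before emission."""
    return [s * gain for s in analog_samples]

def intercom_transmit(analog_samples):
    """Steps 502-510: the first device digitizes and sends; the second
    device reconverts, amplifies, and emits into the second user's ear."""
    packet = digitize(analog_samples)    # first hearing aid (transmit side)
    received = reconvert(packet)         # second hearing aid (receive side)
    return amplify(received)             # fed to the second speaker
```

In a real device the transmit and receive halves would of course run on separate hearing aids, with the packet carried over the Bluetooth link.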

[0278] In an embodiment of the method of working 500, the first bone conduction microphone configured within the first hearing aid device captures the voice or sound signals of the first user transmitted through his/her bones and provides an output electrical signal, which is further converted by the first data conversion module of the first hearing aid device into digital signals before transmission over the Bluetooth network. The data conversion module is a combination of both an Analog to Digital Converter (ADC) and a Digital to Analog Converter (DAC). In an embodiment, the analog to digital converter (ADC) of the first data conversion module digitizes the analog sound signals of the first user so that they can be transmitted to at least one second hearing aid device of the second user using the Bluetooth module.

[0279] In an embodiment, the first hearing aid device and the at least one second hearing aid device are communicatively coupled with the network using their respective Bluetooth modules, directly or through any other network hub device. In an embodiment, the first Bluetooth module of the first hearing aid device 104-1 works as a network host to communicatively couple with the at least one second hearing aid device directly. In another embodiment, the first hearing aid device and the at least one second hearing aid device are coupled to the network through an intermediate network device. The intermediate network device can be any of, but not limited to, a smartphone, a laptop, a wearable smart device, a dedicated network hub device, a personal smart assistant or any other portable network device that may pair the hearing aid devices with the network.

[0280] In an embodiment, the first hearing aid device transmits the digitized sound signal to the at least one second hearing aid device communicatively coupled with the first hearing aid device over the network. Upon receiving the digitized sound data from the first hearing aid device, the second data conversion module present within the second hearing aid device re-converts said digitized sound data back into an analog sound signal. In an aspect, the second data conversion module is a digital to analog converter (DAC) which converts the digital sound signals into analog sound or voice signals.

[0281] It is noted that the hearing aids may communicate using analog communications.

[0282] In an embodiment, the re-converted voice signals are then amplified by the second amplifier present within the at least one second hearing aid device. The amplified sound is then emitted by the second speaker configured within the second hearing aid device into the ears of the second user.

[0283] FIG.6 shows a schematic diagram of a hearing aid 600 with a Bluetooth radio 606 for implementing a wireless intercom and voice control using a concurrent DNN keyword detector 605. Each hearing aid 600 provides a microphone array 601 to assist in sound localization, for noise reduction and improved signal to noise ratio. A microcontroller 604 controls the system, and provides echo suppression and assistance in pairing assisted communications. A bone conduction microphone 602 is also provided. The signals from the microphone array 601 and the bone conduction microphone 602 are digitized with a multichannel analog to digital converter 603. The microcontroller 604 stores the received sounds in a buffer memory 607. A Bluetooth radio 606 transceiver is provided, which is persistently paired with another Bluetooth transceiver of a hearing aid inserted into the wearer's other ear. Therefore, the system of a pair of hearing aids provides two arrays of microphones, which can be used to map a sound field around the wearer. The digitized audio from the microphone arrays is then processed by a digital signal processor (software implemented within the microcontroller 604, or discrete) in the respective hearing aid 600, which performs the typical hearing aid functions of frequency equalization and noise suppression, and the sounds are amplified with amplifier 608 and fed to a miniature speaker 610. In some cases, the digital processing is performed by one of the hearing aids, while the other processor essentially controls its associated Bluetooth transceiver, to achieve power savings and coordination between the pair of hearing aids. To equalize battery drain, the processing responsibility may periodically shift between right and left channels. On the other hand, the two processors may distribute portions of the processing load as part of a parallel processing system.

[0284] Typically, a bone conduction microphone 602 does not require an array configuration (and is treated as a scalar unitary signal), and therefore the system may process a single signal and ignore one signal from the other hearing aid. It is preferable to have common hardware between the hearing aids in a pair, both for uniformity, and to ensure operation if only one hearing aid is used, or if the battery of one hearing aid is exhausted.

[0285] The system preferably has a wake detector 609 to detect speech from the wearer (and ignore other speech or noises), which then powers up the microcontroller 604, which may otherwise remain asleep. The microcontroller 604 filters the speech, and passes the filtered speech to a keyword spotting processor comprising a concurrent deep neural network keyword detector 605, wherein the keyword spotting provides for detection of a plurality of different keywords which, upon detection, trigger specific actions or code module execution, in the manner of a vectored interrupt. Further, the system is preferably adaptive, both to speaker (wearer) characteristics and to new keywords, though the adaptivity may be achieved through a server or tethered device and need not be implemented on the host processor of the hearing aid.
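The "vectored interrupt" style of keyword dispatch described above may be sketched as a handler table indexed by the spotted keyword. The keywords and handler actions here are illustrative assumptions, not a prescribed command set:

```python
# Handler table: each spotted keyword vectors to a registered code module.
KEYWORD_VECTORS = {}

def on_keyword(word):
    """Register a handler to execute when `word` is spotted."""
    def register(handler):
        KEYWORD_VECTORS[word] = handler
        return handler
    return register

@on_keyword("call")
def start_call(args):
    # Hypothetical action: open a channel to the named counterparty.
    return f"opening channel to {args[0]}"

@on_keyword("disconnect")
def end_call(args):
    return "channel closed"

def dispatch(spotted_word, args=()):
    """Trigger the specific action associated with a spotted keyword;
    unknown words fall through without altering system operation."""
    handler = KEYWORD_VECTORS.get(spotted_word)
    return handler(args) if handler else None
```

New keywords can be added at runtime by registering additional handlers, mirroring the adaptivity described above.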

[0286] Another feature that may be included is speech translation and/or reiteration. That is, when the far field microphone array of the hearing aid system hears sounds, in most cases these are amplified and then reproduced with noise reduction and equalization through the speaker. However, according to one embodiment, the sounds are intercepted and, for example, passed to a speech-to-phoneme or speech-to-text converter, though the intermediate state need not be a discrete result. The phonemes, text or other intermediate are then optionally translated into a different language or dialect, and a speech synthesizer then recreates the speech in a different language, dialect, and/or pronunciation. For example, persons with hearing disability may have garbled speech which is difficult for others to understand. However, using an intelligent algorithm, the content of the speech may be extracted, and that content reiterated to the user with a more comprehensible voice or pronunciation. Because of the need for a large dictionary and processing, the translation and comprehension process may be offloaded to a paired cellphone, a dedicated or shared on-premises processor, or a cloud processing resource.

[0287] Similarly, where the external processing resource is available, after waking, the hearing aid may communicate the keyword to the remote processor for keyword spotting or speech recognition. Typically, the speech will precede the waking and linking to the external processing resource, so the audio signal(s) are buffered locally, and transmitted as data packets (as opposed to a real-time audio stream) to the external processing resource.

[0288] A typical usage of the system is to facilitate one-to-one communications or group conversations between people who may each have hearing impediments, and are wearing compatible hearing aid devices. However, not all participants in the conversation need to be so-equipped. (Of course, remote intercom would be limited or unavailable, depending on available resources.) When the conversation is between two people, the system preferably blocks out all external noises, and may center each speaker in the other's soundfield. On the other hand, the system may detect orientation, and accurately spatialize the soundfield so that head movements naturally change hearing. Likewise, in a group chat, each speaker may be distinguished within the soundfield by a synthetic location, to isolate sources, or the speakers may be accurately localized based on their physical relationships and head position. The system may automatically determine the mode, or hybridize different modes, based on a speech understanding model, or a spoken keyword/key phrase such as "please repeat?", indicative of poor comprehension and an implicit request for more intelligible content.

[0289] The communication is initiated, for example, by a spoken request. The system may also be operable by a smartphone virtual interface, another remote control (e.g., Bluetooth), control buttons on the hearing aids, or the like. However, for the targeted population and usage, consistently available "zero touch" speech control is preferred. The commands, while nominally not natural language speech, are contextually natural language, and therefore the learning curve will be short, and accidental control an unusual occurrence. In order to further reduce the incidence of unintentional control, a consciousness detector may be provided, to prevent sleep or delirium from permitting activation. Further, the keyword spotting is preferably limited to isolated keywords or keyphrases, so that names and words in conversation do not unnecessarily alter system operation.

[0290] The system may include a fall detector, such as an accelerometer or shock sensor, and may further include one or more photoplethysmographic (PPG) sensors to detect blood oxygenation, blood glucose, carbon monoxide, pH, lactic acid, or certain other blood chemistry. An electromyographic sensor or electroencephalographic sensor may also provide brain activity readings. When the hearing aid is inserted in the ear canal, it is possible to pick up EEG signals locally, which may provide sensitive detection of transient ischemic attacks (TIA) or stroke based on asymmetric changes in brainwave pattern. These sensor readings are typically not communicated between the wearer of the hearing aid and a conversant, though in the case of caregivers, this information may be useful, and may be communicated either as data to an app on a smartphone or a remote server, or verbally.

[0291] When clinical or clinically relevant data is available, the device or a linked device may analyze the data and determine emergency conditions, personalized medicine parameters (such as pharmaceutical dose or timing), maintain a clinical record of readings, etc. Since the hearing aids are typically provided in pairs, each hearing aid may have a set of sensors, providing a basis for sensing asymmetry. For example, in the case of a stroke, various parameters may acutely become asymmetric, which can be sensed in real time, perhaps before symptoms are recognized. The asymmetries may be detected within the pair of hearing aids, or in a linked device such as a smartphone.

[0292] According to one embodiment, a switched voice channel communication scheme is implemented. A plurality of people in an environment, and preferably all people in the environment, have communication devices configured as hearing aids. Each person is identified by a unique identifier, such as a name, nickname, number, or the like. The hearing aids are connected either directly or indirectly to a switching endpoint, which manages authentication, channel assignment, switching, conferencing, etc. The switching endpoint may be a server executing Asterisk, e.g., Asterisk 20.0.1 (www.asterisk.org). Each link is encrypted and authenticated with the switching endpoint. One person moves in proximity to another person, and begins talking. The switch identifies both people, their relative proximity, and the initiation of voice communication, and automatically creates a radio frequency voice channel between the two. Because the people are in proximity, the channel as established is a direct peer-to-peer communication channel, and the speech does not need to pass through the switching endpoint, which might consume more power. To cease the communication, one participant may utter a keyword and command, such as "Xiamazen Disconnect" ("Xiamazen" being a nonsense word with uncommon phonemes). In another example, a person seeks to communicate with another person not in physical proximity. The person utters "Xiamazen call Robert". The wake word "Xiamazen" causes the system to wake, and pass the subsequent arguments to a command processor. The word "call" is interpreted as a command to establish a communication channel. The word "Robert" is an argument of the command "call", and causes a lookup in an address book and reading of the contact properties, which is, for example, a MAC or other radio address for the hearing aid of a person identified as "Robert". The switching endpoint then opens a communication channel with Robert, and requests authorization to establish the channel. Robert may accept or decline the communication, such as by saying "OK" or "No". If the communication is authorized, the communication is then established through the switching endpoint between the caller and Robert. The call passes through the endpoint in this case because the two people are not near each other. In a further example, a group conversation is sought to be established between a diverse group both in proximity to the caller and distant. In this case, the caller identifies the group by individual or predetermined group characteristics, such as location, role, etc. The switching server determines the endpoint identities of all targets, and sends an invitation to each targeted person. In this case, the invitation is not in the form of a communication channel, but rather digital data that defines an interaction with the targeted person. The targeted person may refuse the communication, accept, or elect other options. If the person accepts, the switching endpoint determines the best type of communication channel: local infrastructure, cellular infrastructure, VOIP, peer-to-peer, multihop ad hoc network, etc. In some cases, a local group peer-to-peer communication is established, which may be encrypted using a group encryption key, which may be distributed by the switching endpoint in the initial invitation or subsequently.
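The wake-word command grammar illustrated by "Xiamazen call Robert" may be sketched as follows. The address-book entry and the MAC address are hypothetical values introduced only for illustration:

```python
WAKE_WORD = "xiamazen"
# Hypothetical address book: contact name -> radio (MAC) address.
ADDRESS_BOOK = {"robert": "00:1A:7D:DA:71:13"}

def parse_utterance(text):
    """Split a spotted utterance into (command, args) after the wake word;
    return None if the wake word is absent, so the system stays asleep."""
    tokens = text.lower().split()
    if not tokens or tokens[0] != WAKE_WORD:
        return None
    return (tokens[1], tokens[2:]) if len(tokens) > 1 else (None, [])

def handle(text):
    parsed = parse_utterance(text)
    if parsed is None:
        return "ignored"            # no wake word: conversation untouched
    command, args = parsed
    if command == "call" and args:
        address = ADDRESS_BOOK.get(args[0])   # look up contact properties
        return f"dialing {address}" if address else "unknown contact"
    if command == "disconnect":
        return "disconnected"
    return "unknown command"
```

Note how ordinary speech lacking the wake word ("call Robert") falls through unhandled, consistent with the goal of avoiding accidental control.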

[0293] When personal, private or sensitive data is communicated, it may be sent as an encrypted communication. Typically, a Bluetooth or BLE link is encrypted. However, when communicating over a network, various stages may decrypt and re-encrypt the data; this requires a trusted intermediary. However, by encrypting the contents of the communication separately from the communication protocol, the intermediary need not be trusted. Further, cryptolope and transcryption technologies exist (proxy key cryptography, atomic key cryptography) that allow intermediated communications without requiring trust in the intermediaries. Similarly, data sent to a repository may be encrypted at rest and in transit.

[0294] When establishing a communication channel between peers, a key exchange protocol, such as Diffie-Hellman or the like, may be used to ensure private communication between endpoints. In a multihop network, the packet header may be unencrypted, while its contents are encrypted. In a group communication, a technology similar to IEEE 802.11i may be employed (en.wikipedia.org/wiki/IEEE_802.11i-2004). Note that Bluetooth does not typically support WiFi standard group communication security technologies.
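A toy Diffie-Hellman exchange illustrating the key agreement mentioned above. The Mersenne prime used here is far too small for real security and is chosen only so the example runs instantly; a real deployment would use a standardized group (e.g., from RFC 3526) or an elliptic-curve variant:

```python
import hashlib
import secrets

P = 2**127 - 1   # Mersenne prime; illustrative only, NOT a secure group size
G = 3            # illustrative generator

def keypair():
    """Each endpoint picks a private exponent and publishes g^x mod p."""
    private = secrets.randbelow(P - 2) + 2
    public = pow(G, private, P)
    return private, public

def shared_key(my_private, their_public):
    """Both sides compute the same g^(ab) mod p, hashed to a session key."""
    secret = pow(their_public, my_private, P)
    return hashlib.sha256(secret.to_bytes(16, "big")).digest()
```

Neither endpoint ever transmits its private exponent, yet both derive an identical 256-bit session key for encrypting the peer-to-peer channel.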

[0295] IEEE 802.11 supports multicast and broadcast messages. In an infrastructure network (that is, a network using an access point), multicasts are only sent from the access point to the mobile devices. Mobile devices are typically not allowed to send broadcasts directly; however, they can initiate a broadcast by sending the message to the access point, which then broadcasts it on the station's behalf (to both the wireless devices and any attached wired LAN).

[0296] Pairwise keys cannot be used for broadcasts. Each mobile device has a different set of pairwise keys, so it would be necessary to send multiple copies of the broadcast, each encrypted differently. Therefore, a separate key hierarchy is maintained specifically for use in encrypting multicasts. This is called the group key hierarchy. Unlike pairwise keys, all the mobile devices and the access point share a single set of group keys. This allows all the stations to decrypt a multicast message sent from the access point. While this solves a problem, it also creates one: how to handle the case in which a mobile station leaves the network. If a mobile device chooses to leave the Wi-Fi LAN, it should notify the access point by sending an IEEE 802.11 disassociate message. When it does this, the access point erases its copy of the pairwise keys for the departing mobile device and stops sending it messages. If the device wants to rejoin later, it must go through the whole key establishment phase from scratch. In the case of the group key, even though the device has left the network, it can still receive and decrypt the multicasts that are sent, because it still has a valid group key available. This is not acceptable from a security standpoint; if a device leaves the network, it should no longer be allowed any access at all.

[0297] The solution to this problem is to change the group key upon each new session and also when a device leaves the network. Negotiating the pairwise keys is complicated because it starts with no secure connection in place and incurs the risk of attacks from simple snooping to message forgery. Group key distribution may be deferred until each participant is authenticated and has a pairwise key, and then the secure link is used to send the group key value. This provides a significant simplification and means that the actual group key values can be sent directly to each station without concern about interception or modification. Group key distribution is done using EAPOL-Key messages, as for pairwise keys. However, only two messages are needed, not four.

[0298] The access point (or designated master) performs the following steps during group key distribution: create a 256-bit group master key (GMK); derive the 256-bit group transient key (GTK), from which the group temporal keys are obtained; and, after each pairwise secure connection is established, send the GTK to the mobile device with the current sequence number and check for acknowledgment of receipt. Because it is necessary to update the group key from time to time, a method is needed to perform the update without causing a break in the service. This would be a problem if the mobile device could store only a single group key, because it takes time to go round each device and give them all the new key value. Multiple keys may be stored in the mobile device, allowing for sequenced rekeying. Each transmitted frame carries a 2-bit field called KeyID that specifies which key should be used for decryption. Pairwise keys are sent with a KeyID value of 0, but the other three key storage slots may be used for group key updates.

[0299] Suppose that the current group keys are installed into KeyID 1. When we want to update, the authenticator sends the new key with instructions to put it at KeyID 2. During this key update phase, multicasts are still sent using KeyID 1 until all the attached stations have been informed of the new key. Finally, the authenticator switches over, and all the multicasts from then on (until the next key change) are sent with KeyID 2.
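The KeyID-based rekeying sequence above can be sketched as a small per-station key store. The slot assignments follow the worked example; the key values are illustrative:

```python
class GroupKeyStore:
    """Four key slots addressed by the 2-bit KeyID field; slot 0 is
    reserved for pairwise keys, slots 1-3 hold group keys."""

    def __init__(self):
        self.slots = {0: None, 1: None, 2: None, 3: None}
        self.tx_key_id = 1            # KeyID currently used for multicasts

    def install(self, key_id, gtk):
        """Receive a new GTK into a spare slot; the old slot keeps
        decrypting in-flight multicasts during the update phase."""
        self.slots[key_id] = gtk

    def switch_to(self, key_id):
        """All stations have the new key: start transmitting with it."""
        self.tx_key_id = key_id

    def key_for_decrypt(self, key_id):
        """Pick the decryption key named by a received frame's KeyID."""
        return self.slots[key_id]
```

Because both the old and new keys coexist in separate slots, the changeover causes no break in service.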

[0300] In the case of pairwise keys, the PMK is produced by the upper-layer authentication method (or by using preshared keys). This process doesn't apply for the group keys, because the key is not generated per device. However, because the object of the group keys is only to protect messages and not to provide authentication, there is no need to tie the key into the identity of any specific device. The access point (or master node) allocates a GMK simply by choosing a 256-bit cryptographic-quality random number. Once the GMK is selected, it is necessary to derive the group temporal keys. Two keys are required: a Group Encryption Key (128 bits) and a Group Integrity Key (128 bits). The combination of these two keys forms a 256-bit value, the GTK. This is the value that is sent by the access point to each attached station. The GTK is derived from the GMK by combining it with a nonce value and the MAC address of the access point. Given that the GMK is completely random to start with, this is arguably an unnecessary step, but it does provide consistency with the pairwise key case.
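A sketch of the GTK derivation just described, with HMAC-SHA256 standing in for the standard's PRF and an illustrative label string:

```python
import hashlib
import hmac

def derive_gtk(gmk, ap_mac, gnonce):
    """Combine the GMK with the AP's MAC address and a nonce, producing
    the 256-bit GTK: a 128-bit group encryption key plus a 128-bit group
    integrity key."""
    gtk = hmac.new(gmk, b"Group key expansion" + ap_mac + gnonce,
                   hashlib.sha256).digest()        # 32 bytes = 256 bits
    return gtk[:16], gtk[16:]   # (encryption key, integrity key)
```

The derivation is deterministic given the same GMK, MAC, and nonce, so the AP can reproduce the GTK it distributed.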

[0301] If a security server is implemented, the authentication phase is completed using an upper-layer authentication. If successful, this both authenticates the supplicant and authorizes it to join the network. If using a preshared key, authentication is assumed and subsequently verified during the four-way key handshake. Given that peer-to-peer authentication for encrypted group communications is not fully standardized, it is not necessary to comply with IEEE-802 family protocols.

[0302] Once authorized, the mobile device and access point perform a four-way handshake to generate temporal keys and prove mutual knowledge of the PMK. Finally, the access point computes and distributes group keys. Only after all these phases have completed is user data finally allowed to flow between the authenticator and the supplicant. At last, the communications link is open for business and all the keys are available to implement the encryption and protection needed.

[0303] IEEE 802.11i provides a Robust Security Network (RSN) with a four-way handshake and a group key handshake. These utilize the authentication services and port access control described in IEEE 802.1X to establish and change the appropriate cryptographic keys. The RSN is a security network that only allows the creation of robust security network associations (RSNAs), which are a type of association used by a pair of stations (STAs) if the procedure to establish authentication or association between them includes the 4-Way Handshake. Two RSNA data confidentiality and integrity protocols are provided, TKIP and CCMP.

[0304] The initial authentication process is carried out either using a pre-shared key (PSK), or following an EAP exchange through 802.1X (known as EAPOL, which requires the presence of an authentication server, though a lightweight server may be implemented within the network). This process ensures that the client station (STA) is authenticated with the access point (AP). Note that the protocol may be modified to permit decentralized and/or infrastructureless communication modes. After the PSK or 802.1X authentication, a shared secret key is generated, called the Pairwise Master Key (PMK). In PSK authentication, the PMK is actually the PSK, which may be derived from another key such as a WiFi password by putting it through a key derivation function that uses SHA-1 as the cryptographic hash function. If an 802.1X EAP exchange was carried out, the PMK is derived from the EAP parameters provided by the authentication server.
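In PSK mode, WPA2 derives the PMK from the passphrase using PBKDF2 with HMAC-SHA1, the SSID as the salt, 4096 iterations, and a 256-bit output; this can be sketched directly with the Python standard library:

```python
import hashlib

def derive_pmk(passphrase, ssid):
    """PSK-mode PMK: PBKDF2-HMAC-SHA1(passphrase, ssid, 4096 iterations,
    32-byte output), per the WPA2 passphrase-to-key mapping."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)
```

Because the SSID salts the derivation, the same passphrase yields different PMKs on networks with different SSIDs.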

[0305] The four-way handshake is designed so that the access point (or authenticator) and wireless client (or supplicant) can independently prove to each other that they know the PSK/PMK, without ever disclosing the key. Instead of disclosing the key, the access point (AP) and client encrypt messages to each other, messages that can only be decrypted by using the PMK that they already share; if decryption of the messages is successful, this proves knowledge of the PMK. This same scheme may be employed to provide secure peer-to-peer communications between ad hoc nodes, with one node taking the place of the access point. SSL-style security may also be employed.

[0306] The PMK is designed to last the entire session and should be exposed as little as possible; therefore, keys to encrypt the traffic need to be derived. A four-way handshake is used to establish another key called the Pairwise Transient Key (PTK). The PTK is generated by concatenating the following attributes: PMK, AP nonce (ANonce), STA nonce (SNonce), AP MAC address, and STA MAC address. The product is then put through a pseudo-random function. The handshake also yields the GTK (Group Temporal Key), used to decrypt multicast and broadcast traffic.
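The PTK construction just described can be sketched as follows. HMAC-SHA256 stands in here for the standard's PRF-384/PRF-512, and the inputs are sorted so that both endpoints concatenate them in the same order and derive the same key:

```python
import hashlib
import hmac

def prf(key, label, data, nbytes):
    """Counter-mode pseudo-random function built on HMAC-SHA256,
    iterated until enough output bytes are produced."""
    out = b""
    counter = 0
    while len(out) < nbytes:
        out += hmac.new(key, label + data + bytes([counter]),
                        hashlib.sha256).digest()
        counter += 1
    return out[:nbytes]

def derive_ptk(pmk, ap_mac, sta_mac, anonce, snonce, nbytes=64):
    """PTK from PMK, both nonces, and both MAC addresses; sorting the
    inputs makes the derivation symmetric between AP and STA."""
    data = (min(ap_mac, sta_mac) + max(ap_mac, sta_mac)
            + min(anonce, snonce) + max(anonce, snonce))
    return prf(pmk, b"Pairwise key expansion", data, nbytes)
```

Either side can run this after message 1 or 2 of the handshake, once it holds both nonces.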

[0307] The actual messages exchanged during the handshake are depicted in the figure and explained below (all messages are sent as EAPOL-Key frames): The AP sends a nonce-value (ANonce) to the STA together with a Key Replay Counter, which is a number that is used to match each pair of messages sent and to discard replayed messages. The STA now has all the attributes to construct the PTK. The STA sends its own nonce-value (SNonce) to the AP together with a Message Integrity Code (MIC), including authentication, which is really a Message Authentication and Integrity Code (MAIC), and the Key Replay Counter, which will be the same as in Message 1, to allow the AP to match the right Message 1. The AP verifies Message 2, by checking the MIC, RSN, ANonce and Key Replay Counter Field, and if valid constructs and sends the GTK with another MIC. The STA verifies Message 3, by checking the MIC and Key Replay Counter Field, and if valid sends a confirmation to the AP.

[0308] The Pairwise Transient Key (64 bytes) is divided into five separate keys: 16 bytes of EAPOL-Key Confirmation Key (KCK), used to compute the MIC on WPA EAPOL-Key messages; 16 bytes of EAPOL-Key Encryption Key (KEK), which the AP uses to encrypt additional data sent (in the 'Key Data' field) to the client (for example, the RSN IE or the GTK); 16 bytes of Temporal Key (TK), used to encrypt/decrypt unicast data packets; 8 bytes of Michael MIC Authenticator Tx Key, used to compute the MIC on unicast data packets transmitted by the AP; and 8 bytes of Michael MIC Authenticator Rx Key, used to compute the MIC on unicast data packets transmitted by the station.
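The byte layout above maps directly to slicing the 64-byte PTK; a minimal sketch:

```python
def split_ptk(ptk):
    """Partition the 64-byte PTK into its five sub-keys, per the byte
    layout listed above."""
    assert len(ptk) == 64
    return {
        "KCK": ptk[0:16],       # EAPOL-Key confirmation (MIC) key
        "KEK": ptk[16:32],      # EAPOL-Key encryption key
        "TK":  ptk[32:48],      # temporal key for unicast data
        "MIC_TX": ptk[48:56],   # Michael MIC key, AP-transmitted frames
        "MIC_RX": ptk[56:64],   # Michael MIC key, station-transmitted frames
    }
```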

[0309] The Group Temporal Key (32 bytes) is divided into three separate keys: 16 bytes of Group Temporal Encryption Key, used to encrypt/decrypt multicast and broadcast data packets; 8 bytes of Michael MIC Authenticator Tx Key, used to compute the MIC on multicast and broadcast packets transmitted by the AP; and 8 bytes of Michael MIC Authenticator Rx Key, currently unused, as stations do not send multicast traffic. The Michael MIC Authenticator Tx/Rx Keys in both the PTK and GTK are only used if the network is using TKIP to encrypt the data.

[0310] The Group Temporal Key (GTK) used in the network may need to be updated due to the expiration of a preset timer. When a device leaves the network, the GTK also needs to be updated; this prevents the device from receiving any more multicast or broadcast messages from the AP. To handle the updating, 802.11i defines a Group Key Handshake that consists of a two-way handshake: the AP sends the new GTK to each STA in the network, encrypted using the KEK assigned to that STA and protected from tampering by a MIC; the STA acknowledges the new GTK and replies to the AP.

[0311] CCMP is based on the Counter with CBC-MAC (CCM) mode of the AES encryption algorithm. CCM combines CTR for confidentiality and CBC-MAC for authentication and integrity. CCM protects the integrity of both the MPDU Data field and selected portions of the IEEE 802.11 MPDU header.

[0312] RSNA defines two key hierarchies: the pairwise key hierarchy, to protect unicast traffic; and the GTK, a hierarchy consisting of a single key to protect multicast and broadcast traffic. The description of the key hierarchies uses the following two functions: L(Str, F, L) - from Str, starting from the left, extract bits F through F+L-1; and PRF-n - a pseudorandom function producing n bits of output, available in 128-, 192-, 256-, 384- and 512-bit versions. The pairwise key hierarchy utilizes PRF-384 or PRF-512 to derive session-specific keys from a PMK, generating a PTK, which gets partitioned into a KCK and a KEK plus all the temporal keys used by the MAC to protect unicast communication.

[0313] The GTK is a random number that is also generated using PRF-n, usually PRF-128 or PRF-256; in this model, the group key hierarchy takes a GMK (Group Master Key) and generates a GTK.

[0314] "The Protected Frame field is 1 bit in length. The Protected Frame field is set to 1 if the Frame Body field contains information that has been processed by a cryptographic encapsulation algorithm. The Protected Frame field is set to 1 only within data frames of type Data and within management frames of type Management, subtype Authentication. The Protected Frame field is set to 0 in all other frames. When the Protected Frame field is set to 1 in a data frame, the Frame Body field is protected utilizing the cryptographic encapsulation algorithm and expanded as defined in Clause 8. Only WEP is allowed as the cryptographic encapsulation algorithm for management frames of subtype Authentication."

[0315] IEEE 802.11i-2004: Amendment 6: Medium Access Control (MAC) Security Enhancements (PDF), IEEE Standards, 2004-07-23.
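Testing the Protected Frame bit of a received frame can be sketched as below. In the IEEE 802.11 MAC header, the Protected Frame field occupies bit 6 of the second Frame Control octet (mask 0x40); the sample frame bytes here are illustrative.

```python
def protected_frame_bit(frame_control: bytes) -> int:
    """Return the Protected Frame field (1 bit) from a 2-byte Frame Control field.
    It is bit 6 of the second octet of the IEEE 802.11 Frame Control field."""
    return (frame_control[1] >> 6) & 1

# Illustrative Frame Control values:
unprotected_data = b"\x08\x01"  # data frame, Protected Frame = 0
protected_data = b"\x08\x41"    # data frame, Protected Frame = 1 (0x40 set)
```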

[0316] "IEEE 802.11-2007: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications". IEEE. 2007-03-08.

[0317] "The Evolution of 802.11 Wireless Security" (PDF). ITFFROC. 2010-04-18.

[0318] It will be appreciated by those of ordinary skill in the art that the diagrams, schematics, illustrations, and the like represent conceptual views or processes illustrating systems and methods embodying this invention. The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing associated software.

[0319] As used herein in this document, the terms "coupled to" and "coupled with" are also used euphemistically to mean "communicatively coupled with" over a network, where two or more devices are able to exchange data with each other over the network, possibly via one or more intermediary devices.

[0320] It should be apparent to those skilled in the art that many more modifications besides those already described are possible without departing from the inventive concepts herein. The inventive subject matter, therefore, is not to be restricted except in the spirit of the appended claims. Moreover, in interpreting both the specification and the claims, all terms should be interpreted in the broadest possible manner consistent with the context. In particular, the terms "comprises" and "comprising" should be interpreted as referring to elements, components, or steps in a non-exclusive manner, indicating that the referenced elements, components, or steps may be present, or utilized, or combined with other elements, components, or steps that are not expressly referenced. Where the specification or claims refer to at least one of something selected from the group consisting of A, B, C .... and N, the text should be interpreted as requiring only one element from the group, not A plus N, or B plus N, etc.

[0321] While the foregoing describes various embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. The scope of the invention is determined by the claims that follow. The invention is not limited to the described embodiments, versions or examples, which are included to enable a person having ordinary skill in the art to make and use the invention when combined with information and knowledge available to the person having ordinary skill in the art.

[0322] I/We claim: