

Title:
COMMUNICATION DEVICE
Document Type and Number:
WIPO Patent Application WO/2018/087567
Kind Code:
A1
Abstract:
A first communication device (5) comprises: a peer-to-peer networking interface (7) arranged to establish a connection between the first communication device (5) and a second communication device via a peer-to-peer network; an audio input device (11) for receiving audio from a first user; and a voice recognition module (13) arranged to initiate establishing the connection between the first communication device (5) and the second communication device based on an audio voice command received from the first user via the audio input device (11). The first communication device (5) further comprises: a communication module (15) arranged to: transmit, via the peer-to-peer networking interface (7) to the second communication device, audio data based on the audio received from the first user via the audio input device (11); and arranged to receive audio data from the second communication device, via the peer-to-peer networking interface (7); and an audio output device (17) for outputting audio based on the received audio data.

Inventors:
GREENBERG DAVID (GB)
TAYLOR CLIVE (GB)
Application Number:
PCT/GB2017/053404
Publication Date:
May 17, 2018
Filing Date:
November 10, 2017
Assignee:
EARTEX LTD (GB)
International Classes:
H04R5/033; H04M1/724
Foreign References:
US20050239487A1 (2005-10-27)
EP1232493A1 (2002-08-21)
US20070225049A1 (2007-09-27)
US20060178159A1 (2006-08-10)
US20060206310A1 (2006-09-14)
Other References:
None
Attorney, Agent or Firm:
REES, Alexander et al. (GB)
Claims:
CLAIMS

1. A first communication device comprising:

a peer-to-peer networking interface arranged to establish a connection between the first communication device and a second communication device via a peer-to-peer network;

an audio input device for receiving audio from a first user;

a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device;

a communication module arranged to: transmit, via the peer-to-peer networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the peer-to-peer networking interface; and

an audio output device for outputting audio based on the received audio data.

2. The first communication device according to claim 1 further comprising:

a contact sensor and a control module;

wherein the communication module is configured to be able to operate in a plurality of different modes; and

the control module is arranged to detect whether one of a plurality of pre-defined user-interactions with the contact sensor has occurred; wherein each one of the pre-defined user-interactions is associated with a different mode of the communication module; and

the control module is arranged to activate the mode associated with the detected user-interaction.

3. The first communication device according to claim 2 wherein a first one of the pre-defined user-interactions comprises activating the contact sensor for a first predetermined time period.

4. The first communication device according to claim 3 wherein a second one of the pre-defined user-interactions comprises activating the contact sensor for a second predetermined time period that is longer than the first predetermined time period.

5. The first communication device according to any one of claims 2 to 4 wherein a third one of the pre-defined user-interactions comprises activating the contact sensor multiple times within a third predetermined time period.

6. The first communication device according to claim 5 wherein the third pre-defined user-interaction comprises activating the contact sensor twice within the third predetermined time period.

7. The first communication device according to any one of claims 2 to 6 wherein the communication module is configured to be able to operate in:

a connection-disabled mode in which the communication module is not permitted to transmit or receive audio data to or from the second communication device.

8. The first communication device according to claim 7 wherein in the connection-disabled mode: the peer-to-peer networking interface is not permitted to establish a connection between the first communication device and the second communication device via the peer-to-peer network.

9. The first communication device according to claim 7 or claim 8 wherein in the connection-disabled mode the voice recognition module is deactivated.

10. The first communication device according to any one of claims 2 to 9 wherein the communication module is configured to be able to operate in:

a connection-enabled mode in which the communication module is permitted to transmit or receive audio data to or from the second communication device.

11. The first communication device according to claim 10 wherein in the connection-enabled mode the voice recognition module is only activated in response to a user interaction with the contact sensor.

12. The first communication device according to claim 10 or claim 11 wherein in the connection-enabled mode the voice recognition module is arranged to perform at least one action in response to at least one voice command of a first instruction set.

13. The first communication device according to claim 12 wherein the voice recognition module is arranged to request a confirmation from the user before performing the at least one action in response to the at least one voice command; and

if the user provides a confirmation, the voice recognition module proceeds with performing the at least one action.

14. The first communication device according to claim 12 or claim 13 wherein the at least one action performed in response to at least one voice command of the first instruction set comprises: establishing a connection with the second communication device; and

the at least one voice command comprises:

inputting audio indicative of a label associated with the second communication device.

15. The first communication device according to claim 14 wherein the label comprises a name for a user associated with the second communication device.

16. The first communication device according to any one of claims 10 to 15 wherein in the connection-enabled mode:

the control module is arranged to activate the voice recognition module, in response to a user interaction with the contact sensor.

17. The first communication device according to any one of claims 2 to 16 wherein the communication module has:

a voice-control mode in which the communication module is permitted to transmit or receive audio data to or from the second communication device;

wherein the voice recognition module is activated when the communication module is in the voice-control mode.

18. The first communication device according to claim 17 wherein in the voice-control mode the voice recognition module is arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set.

19. The first communication device according to claim 12 and claim 18 wherein the second instruction set comprises a greater number of voice commands than the first instruction set.

20. The first communication device according to claim 18 or claim 19 wherein the voice recognition module is arranged to request a confirmation from the user before performing the at least one action in response to the at least one voice command; and

if the user provides a confirmation, the voice recognition module proceeds with performing the at least one action.

21. The first communication device according to any one of claims 2 to 20 further comprising: a sensor arranged to determine that the communication device has not been mounted on an ear of a user; and, in response, ignoring user interactions with the contact sensor.

22. The first communication device according to any one of the preceding claims wherein there is only one contact sensor for activating the modes at the communication module.

23. The first communication device according to any one of the preceding claims comprising only one contact sensor.

24. The first communication device according to any one of the preceding claims wherein the audio input device comprises an in-ear microphone.

25. The first communication device according to claim 24 wherein the voice recognition module is arranged to take into account frequency modification by the occlusion effect of bone material.

26. The first communication device according to any one of the preceding claims wherein the communication device has a beacon mode in which the first communication device alternates between an active state and a dormant state, where a greater amount of the functionality of the first communication device is activated in the active state than in the dormant state.

27. The first communication device according to any one of the preceding claims wherein the second communication device has a plurality of modes; and the communication module of the first communication device has an override mode in which the first communication device is able to transmit audio data for output of audio based on the audio data at the second communication device irrespective of the mode activated at the second communication device.

28. A communication system comprising:

a plurality of the communication devices of any one of the preceding claims connected to one another via a peer-to-peer network.

29. The communication system according to claim 28 further comprising one or more fixed nodes connected to the peer-to-peer network.

30. The communication system according to claim 28 or claim 29 wherein the plurality of communication devices are connected together via a plurality of peer-to-peer networks, wherein the plurality of peer-to-peer networks are connected together via a network infrastructure.

31. A method comprising:

receiving audio from a first user via an audio input device at a first communication device; initiating establishing a connection between the first communication device and a second communication device via a peer-to-peer network, based on an audio voice command received from the first user via the audio input device;

transmitting, via the peer-to-peer network to the second communication device, audio data based on audio received from the first user via the audio input device;

receiving at the first communication device audio data from the second communication device via the peer-to-peer network; and

outputting audio based on the received audio data via an audio output device at the first communication device.

32. The method according to claim 31 further comprising:

detecting whether one of a plurality of pre-defined user-interactions with a contact sensor has occurred; wherein each one of the pre-defined user-interactions is associated with a different mode; and activating the mode associated with the detected user-interaction.

33. The method according to claim 32 further comprising:

not permitting transmitting or receiving audio data to or from the second communication device when the first communication device is in a connection-disabled mode.

34. The method according to claim 33 further comprising:

not permitting establishing a connection with the second communication device via the peer-to-peer network when the first communication device is in a connection-disabled mode.

35. The method according to claim 33 or claim 34 further comprising:

deactivating voice activation when the first communication device is in the connection-disabled mode.

36. The method according to any one of claims 32 to 35 further comprising:

permitting transmitting or receiving audio data to or from the second communication device when the first communication device is in a connection-enabled mode.

37. The method according to claim 36 further comprising:

activating voice recognition only in response to a user interaction with the contact sensor when in the connection-enabled mode.

38. The method according to claim 36 or claim 37 further comprising:

performing at least one action in response to at least one voice command of a first instruction set when in the connection-enabled mode.

39. The method according to claim 38 further comprising:

requesting confirmation from the user before performing the at least one action in response to the at least one voice command; and

performing the at least one action, if the user provides a confirmation.

40. The method according to any one of claims 36 to 39 further comprising:

activating the voice recognition, in response to a user interaction with the contact sensor when in the connection-enabled mode.

41. The method according to any one of claims 32 to 40 further comprising:

permitting transmitting or receiving audio data to or from the second communication device, when the first communication device is in a voice-control mode; and

activating voice recognition when in the voice-control mode.

42. The method according to any one of claims 32 to 41 further comprising:

performing a plurality of actions each in response to at least one voice command of a second instruction set.

43. The method according to claim 38 and claim 42 wherein the second instruction set comprises a greater number of voice commands than the first instruction set.

44. The method according to claim 42 or claim 43 further comprising:

requesting a confirmation from the user before performing the at least one action in response to the at least one voice command; and

if the user provides a confirmation, performing the at least one action.

45. The method according to any one of claims 31 to 44 further comprising:

alternating between an active state and a dormant state, where a greater amount of the functionality of the first communication device is activated in the active state than in the dormant state.

46. The method according to any one of claims 31 to 44 wherein the second communication device is able to operate in a plurality of modes; and

the method further comprises: transmitting audio data from the first communication device, in an override mode, for output at the second communication device irrespective of the mode activated at the second communication device.

47. The method according to any one of claims 31 to 46 wherein:

the establishing a connection comprises establishing a connection between the first communication device and a second communication device via two peer-to-peer networks interconnected via a network infrastructure;

the transmitting comprises transmitting, via the two peer-to-peer networks and the network infrastructure to the second communication device, audio data based on audio received from the first user via the audio input device; and

the receiving comprises receiving at the first communication device audio data from the second communication device via the two peer-to-peer networks and the network infrastructure.

48. A computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method according to any of claims 31 to 47.

Description:
COMMUNICATION DEVICE

TECHNICAL FIELD

This disclosure relates to a communication device, a communication system comprising a plurality of the communication devices and a method of operating a communication device.

BACKGROUND

Communication devices are often bulky, complex and expensive. This can be a problem, in particular, in a workplace environment where there are a number of employees each requiring their own communication device in order to communicate with one another.

A bulky communication device may hinder an employee's ability to go about their work, whilst a complex communication device may be difficult for an employee to use. In addition, if each individual communication device is expensive, then it will become very costly for an employer to equip their entire workforce with communication devices.

Thus, there exists a need to provide a compact, simple and inexpensive communication device.

SUMMARY

This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.

According to one aspect of the invention there is provided a first communication device comprising: a peer-to-peer networking interface arranged to establish a connection between the first communication device and a second communication device via a peer-to-peer network; an audio input device for receiving audio from a first user; a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device; a communication module arranged to: transmit, via the peer-to-peer networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the peer-to-peer networking interface; and an audio output device for outputting audio based on the received audio data.

According to another aspect of the invention there is provided a communication system comprising a plurality of the communication devices described herein connected to one another via a peer-to-peer network.

According to another aspect of the invention there is provided a method comprising: receiving audio from a first user via an audio input device at a first communication device; initiating establishing a connection between the first communication device and a second communication device via a peer-to-peer network, based on an audio voice command received from the first user via the audio input device; transmitting, via the peer-to-peer network to the second communication device, audio data based on audio received from the first user via the audio input device; receiving at the first communication device audio data from the second communication device via the peer-to-peer network; and outputting audio based on the received audio data via an audio output device at the first communication device.

According to another aspect of the invention there is provided a computer program comprising code portions which when loaded and run on a computer cause the computer to execute a method as described herein.

According to one example there is provided a first communication device comprising: a networking interface arranged to establish a connection between the first communication device and a second communication device via a network; an audio input device for receiving audio from a first user; a voice recognition module arranged to initiate establishing the connection between the first communication device and the second communication device based on an audio voice command received from the first user via the audio input device; a communication module arranged to: transmit, via the networking interface to the second communication device, audio data based on audio received from the first user via the audio input device; and arranged to receive audio data from the second communication device, via the networking interface; and an audio output device for outputting audio based on the received audio data.

According to another example there is provided a communication system comprising a plurality of the communication devices described herein connected to one another via a network.

According to another example there is provided a method comprising: receiving audio from a first user via an audio input device at a first communication device; initiating establishing a connection between the first communication device and a second communication device via a network, based on an audio voice command received from the first user via the audio input device; transmitting, via the network to the second communication device, audio data based on audio received from the first user via the audio input device; receiving at the first communication device audio data from the second communication device via the network; and outputting audio based on the received audio data via an audio output device at the first communication device.

BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:

Figure 1 schematically shows the basic general architecture of a communication system;

Figure 2 shows the basic general architecture of a communication device;

Figure 3 shows a flow chart illustrating a method of activating different modes at the communication device;

Figure 4 shows a flow chart illustrating a method of using the communication device in a 'connection-enabled' mode; and

Figure 5 shows a flow chart illustrating a method of using the communication device in a 'voice- recognition' mode.

DETAILED DESCRIPTION

Referring to Figure 1, there is a communication system 1 comprising a plurality of communication headsets 3, where each headset 3 is worn by a different user 4. Each headset 3 comprises a pair of ear-defenders 3', which can be used to protect the user's ears from noise.

Each headset 3 can be connected to another headset 3 using a peer-to-peer networking interface at each headset 3. The headsets 3 may be connected to one another directly or indirectly via another headset 3 (or headsets 3). In this way, the headsets 3 are connected to one another to form a peer-to-peer network, so that the users 4 can communicate with one another. A MESH network is one type of peer-to-peer network that may be used to connect the plurality of headsets 3 to one another.

A wireless MESH network (IEEE 802.15.4) is an ad-hoc network formed by devices which are in range of one another. It is a peer-to-peer cooperative communication infrastructure in which wireless access points (APs) and nearby devices act as repeaters that transmit data from node to node. In some cases, many of the APs are not physically connected to a wired network. The APs and other devices create a mesh with each other that can route data back to a wired network via a gateway.

A wireless mesh network becomes more efficient with each additional network connection. Wireless mesh networks feature a "multi-hop" topology in which data packets "hop" short distances from one node to another until they reach their final destination. The greater the number of available nodes, the greater the distance the data packet may be required to travel. Increasing capacity or extending the coverage area can be achieved by adding more nodes, which can be fixed or mobile.

The communication system 1 comprises a peer-to-peer network of headsets 3 which enables communication over the network using short range low power wireless links. This can require considerably less computing and signal transmission power than in other communication devices. In addition, this can allow the headsets 3 to consume less power and to have a simpler and smaller design. The peer-to-peer network may comprise the headsets 3 only. However, in another example, the peer-to-peer network may comprise the headsets 3 as well as other devices such as the APs described above.

In the workplace environment, an employee equipped with one of the headsets 3 described herein can be reachable at all times. This may avoid the need for a general announcement system such as a public address (PA) system, which uses one loudspeaker to communicate information to many employees. A PA system may not be appropriate in some situations. Furthermore, the headset 3 can avoid the need for an employee to carry around a conventional mobile telephone.

Preferably, the headset 3 is head-mounted or ear-mounted and is sound reducing, for example, comprising ear defenders 3' for reducing noise level exposure. Thus, users' exposure to noise may be reduced.

Referring to Figure 2, the headset 3 comprises a communication device 5. In this example, the communication device 5 is integrated into one of the ear-defenders 3' of the headset 3. However, the communication device 5 may be integrated into each one of the ear-defenders 3' of the headset 3.

The communication device 5 comprises a peer-to-peer networking interface 7 and an antenna 9. The peer-to-peer networking interface 7 is arranged to establish a connection between the communication device 5 and another similar communication device via a peer-to-peer network, in which the other similar communication device also includes a peer-to-peer networking interface.

The communication device 5 comprises an audio input device 11 which is arranged to receive audio from a user wearing the headset 3. Thus, the communication device 5 is able to receive voice input from the user 4.

In the example illustrated in Figures 1 and 2, the audio input device 11 is arranged on an arm external to the ear defender 3', so that the audio input device 11 can be arranged proximate to the user's mouth. However, in another example the audio input device 11 comprises an in-ear microphone which receives amplitude-modified user speech signals conducted into the ear canal via bone material, which is referred to as the occlusion effect. In this case, it is the user speech signals received through this occlusion effect which are used for user voice recognition. Here, the frequency spectrum of speech is modified by the occlusion effect, causing an elevation of the lower tones. This technique enables ease of user transferability, unlike conventional voice recognition systems which require stored voice samples.

The communication device 5 comprises a voice recognition module 13 which is arranged to receive voice inputs from a user 4 via the audio input device 11. The voice recognition module 13 is arranged to store a number of pre-defined voice commands each associated with an action. The voice recognition module 13 is arranged to detect a match between voice input and one of the pre-defined voice commands, and is arranged to perform the action associated with the matching voice command.

The communication device 5 comprises a communication module 15 which is arranged to transmit, via the peer-to-peer networking interface 7 to another communication device, audio received from the user via the audio input device 11. In addition, the communication module 15 is arranged to receive audio data from other communication devices, via the peer-to-peer networking interface 7. Typically, the communication devices 5 can conduct two-way communication between one another. However, the communication device 5 may engage in one-way communication with one or many other communication devices.

In addition, the voice recognition module 13 is arranged to control the peer-to-peer networking interface 7 and the communication module 15. For instance, the voice recognition module 13 is arranged to cause the networking interface 7 to initiate establishing the connection between the communication device 5 and another communication device based on audio received from the user via the audio input device 11. The voice recognition module 13 may be arranged to cause the communication module 15 to communicate with another communication device.

The communication device 5 comprises an audio output device 17, such as a speaker, which is arranged to output audio received from the communication module 15 via the peer-to-peer networking interface 7. Thus, the communication device 5 is able to output audio received from a user of another communication device or devices.

The audio input device 11, the communication module 15, the peer-to-peer networking interface 7 and the audio output device 17 facilitate two-way communication between users using the communication devices 5.

The communication device 5 further comprises a user-interface switch 19 and a control module 21. In this example, the user-interface switch 19 is a pressure sensitive switch 19. However, any other suitable type of switch, control or contact sensor may be used.

The user-interface switch 19 and the control module 21 are arranged to activate different modes at the communication module 15. In this example, there is only one user-interface switch 19 for activating different modes at the communication module 15. Furthermore, in this example the communication device 5 comprises only one user-interface switch 19.

The control module 21 is arranged to store a number of pre-defined user-interactions with the switch 19. In addition, each pre-defined user-interaction is associated with a different action to be performed at the control module 21. The control module 21 is arranged to detect a user interaction with the switch 19 and a match between the detected user interaction and one of the pre-defined user-interactions. Then, the control module 21 is arranged to perform the action associated with the matching detected user interaction.

The communication module 15 is configured to be able to operate in a plurality of different modes, and the control module 21 is arranged to detect whether one of a plurality of pre-defined user-interactions with the switch has occurred. The control module 21 is arranged to activate the mode associated with the detected user-interaction.

The communication device 5 further comprises an environment audio input device 23, such as an external microphone. The external microphone can be used to detect environmental noise and provide noise cancelling via the audio output device 17. The communication device 5 may provide noise cancelling during communication between devices 5. The communication device 5 may choose not to provide noise cancelling when there is no communication between devices 5.

The communication device 5 further comprises a storage module 24, which is arranged to store an identification parameter for the device. The identification parameter is indicative of a unique identifier for the device 5. The unique identifier may be a number for the device, a title for the user of the device and/or the user's name. This unique identifier may be used so that other communication devices 5 can establish a connection with the communication device 5.

In addition, the storage module 24 may store a database comprising a list of unique identifiers for other communication devices 5 in the peer-to-peer network, where each unique identifier corresponds with a speech label stored at the storage module 24. Each speech label may be indicative of a name, or a label, for the user of the communication device 5 to which the speech label's associated unique identifier corresponds.

Each individual user can be stored in association with a number. For instance, the lowest number, such as 'one', may refer to the most senior person.
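By way of illustration only, the contact database held by the storage module 24 could be organised along the following lines. The class and field names, and the example identifiers, are assumptions for the sketch rather than details taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ContactEntry:
    unique_id: str     # unique identifier stored for a remote communication device 5
    speech_label: str  # spoken label, e.g. a name or a title such as "supervisor"
    seniority: int     # optional ordering; the lowest number may refer to the most senior person

class StorageModule:
    def __init__(self, own_id: str):
        self.own_id = own_id  # identification parameter for this device
        self.contacts: list[ContactEntry] = []

    def add_contact(self, entry: ContactEntry) -> None:
        self.contacts.append(entry)

    def lookup_by_label(self, spoken_label: str) -> str | None:
        """Return the unique identifier whose speech label matches the spoken label."""
        for entry in self.contacts:
            if entry.speech_label.lower() == spoken_label.lower():
                return entry.unique_id
        return None

# Example: resolve the spoken label "supervisor" to a device identifier.
store = StorageModule(own_id="device-007")
store.add_contact(ContactEntry("device-001", "supervisor", seniority=1))
print(store.lookup_by_label("supervisor"))  # -> "device-001"
```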

Figure 3 shows a flow chart illustrating a method of activating different modes at the communication device 5.

In STEP 300 the communication device 5 is activated, or 'powered-on'. In this case, the communication device 5, more specifically the communication module 15, is configured to operate in a "connection-enabled" mode initially. In the "connection-enabled" mode, the communication module 15 of the communication device 5 is configured to permit transmitting or receiving of audio to or from another communication device.

The voice recognition module 13 may be deactivated, when the communication module 15 is in the connection-enabled mode initially, and the voice recognition module 13 may be configured to be activated only in response to a user interaction with the switch 19. When activated, the voice recognition module 13 is arranged to perform at least one action in response to at least one voice command of a first instruction set stored at the voice recognition module 13.

In STEP 303 the control module 21 detects a user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to enter a "connection-disabled" mode. In order to do this, the user maintains contact with the switch 19, or 'holds' the switch down, for a first time period. In this example, the user holds the switch for over five seconds until the audio output device 17 outputs an audio notification, such as a single 'beep'. Upon hearing the 'beep', the user disengages contact with the switch 19, or 'releases' the switch. The control module 21 detects this interaction with the switch 19 and instructs the communication module 15 to enter the "connection-disabled" mode.

In STEP 305, the communication module 15 enters the connection-disabled mode. In the connection-disabled mode the communication module 15 is not permitted to transmit or receive audio to or from another communication device. In the connection-disabled mode, the peer-to-peer networking interface may not be permitted to establish a connection between the communication device 5 and another communication device via the peer-to-peer network. In addition, in the connection-disabled mode the voice recognition module 13 may be deactivated.

In STEP 307 the control module 21 detects another user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to re-enter the "connection-enabled" mode. In order to do this, the user performs a different user-interaction with the switch 19 compared with the user-interaction in STEP 303. Here, the user maintains contact with the switch 19 for a second time period, for instance two seconds longer than the first period of time.

In this example, the user holds the switch until the audio output device 17 outputs an audio notification, such as two 'beeps'. Upon hearing the second 'beep', the user knows that they have reached the second time period threshold and can disengage contact with the switch 19, or 'release' the switch. The control module 21 detects this interaction with the switch and instructs the communication module to re-enter the "connection-enabled" mode. Thus, the method returns to STEP 300.

In this example, the user holds the switch for the first time period until the first single beep described in STEP 303 is output. Then, the user continues to hold the switch until the second time period has elapsed, at which point the audio output device 17 outputs a second beep. Here, the second time period is seven seconds, which is two seconds longer than the first period. However, the second time period may be any length of time so long as the user is given sufficient time to respond to the first beep before the second beep occurs.

In STEP 309 the control module 21 detects another user-interaction with the switch 19. In this case, the user wishes to instruct the communication device 5 to enter a "voice-control" mode. In order to do this, the user performs a different user-interaction with the switch 19 compared with the user-interactions in STEPs 303 and 307. Here, the user contacts the switch 19 multiple times within a time period. For instance, the user may activate the switch 19 twice within a time period of under five seconds. The control module 21 detects this interaction with the switch 19 and instructs the communication module 15 to enter the "voice control" mode.

In STEP 311, the communication module 15 enters the voice control mode, in which the communication module 15 is permitted to transmit or receive audio to or from another communication device. In addition, the voice recognition module 13 is activated when the communication module 15 is in the voice-control mode.

In the voice-control mode the voice recognition module 13 may be arranged to perform a plurality of actions each in response to at least one voice command of a second instruction set. The second instruction set of the voice control mode may comprise a greater number of voice commands than the first instruction set of the connection-enabled mode.

In STEP 313, as in STEP 303, the control module 21 detects a user-interaction with the switch 19 where the user maintains contact with the switch 19 for over five seconds until the audio output device 17 outputs a 'beep', at which point the user disengages contact with the switch 19. As before, the control module 21 detects this interaction with the switch and instructs the communication module to re-enter the "connection-disabled" mode. Thus, the method returns to STEP 305.

In STEP 315, as in STEP 307, the control module 21 detects a user-interaction with the switch 19 where the user maintains contact with the switch 19 for the second time period until the audio output device 17 outputs two 'beeps', at which point the user disengages contact with the switch 19. The control module 21 detects this interaction with the switch and instructs the communication module to re-enter the "connection-enabled" mode. Thus, the method returns to STEP 300.
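As a minimal sketch of the mode selection just described (and not part of the original disclosure), the control module's classification of switch interactions might look like the following. The enum and function names are assumptions; the five-second hold, seven-second hold and double-activation window come from the example above.

```python
from enum import Enum, auto

class Mode(Enum):
    CONNECTION_ENABLED = auto()
    CONNECTION_DISABLED = auto()
    VOICE_CONTROL = auto()

FIRST_HOLD_S = 5.0          # hold until a single beep -> connection-disabled
SECOND_HOLD_S = 7.0         # keep holding until a second beep -> connection-enabled
DOUBLE_TAP_WINDOW_S = 5.0   # two activations within this window -> voice-control

def classify_interaction(press_times: list[float], release_times: list[float]) -> Mode | None:
    """Map a recorded switch interaction onto one of the pre-defined modes."""
    if len(press_times) >= 2 and (press_times[1] - press_times[0]) <= DOUBLE_TAP_WINDOW_S:
        return Mode.VOICE_CONTROL
    if len(press_times) == 1:
        hold = release_times[0] - press_times[0]
        if hold >= SECOND_HOLD_S:
            return Mode.CONNECTION_ENABLED
        if hold >= FIRST_HOLD_S:
            return Mode.CONNECTION_DISABLED
    return None  # no pre-defined user-interaction matched

# Example: a single hold of 5.5 seconds selects the connection-disabled mode.
print(classify_interaction([0.0], [5.5]))  # -> Mode.CONNECTION_DISABLED
```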

Figure 4 shows a flow chart illustrating a method of using the communication device 5 in the 'connection-enabled' mode.

As mentioned previously, in the connection enabled mode the communication module 15 of the communication device 5 is permitted to transmit or receive audio to or from another communication device. Thus, in STEP 400 the peer-to-peer networking interface 7 is in a waiting state where it checks to determine whether or not there is an incoming call from another communication device, or in other words a request for a connection to be made between the communication device 5 and another communication device. In addition, in the waiting state the control module 21 checks to determine whether or not there is a user-interaction with the switch 19 whilst there is not an incoming call. If there is a user-interaction with the switch 19 whilst there is not an incoming call, the method proceeds to STEP 402.

In STEP 402, the control module 21 detects an interaction with the switch 19. In this case, the user wishes to provide a command to the voice recognition module 13. In order to do this, before speaking the voice command, the user maintains contact with the switch 19 for a time period, for instance less than five seconds. The control module 21 detects this interaction with the switch and, in response, activates the voice recognition module 13.

In STEP 404 the voice recognition module 13 detects a voice command provided by the user. The voice recognition module identifies voice commands by detecting reserved words. The voice commands are verified by a pause preceding and following the command. For instance, the pause preceding and following the command may be a few seconds.

For instance, the user may say "CALL SUPERVISOR". Next, in STEP 406 the voice recognition module 13 determines the action associated with the voice command. Then, the voice recognition module 13 outputs a confirmation request, via the audio output device 17. In this example, the confirmation request comprises outputting audio indicative of the determined action. For instance, the output may comprise repeating the voice command "CALL SUPERVISOR".

In this example, the "SUPERVISOR" voice command may be described as a label associated with another communication device. In another example, the label may comprise a name for a user associated with the other communication device.

As described above, each user's contact name, title or number is associated with his/her communication device 5. When a user initiates a call, a message is broadcast to the peer-to-peer network for identifying the requested communication device 5. The requested communication device 5 responds and a connection is established between the calling and the receiving communication device 5.

In response to the confirmation request, the voice recognition module 13 waits for the user to provide a confirmation. The user may provide the confirmation by saying an affirmative voice command, for instance by saying "yes". In this case, the method proceeds to STEP 408.

On the other hand the user may decline the confirmation by saying a negative voice command, for instance by saying "no". In this case, the method returns to STEP 400.

In STEP 406, if the voice recognition module 13 fails to recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the method returns to STEP 400. If the repeat command is successful the method proceeds to STEP 408.

In STEP 408 the voice recognition module 13 causes the action associated with the voice command, input at STEP 404, to be performed. In this case, the voice recognition module 13 causes the peer-to-peer networking interface 7 to initiate the process of establishing a connection with a communication device associated with the supervisor.

In STEP 400 the peer-to-peer networking interface 7 checks to determine whether or not there is an incoming call from another communication device. If there is an incoming call the method proceeds to STEP 410 in which a notification is output, preferably at the audio output device 17, indicating to the user that there is an incoming call.

In STEP 412 the control module 21 checks to determine whether or not the user engages the switch 19. If the user engages the switch, for less than one second, in response to the incoming call the method proceeds to STEP 414, in which a connection is established between the communication device 5 and another communication device in the peer-to-peer network.

In STEP 416, the control module 21 determines that the user has engaged the switch 19 for less than five seconds, indicating that the user wishes to terminate the call. In STEP 418, in response to this user interaction, the control module 21 instructs the peer-to-peer networking interface 7 to disconnect the communication device 5 from the other communication device.

In STEP 420, the control module 21 checks to determine whether the switch 19 has been engaged within ten seconds of outputting the incoming call notification. If the user has not provided an interaction with the switch within this ten second time period, the method proceeds to STEP 424 in which the incoming call request is cancelled.

However, in STEP 422, if the control module 21 determines that the switch 19 has been engaged for a time period in excess of five seconds during the ten second time period, then the incoming call request is cancelled also.
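A compact sketch of the incoming-call handling in Figure 4 follows; it is an editorial illustration, not the patent's implementation. The function name and returned action strings are assumptions, while the one-second, five-second and ten-second figures come from the steps above. The text does not state what happens for an intermediate press while the call is still ringing, so that case is simply left as "keep ringing" here.

```python
def handle_incoming_call(press_duration_s: float | None, elapsed_since_notification_s: float) -> str:
    """Decide what to do with an incoming call based on the user's switch interaction."""
    if elapsed_since_notification_s > 10.0:
        return "cancel call request"       # STEPs 420/424: no interaction within ten seconds
    if press_duration_s is None:
        return "keep ringing"              # still inside the ten-second window
    if press_duration_s > 5.0:
        return "cancel call request"       # STEP 422: a long press cancels the request
    if press_duration_s < 1.0:
        return "establish connection"      # STEPs 412/414: a short press answers the call
    return "keep ringing"                  # behaviour not specified in the text (assumption)

print(handle_incoming_call(0.4, elapsed_since_notification_s=3.0))    # -> establish connection
print(handle_incoming_call(None, elapsed_since_notification_s=12.0))  # -> cancel call request
```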

Figure 5 shows a flow chart illustrating a method of using the communication device 5 in the 'voice-recognition' mode. The purpose of the voice-recognition mode is that the user can perform all required functions using voice commands rather than interacting with the switch 19. Thus, the voice recognition module 13 remains active whilst in the voice recognition mode.

In STEP 500 the user provides a voice command. Next, in STEP 502 the voice recognition module 13 detects that the user has provided the voice command and determines an action associated with the voice command.

In STEP 504, the voice recognition module 13 outputs a confirmation request, via the audio output device 17. The confirmation request comprises outputting audio indicative of the determined action.

In response to the confirmation request, the voice recognition module 13 waits for the user to provide a confirmation. The user may accept the confirmation by saying an affirmative voice command, for instance by saying "YES". In this case the method proceeds to STEP 505 in which the action associated with the voice command is performed.

On the other hand the user may decline the confirmation request by saying a negative voice command, for instance by saying "NO". In this case, the method returns to STEP 500.

In STEP 502, if the voice recognition module 13 fails to recognise the voice command, for instance if the voice recognition module 13 cannot recognise the name of the person to be called, it prompts an appropriate audible notification for a repeat command. If the repeat command is unsuccessful the voice recognition module simply waits for another voice command at STEP 500.

In this example the following voice commands are available in the voice recognition mode.

If the voice recognition module 13 detects that the user has said "HANG-UP", whilst a call is in session between the communication device 5 and another communication device, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to disconnect the communication device 5 from the other connected communication device.

If the voice recognition module 13 detects that a user has said "PICK-UP", in response to an incoming call request, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to connect the communication device 5 with the other connected communication device requesting the call.

If the voice recognition module 13 detects that a user has said "DECLINE", in response to an incoming call request, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to refuse a request to connect the communication device 5 with the other connected communication device requesting the call.

If the voice recognition module 13 detects that a user has said "CALL" followed by the name of a contact, the voice recognition module 13 instructs the peer-to-peer networking interface 7 to initiate a request to connect the communication device 5 with another connected communication device associated with the contact.

If the voice recognition module 13 detects that a user has said "EXIT", then the voice recognition module 13 instructs the communication module 15 to enter the connection-enabled mode.
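The command handling above amounts to a small dispatch table. The sketch below is an illustration only: the command words come from the text, while the function name and returned action descriptions are assumptions.

```python
def dispatch_voice_command(command: str) -> str:
    """Map a recognised voice command onto the action described in the text."""
    words = command.strip().upper().split()
    if not words:
        return "no command"
    verb = words[0]
    if verb == "HANG-UP":
        return "disconnect from the connected communication device"
    if verb == "PICK-UP":
        return "accept the incoming connection request"
    if verb == "DECLINE":
        return "refuse the incoming connection request"
    if verb == "CALL" and len(words) > 1:
        return f"request a connection to the device labelled '{' '.join(words[1:])}'"
    if verb == "EXIT":
        return "return to the connection-enabled mode"
    return "prompt the user to repeat the command"

# Example usage:
print(dispatch_voice_command("call supervisor"))
```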

In some examples, the communication system 1 may comprise one or more fixed network nodes in addition to the plurality of communication headsets 3. The fixed nodes may be connected to and cooperate with the mobile nodes formed by the plurality of communication headsets 3 to form the peer-to-peer network, such as a MESH network.

In the system 1 , user settings for each communication device 5 can be controlled by a device connected to the peer-to-peer network such as a MESH network, such as a computer or a MESH network enabled smartphone. In some examples this device connected to the peer-to-peer network may be connected to the peer-to-peer network through a fixed node. One of the user setting options could include a sound pressure threshold above which the user's speech is detected and processed into instructions for execution by the voice recognition module. Otherwise, settings would normally reflect user preferences for an optimum listening experience.

Cloud computing applications, such as private clouds for company infrastructure services, may be accessed by communication devices 5 via a gateway. This can include communication links to other sites for secure inter-site calls, including conference calls. In some examples the gateway may be connected to the peer-to-peer network through a fixed node.

The wireless network, such as a wireless mesh network (WMN), connects via a gateway to a secure central database containing employees' routing requirements for setting up wireless communication links. In some examples the gateway may be connected to the peer-to-peer network through a fixed node.

When fitting the headset 3 to the ear of a user, the pressure sensitive switch 19 may be engaged accidentally. To combat this issue, the communication device 5 may comprise a sensor, such as an acoustic in-ear sensor, arranged to determine that the communication device has not been mounted on an ear of a user and, in response, to ignore any user interactions with the switch 19. When the headset 3 is correctly mounted, the occlusion effect attenuates the external sound entering the ear canal, thereby creating a difference in sound levels measured by the acoustic in-ear sensor and external acoustic sensors.

The acoustic in-ear sensor may determine that the communication device 5 has not been mounted if the audio it receives exceeds a particular attenuated level of the amplitude measured by the externally mounted sensor 23, and may determine that the communication device 5 has been mounted if the amplitude of the received audio falls below that level.
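For illustration, the mounting check could compare the two sensor levels as sketched below. The 0.5 attenuation ratio is an assumed threshold, since the disclosure does not give a numeric value, and the function name is likewise an assumption.

```python
ATTENUATION_RATIO = 0.5  # assumed threshold: in-ear level must be well below the external level

def is_mounted(in_ear_amplitude: float, external_amplitude: float) -> bool:
    """Return True if the in-ear level is sufficiently attenuated relative to the external level."""
    if external_amplitude <= 0.0:
        return True  # nothing to compare against; behaviour not specified in the text (assumption)
    return in_ear_amplitude < ATTENUATION_RATIO * external_amplitude

# While the headset is judged not mounted, interactions with the switch 19 would be ignored.
print(is_mounted(in_ear_amplitude=0.9, external_amplitude=1.0))  # -> False (ignore the switch)
print(is_mounted(in_ear_amplitude=0.2, external_amplitude=1.0))  # -> True
```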

In the connection-enabled mode, and in the absence of streamed wireless audio of any kind for a certain time period, the communication device 5 may power down into a 'beacon mode'. In the beacon mode, the peer-to-peer networking interface 7 periodically checks for messages/activations and sends out a unique identifier which can be used to determine the device's location before returning to a sleep state. In the beacon mode, the communication device alternates between an active state and a dormant state, where a greater amount of the functionality of the first communication device is activated in the active state than in the dormant state.
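A minimal sketch of the beacon-mode duty cycle is given below, assuming illustrative wake and sleep intervals and a placeholder identifier; none of these values or names are specified in the disclosure.

```python
import time

BEACON_ID = "device-007"  # unique identifier sent out while active (assumed value)
ACTIVE_WINDOW_S = 0.5     # time spent awake checking for messages (assumption)
DORMANT_WINDOW_S = 2.0    # time spent in the low-power dormant state (assumption)

def beacon_cycle(check_for_messages, broadcast, cycles: int = 3) -> None:
    """Alternate between an active state and a dormant state."""
    for _ in range(cycles):
        broadcast(BEACON_ID)           # identifier usable to determine the device's location
        if check_for_messages():
            return                     # wake fully if there is pending activity
        time.sleep(ACTIVE_WINDOW_S)    # remain briefly active
        time.sleep(DORMANT_WINDOW_S)   # then return to the dormant state

# Example with trivial stand-ins for the radio interface:
beacon_cycle(check_for_messages=lambda: False, broadcast=print, cycles=1)
```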

The communication module 15 of the communication device 5 may be configured to operate in an override mode in which the communication device 5 is able to transmit audio for output at another communication device irrespective of the mode activated at the other communication device. This enables a supervisor/manager to have connection priority to the user's device by automatically forcing acceptance of a connection request. This option could include termination of a call by the supervisor exclusively. The override mode may allow the communication device 5 to transmit audio for output at a plurality of other communication devices irrespective of the mode activated at each respective communication device. Thus, the override mode may be used in place of a conventional public address (PA) system.

A wireless mesh network as described above can be used for wireless peer-to-peer connectivity. However, in another embodiment each communication device 5 comprises a low power sub-GHz ISM band radio that does not depend on a mesh network for wide area coverage and connects wirelessly to a remote hub without the need for hopping from node to node.

The communication system may include P2P group functions where the supervisor/manager is given the option of group ownership, which may extend to multiple concurrent P2P groups using Wi-Fi or other such technology, or a group communication system (GCS) where the network is divided into optional sub-groups.

The illustrated examples show a single communication system, for simplicity. In other examples a plurality of communication systems may be connected together or interconnected by a network infrastructure to provide communication links between different interconnected groups of devices, which groups may be remotely located. In some examples the plurality of communication systems may be connected together or interconnected by a network infrastructure such as infrastructure meshing with client meshing (P2P).

The methods described herein may be performed by software in machine readable form on a tangible storage medium e.g. in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer and where the computer program may be embodied on a computer readable medium. Examples of tangible (or non-transitory) storage media include disks, thumb drives, memory cards etc and do not include propagated signals. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously. This acknowledges that firmware and software can be valuable, separately tradable commodities. It is intended to encompass software, which runs on or controls "dumb" or standard hardware, to carry out the desired functions. It is also intended to encompass software which "describes" or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.

The term 'computer' or 'computing device' is used herein to refer to any device with processing capability such that it can execute instructions. Those skilled in the art will realise that such processing capabilities are incorporated into many different devices and therefore the term 'computer' or 'computing device' includes PCs, servers, mobile telephones, personal digital assistants and many other devices.

Those skilled in the art will realise that storage devices utilised to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realise that by utilising conventional techniques known to those skilled in the art that all, or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP, programmable logic array, or the like.

Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.

It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.

Any reference to 'an' item refers to one or more of those items. The term 'comprising' is used herein to mean including the method blocks or elements identified, but that such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.

The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought. Any of the modules described above may be implemented in hardware or software.

It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the scope of this invention.




 