
Title:
COMMUNICATION APPARATUS, METHOD AND COMPUTER PROGRAM
Document Type and Number:
WIPO Patent Application WO/2017/141057
Kind Code:
A1
Abstract:
A communication apparatus for a subject unable to speak or make a purposeful gesture comprises at least one sensor arranged, in use, to detect modulations in the subject's breathing, a signal processor arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature, and communication means arranged to perform an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature. A computer-implemented method of communicating via breathing modulations is also disclosed.

Inventors:
GAUR ATUL (GB)
KERR DAVID (GB)
BOUAZZA-MAROUF KADDOUR (GB)
LUCAS ALASTAIR (GB)
Application Number:
PCT/GB2017/050432
Publication Date:
August 24, 2017
Filing Date:
February 20, 2017
Assignee:
UNIV HOSPITALS OF LEICESTER NHS TRUST (GB)
International Classes:
G06K9/00; A61B5/08; G10L13/04
Domestic Patent References:
WO2006066337A12006-06-29
WO2003000125A12003-01-03
Other References:
VANESSA CHARLAND-VERVILLE ET AL: "Detection of response to command using voluntary control of breathing in disorders of consciousness", FRONTIERS IN HUMAN NEUROSCIENCE, vol. 8, 23 December 2014 (2014-12-23), pages 1 - 5, XP055366237, DOI: 10.3389/fnhum.2014.01020
A. PLOTKIN ET AL: "Sniffing enables communication and environmental control for the severely disabled", PROCEEDINGS NATIONAL ACADEMY OF SCIENCES PNAS, vol. 107, no. 32, 26 July 2010 (2010-07-26), US, pages 14413 - 14418, XP055366427, ISSN: 0027-8424, DOI: 10.1073/pnas.1006746107
Attorney, Agent or Firm:
CORK, Robert (GB)
Claims:

Claims

1. A communication apparatus for a subject unable to speak or make a purposeful gesture, the apparatus comprising:

at least one sensor arranged, in use, to detect modulations in the subject's breathing;

a signal processor arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and

communication means arranged to perform an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.

2. The communication apparatus of claim 1, wherein the signal processor is further arranged to only detect peaks in the wavelet representation with an amplitude higher than a threshold value.

3. The communication apparatus of claim 2, wherein the signal processor is arranged to set the threshold value as a predefined fraction of the maximum peak amplitude in the wavelet representation of the breath signal.

4. The communication apparatus of claim 1, 2 or 3, wherein the signal processor is arranged to use a weighted centroid method to detect the at least one peak.

5. The communication apparatus of any one of the preceding claims, wherein the signal processor is arranged to use a k nearest neighbour (KNN) method to determine whether the modulations in the subject's breathing match the predefined breath signature.

6. The communication apparatus of any one of the preceding claims, further comprising:

a signal pre-processor arranged to smooth the breath signal obtained from the at least one sensor and/or subtract a DC offset from said breath signal.

7. The communication apparatus of any one of the preceding claims, wherein the action performed by the communication means comprises controlling a cursor to move in a predefined direction on a display unit.

8. The communication apparatus of any one of claims 1 to 6, wherein the action performed by the communication means comprises generating speech associated with the predefined breath signature.

9. The communication apparatus of claim 8, further comprising:

one or more speakers arranged to reproduce the speech generated by the communication means.

10. The communication apparatus according to any one of the preceding claims, wherein the apparatus further comprises a face mask to which the at least one sensor is connected, which face mask is, in use, placed over the subject's nose, mouth and/or trachea.

11. The communication apparatus according to any one of the preceding claims, wherein the modulations in the subject's breathing comprise modulations in the pressure and/or timing of the breathing.

12. The communication apparatus according to claim 11, wherein the modulations in the pressure and/or timing of a subject's breathing may be in the form of amplitude, frequency and/or phase.

13. The communication apparatus according to any one of the preceding claims, wherein, in use, the apparatus is first calibrated in order to establish a benchmark or control.

14. Use of the apparatus according to any one of the preceding claims in diagnosis or therapy.

15. A computer-implemented method of communicating via breathing modulations, the method comprising:

performing a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor arranged, in use, to detect modulations in the subject's breathing;

detecting at least one peak in the wavelet representation of the breath signal;

determining whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and

performing an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.


16. A non-transitory computer-readable storage medium arranged to store computer program instructions which, when executed, perform a method according to claim 15.

Description:
Communication Apparatus, Method and Computer Program

Technical Field

The present invention relates to a communication apparatus, method and computer program, and in particular to a communication apparatus which assists patients who are unable to otherwise communicate or make purposeful gestures because they have suffered a loss of voluntary muscle function, which affects their speech-producing mechanisms and gestures, for example patients suffering from 'Locked-in Syndrome'. The invention extends to methods of communicating using the apparatus, and to uses of the apparatus for diagnosing a patient's condition.

Background

Various medical conditions may render a subject unable to communicate or make purposeful gestures due to a loss of voluntary muscle function. One example of such a condition is locked-in syndrome, a condition in which a patient is aware and awake, but cannot move or communicate verbally or by making purposeful gestures due to complete paralysis of nearly all voluntary muscles in the body, except for the eyes. Total locked-in syndrome is a condition in which the eyes may also be paralysed. Locked-in syndrome is also known as cerebromedullospinal disconnection, de-efferented state, pseudocoma, and ventral pontine syndrome. It is extremely rare for any significant motor function to return quickly to such patients, and they find it extremely difficult, if not impossible, to produce speech or make purposeful gestures, such as smiling, raising their eyebrows, sniffing, whistling or writing. At present, communicating with locked-in patients is almost impossible. Indeed, it can even be difficult for doctors to distinguish between, and therefore diagnose, patients who are in a coma or vegetative state and those who appear to be but are in fact fully conscious and aware of their surroundings, i.e. those suffering from locked-in syndrome. Other examples of conditions that can render a subject unable to make a purposeful gesture include, but are not limited to, aphasia, stroke, and dysarthria.

Currently, no apparatus is available for producing speech that is specifically designed to work with patients with complete or partial loss of voluntary muscle control affecting communication in the intensive care unit (ICU) or home. There is therefore a need for an improved communication system to assist such patients.

The invention is made in this context.

Summary of the Invention

According to the present invention, there is provided a communication apparatus for a subject unable to speak or make a purposeful gesture, the apparatus comprising: at least one sensor arranged, in use, to detect modulations in the subject's breathing; a signal processor arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and communication means arranged to perform an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.

Tests conducted by the inventors have demonstrated that the apparatus of the invention allows a person to communicate by using breath modulations. This will allow a subject or patient who is otherwise unable to make a purposeful gesture (such as smiling, raising their eyebrows, sniffing, whistling or writing) to communicate with medical staff and family members, and thereby greatly improve their quality of life. Advantageously, operation of the apparatus is simple to learn, and the apparatus will work in conjunction with most forms of spontaneous modes of ventilation, or invasive ventilatory support when spontaneous respiratory efforts are present, for example when a tracheostomy tube has been inserted into the subject's trachea.

Advantageously, and preferably, there does not need to be any unique coding sequence that forces the subject to modulate their breathing in a particular way in order to communicate. Instead, the apparatus will allow the subject to construct, over a "learning period," their own symbolic language of breathing modulation patterns that represent ideas, words, phrases, etc. according to their own physical ability and personal preference. The subject thus will be able to build their own vocabulary aided by the apparatus, instead of being forced to conform to a pre-coded or rule-based communication format.

It will be understood that patients require not only the use of voluntary control, but also significant muscular activity or power, in order to produce a sniff, laugh or speech. Accordingly, some patients may be unable to generate or produce these actions at all, as they can be too weak. Advantageously, detecting the breath modulations in the form of a changing breathing pattern using the apparatus of the invention is mainly dependent on voluntary control. Since no additional effort is required for a subject to modulate their pre-existing spontaneous breathing patterns, the apparatus of the invention can enable a subject to communicate regardless of their level of control over muscular activity or power.

The apparatus may therefore be arranged to be used by a patient who cannot speak due to a loss of voluntary muscle function, which affects their speech-producing mechanisms and gestures, for example a patient who cannot otherwise communicate because they have suffered a neurological injury, or who cannot communicate effectively as they have had some kind of physical injury or muscle weakness. For example, in some embodiments the apparatus may be arranged to be used by a patient suffering from 'Locked-in Syndrome'.

The modulations in the subject's breathing may be detected via the upper airway, including the nose, nasal cavity, mouth, oral cavity, nasopharynx and larynx, and/or anywhere in the breathing circuit, for example a tracheostomy tube. The apparatus may comprise a face mask to which the at least one sensor is connected, where the face mask is, in use, placed over the subject's nose, mouth and/or trachea. In embodiments where the sensor detects modulations in the subject's breathing through the nose, the apparatus may comprise a nasal cannula, which, in use, is inserted into a nasal orifice; and where the sensor detects modulations in the subject's breathing through a breathing circuit, the apparatus may comprise a tube connector, which, in use, is inserted into or connected to the breathing circuit. In some embodiments, the communication apparatus of the first aspect may comprise at least one sensor arranged, in use, to detect modulations in a subject's breathing, characterised in that the detected modulations are not detected via the nose, in particular the nasal cavity or the nasopharynx. The apparatus may be arranged, in use, to detect modulations in the subject's breathing via the mouth, oral cavity and larynx, and/or anywhere in the breathing circuit, for example a tracheostomy tube.

In some embodiments, the signal processor is further arranged to only detect peaks in the wavelet representation with an amplitude higher than a threshold value. The signal processor may be arranged to set the threshold value as a predefined fraction of the maximum peak amplitude in the wavelet representation of the breath signal, for example 50% of the maximum peak amplitude. In some embodiments, the signal processor is arranged to use a weighted centroid method to detect the at least one peak. In some embodiments, the signal processor is arranged to use a k nearest neighbour (KNN) method to determine whether the modulations in the subject's breathing match the predefined breath signature.

In some embodiments, the communication apparatus further comprises a signal pre-processor arranged to smooth the breath signal obtained from the at least one sensor and/or subtract a DC offset from said breath signal.

In some embodiments, the action performed by the communication means comprises controlling a cursor to move in a predefined direction on a display unit. For example, the cursor may be controlled to spell out arbitrary words or phrases, or may be controlled to select other user interface elements such as images corresponding to certain activities or requests.

In some embodiments, the action performed by the communication means comprises generating speech associated with the predefined breath signature. The speech generated by the communication means can take different forms. In some embodiments, the communication apparatus further comprises one or more speakers arranged to reproduce the speech generated by the communication means as audio output. Alternatively, or in combination with audio output, the communication means can be configured to output a different representation of the generated speech, for example by generating a textual representation of the speech. The generated text can then be displayed on a screen or sent to an external device, for example via a network connection or other suitable communications link. The communication apparatus may further comprise a face mask to which the at least one sensor is connected, which face mask is, in use, placed over the subject's nose, mouth and/or trachea.

The modulations in the subject's breathing may comprise modulations in the pressure and/or timing of the breathing. For example, the modulations in the pressure and/or timing of a subject's breathing may be in the form of amplitude, frequency and/or phase.

In some embodiments, in use, the apparatus is first calibrated in order to establish a benchmark or control.

Advantageously, the apparatus may be used as a diagnostic tool to help doctors distinguish between suspected brain-dead patients (e.g. those in a coma or vegetative state) and those who are conscious in ambiguous circumstances, such as locked-in patients (with intact spontaneous respiratory/breathing activity, which may be adequate, requiring no ventilatory support, or inadequate, requiring ventilatory support).

According to a second aspect of the invention, there is provided use of the apparatus according to the first aspect in diagnosis or therapy.

According to a third aspect of the invention, there is provided a computer-implemented method of communicating via breathing modulations, the method comprising:

performing a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor arranged, in use, to detect modulations in the subject's breathing; detecting at least one peak in the wavelet representation of the breath signal; determining whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature; and performing an action associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature.

According to a fourth aspect of the invention, there is provided a non-transitory computer-readable storage medium arranged to store computer program instructions which, when executed, perform a method according to the third aspect and/or any other methods disclosed herein.

All of the features described herein (including any accompanying claims, abstract and drawings), and/or all of the steps of any method or process so disclosed, may be combined with any of the above aspects in any combination, except combinations where at least some of such features and/or steps are mutually exclusive.

Brief Description of the Drawings

Embodiments of the present invention will now be described, by way of example only, with reference to the accompanying drawings, in which:

Figure 1 schematically illustrates apparatus for a subject unable to speak or make a purposeful gesture, according to an embodiment of the present invention;

Figure 2 illustrates locations on a patient where pressure and flow signatures can be measured, according to an embodiment of the present invention;

Figure 3 is a graph showing an example of a breath signal outputted by the sensor of the apparatus shown in Fig. 1, according to an embodiment of the present invention;

Figure 4 is a flowchart showing a method of generating speech from breathing modulations, according to an embodiment of the present invention; and

Figure 5 illustrates an example of a wavelet representation of the breath signal shown in Fig. 3, according to an embodiment of the present invention.

Detailed Description

Referring now to Fig. 1, a communication apparatus is schematically illustrated according to an embodiment of the present invention. The apparatus can be configured for use by a subject who is unable to speak or make a purposeful gesture. In the present embodiment, the communication apparatus comprises at least one sensor 102 arranged, in use, to detect modulations in the subject's breathing. An example of a breath signal outputted by the sensor of the apparatus shown in Fig. 1 is illustrated in Fig. 3. In the present embodiment the apparatus comprises a single sensor 102, but in other embodiments a plurality of sensors may be used. The apparatus further comprises a signal processor 106 arranged to perform a continuous wavelet transform to obtain a wavelet representation of a breath signal received from the at least one sensor, detect at least one peak in the wavelet representation of the breath signal, determine whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature.

Depending on the embodiment, the signal processor 106 may be implemented in hardware or software. In the present embodiment a software implementation is used, and the signal processor 106 comprises a processing unit which may include one or more general-purpose processors. The processing unit 106 is configured to execute computer program instructions stored in memory 108, which can be any suitable non-transitory computer-readable storage medium. When executed, the computer program instructions cause the processing unit 106 to perform any of the methods disclosed herein. An example of a method performed by the processing unit 106 is described later with reference to Fig. 4.

Depending on the quality of the breath signal provided by the sensor 102, the apparatus may optionally comprise a signal pre-processing unit 104 arranged to smooth the breath signal obtained from the at least one sensor and/or subtract a DC offset from said breath signal, and to send the pre-processed breath signal to the signal processor 106. In the present embodiment the signal pre-processing unit 104 is arranged to smooth the breath signal using a 5-point moving average filter, and subtract the signal mean in order to remove the DC offset. The filtering may be carried out in real time in order to remove any high-frequency noise and to improve the signal quality. The apparatus further comprises a communication unit in the form of speech generating means 110 arranged to generate speech associated with the predefined breath signature if the modulations in the subject's breathing match said predefined breath signature. The communication unit can comprise any suitable mechanism for enabling the subject to communicate, that is, any mechanism capable of performing an action which can convey information. The generated speech may comprise single words (e.g. yes/no), sentences or expressions etc. Depending on the embodiment, the speech generating means 110 may be physically separate from the signal processor 106, or, when a software implementation is used, the speech generating means 110 may comprise further software instructions executed on the same general-purpose processing unit as is used for the signal processor 106.
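As a minimal sketch of the pre-processing described above (the function name and the use of a NumPy array of pressure samples are assumptions, not taken from the patent), a 5-point moving average followed by mean subtraction could look like:

```python
import numpy as np

def preprocess_breath_signal(raw: np.ndarray) -> np.ndarray:
    """Smooth a raw breath signal with a 5-point moving average,
    then subtract the signal mean to remove the DC offset."""
    kernel = np.ones(5) / 5.0
    smoothed = np.convolve(raw, kernel, mode="same")  # removes high-frequency noise
    return smoothed - smoothed.mean()                 # removes the DC offset
```

The output has the same length as the input and is exactly zero-mean, which simplifies the later wavelet-domain thresholding.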

In the present embodiment the speech generator is arranged to generate speech by retrieving predefined speech segments from a speech database 112. In the present embodiment the speech database 112 is included in local storage within the apparatus, however, in another embodiment the speech generator 110 may be arranged to communicate with a remote database. For example, the speech database may be stored on an Internet server and accessed via any suitable network connection. The speech database 112 is arranged to store various predefined speech segments, each associated with a different predefined breath signature. In the present embodiment the speech segments are stored in the form of pre-recorded sound files, and the apparatus further comprises a speaker 114 arranged to reproduce the speech generated by the speech generating means.

In other embodiments, instead of retrieving pre-recorded audio files the speech generator 110 may generate speech in a different manner. For example, speech segments may be stored as text files, which can be converted into audible speech by the speech generator 110 using a suitable text-to-speech algorithm. Alternatively, instead of outputting audible speech via a speaker, in another embodiment the speech generator 110 may be arranged to output a textual representation of the generated speech to a display unit and/ or another device.
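The signature-to-speech lookup described above can be sketched as a simple mapping; the database entries, IDs and file names below are purely illustrative, not taken from the patent:

```python
# Hypothetical speech database mapping breath-signature IDs to stored
# speech segments (a text form for display, and a pre-recorded sound file).
SPEECH_DB = {
    "sig_yes": {"text": "yes", "audio": "yes.wav"},
    "sig_no":  {"text": "no",  "audio": "no.wav"},
}

def generate_speech(signature_id, mode="text"):
    """Return the stored output for a matched breath signature: the text
    string for display, or the path of a pre-recorded sound file to play."""
    entry = SPEECH_DB.get(signature_id)
    if entry is None:
        return None  # no matching signature: no speech is generated
    return entry["text"] if mode == "text" else entry["audio"]
```

In a remote-database embodiment, `SPEECH_DB` would be replaced by a network lookup, with the rest of the interface unchanged.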

In yet another embodiment, instead of generating speech by retrieving predefined speech from a database, the communication unit 110 may be arranged to perform another type of action in response to a particular breathing pattern. For example, the communication unit 110 may be arranged to control an on-screen cursor on a display unit to move in a certain direction when a particular breathing pattern is detected. In this way the patient may modulate their breathing pattern to move the cursor, for example to point to on-screen user interface (UI) elements and/or to spell words or to select phrases from an on-screen graphical interface. This embodiment enables a subject to control the communication unit 110 to generate arbitrary speech rather than selecting pre-programmed words or phrases, by using the cursor to spell out arbitrary words or phrases as necessary. In yet a further embodiment, the apparatus may not necessarily generate speech but can include a UI comprising a plurality of images representing different activities or requests, for example sleep/food/family etc., and the subject can communicate by controlling the cursor to move to the desired activity or request.

Referring now to Fig. 2, various locations on a patient where pressure and flow signatures can be measured are illustrated, according to an embodiment of the present invention. In the present embodiment, the sensor 102 of the communication apparatus is configured to be connected via a tube 7 to an existing valve on a standard face mask 5 placed over the patient's nose 16 and mouth 18, or to an outlet valve 22 which is provided on a tracheostomy tube 23, which is connected to the patient's trachea 20. The sensor 102 can thereby detect the changes, deviations and/or modifications in the patient's breathing pattern, for example modulations in the pressure and/or timing of the breathing. The apparatus can work with or without the use of a ventilator (not shown). When no ventilator is used, the patient's breathing patterns can be detected by a nasal cannula 15, which is inserted at the nasal orifice 16, and/or by breathing through the mouth via the face mask 5. However, when a ventilator is used, the patient's breathing can be detected directly through the trachea 20 via the tracheostomy tube 23 and associated valve 22 or breathing circuit.

The modulations in the pressure and/or timing of a subject's breathing may be in the form of amplitude, frequency and/or phase. The magnitude of the modulations will depend on the breathing circuit being monitored, as well as on the patient's medical condition and capability. For example, when using a continuous positive airway pressure (CPAP) mask to aid breathing, typical normal breath periods may be about 2.7 seconds, whilst faster modulation rates may have a period of about 1.2 seconds.

Therefore, in one embodiment, the modulations that are detected may comprise changes in the pressure of the air that is breathed in and/or out by the subject.

Measuring pressure change is advantageous because it can be measured outside of the respiratory/ventilation system, which is a practical approach when interfacing with a ventilator, whilst the much smaller flow changes are more complicated to measure, requiring instrumentation within the respiratory system. Air pressure may be detected using any commonly available pressure sensor. For example, the change in air pressure for normal breathing whilst using a face mask under CPAP ventilation may be between 4 and 18 cm H2O, with a respiration rate of around 20 breaths per minute.

Preferably, in use, the apparatus is first calibrated during normal breathing in order to establish a benchmark or control. The calibration may be performed at a certain time of day and/or while the subject is performing a certain activity. For example, calibration may be performed while the subject is awake, or asleep, or having meals. In this way, the calibrated benchmark may provide an indication of the subject's normal breathing pattern during routine activity. From this, it is possible to determine when the subject has modulated their breathing, for example by pausing or holding their breath. Preferably, the apparatus is capable, in use, of locating and extracting maximum and minimum pressure data from the signal, which preferably corresponds to air pressure of the subject's breathing. The oscillatory nature of the subject's respiration means that the breathing period for a breath can be calculated by peak detection of pressure minima and maxima. Thus, in one embodiment, the apparatus may be capable, in use, of determining breath period or breath frequency from the signal, as a function of time and in the form of a signal spectrum or multi-resolution scale/position data. For example, a multi-resolution technique may comprise wavelet transforms or Hidden Markov Models. Preferably, detecting pressure changes over time produces a waveform, and wavelet analysis may then be used to locate frequency modulations within the breathing cycle. Thus, the apparatus is preferably arranged to detect frequency modulation of the normal breathing signal in terms of pressure versus time.
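The breath-period calculation described above, i.e. peak detection of pressure maxima followed by differencing, might be sketched as follows; the 0.5 s minimum peak spacing is an assumption chosen to suppress sub-breath ripples, not a value from the patent:

```python
import numpy as np
from scipy.signal import find_peaks

def breath_periods(pressure: np.ndarray, fs: float) -> np.ndarray:
    """Estimate breath periods (in seconds) from the spacing of
    successive pressure maxima in the breath signal."""
    # require peaks to be at least 0.5 s apart, so noise ripples are ignored
    peaks, _ = find_peaks(pressure, distance=int(0.5 * fs))
    return np.diff(peaks) / fs  # sample spacing between maxima -> seconds
```

For the CPAP example above, a normal breath signal would yield periods around 2.7 s, with purposeful modulation showing up as shorter periods around 1.2 s.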

Referring now to Fig. 4, a flowchart is illustrated showing a method of generating speech from breathing modulations, according to an embodiment of the present invention. Depending on the embodiment, the method can be performed by hardware such as a field programmable gate array (FPGA) or application-specific integrated circuit (ASIC), or may be implemented by software instructions executed on a processor, as described above with reference to Fig. 1.

The method shown in Fig. 4 can be performed on a signal received directly from the sensor, or after performing pre-processing steps such as smoothing and DC offset subtraction. First, in step S401 the signal processor 106 performs a continuous wavelet transform to obtain a wavelet representation of the breath signal received from the sensor 102. In the present embodiment a Daubechies 4 (db4) wavelet scaled 1 to 128 is used, however, in other embodiments different forms of wavelet may be used in the continuous wavelet transform, for example a Morlet wavelet. Following the continuous wavelet transform, a wavelet representation of the breath signal is obtained. The wavelet representation comprises a 2D scale/space map of wavelet amplitude peaks.
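Step S401 could be prototyped in plain NumPy; the sketch below uses a Morlet wavelet (mentioned above as an alternative, since db4 is primarily a discrete-transform wavelet and most CWT toolchains do not accept it) over the same 1 to 128 scale range:

```python
import numpy as np

def cwt_morlet(signal: np.ndarray, scales=range(1, 129), w=5.0) -> np.ndarray:
    """Minimal continuous wavelet transform with a Morlet wavelet.
    Returns a 2D scale/time amplitude map of shape (n_scales, n_samples)."""
    out = np.empty((len(scales), len(signal)))
    for i, s in enumerate(scales):
        t = np.arange(-4 * s, 4 * s + 1) / s  # wavelet support: 8 widths
        wavelet = np.pi ** -0.25 * np.exp(1j * w * t - t ** 2 / 2) / np.sqrt(s)
        # cross-correlation with the conjugate wavelet at this scale
        out[i] = np.abs(np.convolve(signal, np.conj(wavelet)[::-1], mode="same"))
    return out
```

Each row of the returned map is the amplitude response at one scale, so a breath modulation at a given rate produces a bright peak at the matching scale/time location, which is exactly what the thresholding in step S402 isolates.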

Next, in step S402 the signal processor 106 detects any peaks in the wavelet representation of the breath signal. To identify a particular breath signature, one or more peaks are required. In the present embodiment, the signal processor is arranged to apply thresholding to the wavelet representation so that only peaks with an amplitude higher than a threshold value are detected. Thresholding allows the peaks to be isolated in the scale/space domain. Figure 5 illustrates the 2D wavelet map of the breath signal shown in Fig. 3 after applying thresholding at 50% of the maximum peak amplitude. In other embodiments a different threshold may be set, for example a fixed amplitude or a different fraction of the maximum peak amplitude. In the present embodiment, the signal processor 106 is arranged to determine coordinates of each peak in the wavelet space by using a weighted centroid method. In other embodiments, any other suitable method of identifying a location of the peak within the scale/space domain may be used. The scale/space coordinates obtained from the wavelet map shown in Fig. 5 using a weighted centroid approach are shown in the following table:
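The thresholding and weighted-centroid localisation of step S402 might be sketched with SciPy's labelling utilities (the 50% threshold fraction follows the embodiment above; the function name and array layout are illustrative):

```python
import numpy as np
from scipy import ndimage

def peak_centroids(wmap: np.ndarray, frac: float = 0.5):
    """Keep only wavelet amplitudes above frac * maximum, label the
    surviving connected regions, and return each region's
    amplitude-weighted centroid as (scale_index, time_index)."""
    mask = wmap >= frac * wmap.max()          # threshold at frac of max peak
    labels, n = ndimage.label(mask)           # connected regions = candidate peaks
    return ndimage.center_of_mass(wmap * mask, labels, range(1, n + 1))
```

The returned coordinate pairs are the scale/space locations that are subsequently compared against the stored signature locations in step S403.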

After determining the location of the one or more peaks in the wavelet representation of the breath signal, in step S403 the signal processor 106 determines whether the modulations in the subject's breathing match a predefined breath signature by comparing a location of the at least one detected peak to a known location of at least one peak in the predefined breath signature. For example, the detected peak locations may be compared against known peak locations for a plurality of different predefined breath signatures stored in the speech database, as described above.

The signal processor 106 may use any suitable pattern recognition algorithm to check whether the detected breath modulations match one of the predefined breath signatures. In the present embodiment the signal processor 106 is arranged to use a k nearest neighbour (KNN) method to determine whether the modulations in the subject's breathing match the predefined breath signature. In the present embodiment a value of k=3 is used, although in other embodiments a different value may be chosen. The optimum value of k may depend on the size of the training set and the dimensionality of the problem, that is, the number of segmented features.

Investigations by the inventors have found that setting k=3 or 5 ensures reliable detection of different breathing patterns for a relatively small number of predefined breathing patterns, whereas setting k to a higher value decreases the reliability.

However, the signal processor may be arranged to set a higher value of k when it is necessary to distinguish between a larger number of breathing patterns, that is, when the number of predefined breath signatures increases.

Furthermore, embodiments of the invention are not limited to a KNN pattern recognition method. In other embodiments a different type of pattern recognition algorithm may be used, for example a decision tree, neural network, support vector machine, fuzzy logic, and so on. In some embodiments, a combination of multiple different pattern recognition methods may be used.
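A KNN classifier of the kind described, operating on peak scale/space coordinates with k=3, might look like the following minimal sketch (the training coordinates and the "yes"/"no" class labels are invented for illustration):

```python
import numpy as np
from collections import Counter

def knn_classify(query, features, labels, k=3):
    """Label a peak-coordinate feature vector by majority vote among the
    k nearest training vectors (Euclidean distance)."""
    d = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(d)[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical training set: one (scale, sample) peak coordinate per recording.
features = np.array([[20, 50], [21, 52], [19, 49],      # signature "yes"
                     [40, 150], [41, 148], [39, 151]])  # signature "no"
labels = ["yes", "yes", "yes", "no", "no", "no"]

print(knn_classify(np.array([22, 55]), features, labels))  # → yes
```

In practice a signature with several peaks would simply use a longer feature vector, and an unmatched query (all neighbours too distant) can be rejected by adding a distance threshold, consistent with the "no match, no speech" behaviour described below.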

If a match to a breath signature is found in step S403, then in step S404 the speech generator generates the speech associated with the matched breath signature. On the other hand, if no match is found at step S403, then no speech is generated.

The process of generating speech from a measured breath signal, as described above with reference to Fig. 4, may be referred to as the 'translation phase'. In an embodiment of the invention, the communication apparatus is initially configured during a 'learning phase', in which a database of known breath signatures is assembled using a training method as follows. Firstly, the user records a modulated breath signal over a fixed sampling period, for example 10 seconds. In the present embodiment, this signal is recorded at a sampling rate of 100 Hertz (Hz) to give a total of 1000 pressure samples. Then, the user repeats this process a number of times using the same purposely-made breath signal, for example at least a further four times to obtain five recordings. Next, the five recordings are processed using steps S401 and S402 of the method shown in Fig. 4 to build up a database of predefined breathing patterns.
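The learning-phase recording loop described above (five repeats of 10 s at 100 Hz, i.e. 1000 samples per recording) can be sketched as follows; `record_breath` and `extract_features` are hypothetical stand-ins for the sensor read and for the feature extraction of steps S401/S402:

```python
import numpy as np

FS = 100        # sampling rate, Hz
DURATION = 10   # seconds -> 1000 pressure samples per recording
REPEATS = 5    # one initial recording plus four repeats

def record_breath():
    """Stand-in for the sensor read: a noisy 0.3 Hz pressure trace."""
    t = np.arange(FS * DURATION) / FS
    return np.sin(2 * np.pi * 0.3 * t) + 0.05 * np.random.randn(t.size)

def extract_features(signal):
    """Placeholder for steps S401/S402 (CWT plus peak centroids)."""
    return np.array([signal.mean(), signal.std()])

# Build the database entry for one purposely-made breath signature.
database = {}
recordings = [record_breath() for _ in range(REPEATS)]
database["yes"] = [extract_features(r) for r in recordings]

print(len(database["yes"]), database["yes"][0].shape)
```

Each key of the database would then be bound to a word or phrase, as described in the next step.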

During the learning phase, a pattern recognition algorithm similar to the one used in step S403 may be applied to check that the reference patterns being generated are sufficiently different from one another to be capable of being distinguished. After processing the breath recordings using steps S401 to S403, a word or phrase is selected to be associated with this group of signals. The word or phrase may be preprogrammed, or may be user-defined. The process can be repeated to set up as many words/phrases as necessary.

The learning phase may further include a step of verifying the database for data quality and veracity using a "leave one out" cross-verification process, whereby if any one class is in danger of being non-unique, then the user is prompted to choose a different breath pattern for that class or to record a new set of samples for that pattern.
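The "leave one out" cross-verification can be sketched as follows: each stored recording is classified against all of the others, and a class whose samples are misclassified is flagged so the user can choose a different breath pattern or re-record. The KNN helper and the example coordinates are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np
from collections import Counter

def knn(query, feats, labs, k=3):
    """Majority vote among the k nearest stored feature vectors."""
    idx = np.argsort(np.linalg.norm(feats - query, axis=1))[:k]
    return Counter(labs[i] for i in idx).most_common(1)[0][0]

def leave_one_out_ok(features, labels, k=3):
    """Classify each stored sample against the remaining samples; return
    False if any class is in danger of being non-unique."""
    features = np.asarray(features, dtype=float)
    for i in range(len(features)):
        rest = np.delete(features, i, axis=0)
        rest_labels = [l for j, l in enumerate(labels) if j != i]
        if knn(features[i], rest, rest_labels, k) != labels[i]:
            return False  # prompt the user to re-record this pattern
    return True

features = [[20, 50], [21, 52], [19, 49], [40, 150], [41, 148], [39, 151]]
labels = ["yes", "yes", "yes", "no", "no", "no"]

print(leave_one_out_ok(features, labels))  # → True
```

If two signatures produced overlapping peak coordinates, the held-out samples would start landing in the wrong class and the check would fail, triggering the re-recording prompt described above.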

During use of the communication apparatus, the modulations detected by the sensor may include modulations in the timing of the subject's breathing, as they hold their breath for a defined period of time. In the example shown in Fig. 3, the subject increases and then decreases the frequency of their breathing so as to match a predefined breath signature. The breath signature illustrated in Fig. 3 is merely exemplary, and should not be seen as being limiting. Embodiments of the invention can recognise any type of low-noise breath pattern, and are not limited to detecting patterns which comprise regularised, ordered bursts of frequency such as the example shown in Fig. 3.

The communication apparatus of the invention can act as a diagnostic tool for helping doctors to distinguish between suspected brain-dead patients (coma or vegetative state), who are either breathing spontaneously or have intact spontaneous breathing efforts supported by a ventilator, and those who are conscious in ambiguous circumstances, such as locked-in patients. The communication apparatus also allows speech communication in the ICU between patients and the outside world, with particular application to those who are unable to communicate due to an impaired speech production mechanism and loss of the ability to make purposeful gestures, with or without breathing support on ventilators.

The inventors have shown that the communication apparatus can be effectively used to allow patients to communicate by the simple modulation of their breathing patterns, for example by hyperventilating or holding their breath. Minute voluntary changes in the breathing circuit such as pressure, flow or phase/time (i.e. holding the breath or pausing) can initiate and maintain a dialogue between patient and outside world.

Further advantages of the communication apparatus are:

• Its applicability to patients on spontaneous ventilation, including those who are on ventilator support;

• The use of modulation of breathing patterns as a form of trigger for speech production by a computerized system; and

• The use of a pattern recognition algorithm to adapt a communications protocol to a patient's individual breathing characteristics and clinical condition.

The communication apparatus will help a wide range of patients, from those with locked-in syndrome to patients who are unable to verbally communicate for any reason, with communication and diagnosis.

Whilst certain embodiments of the invention have been described herein with reference to the drawings, it will be understood that many variations and modifications will be possible without departing from the scope of the invention as defined in the accompanying claims.