

Title:
SIGNAL PROCESSOR FOR DETERMINING AN ALERTNESS LEVEL
Document Type and Number:
WIPO Patent Application WO/2013/008150
Kind Code:
A1
Abstract:
The present invention relates to a signal processor (10) and method for determining an alertness level of a user. The signal processor (10) is adapted to receive a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time, detect at least one yawn event (16) and/or speech event based on the respiration signal (12), and determine an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.

Inventors:
WEFFERS-ALBU MIRELA ALINA (NL)
MUEHLSTEFF JENS (NL)
DE WAELE STIJN (US)
BEREZHNYY IGOR (NL)
Application Number:
PCT/IB2012/053434
Publication Date:
January 17, 2013
Filing Date:
July 05, 2012
Assignee:
KONINKL PHILIPS ELECTRONICS NV (NL)
WEFFERS-ALBU MIRELA ALINA (NL)
MUEHLSTEFF JENS (NL)
DE WAELE STIJN (US)
BEREZHNYY IGOR (NL)
International Classes:
A61B5/087; A61B5/113; A61B5/16; A61B5/18
Domestic Patent References:
WO 1994/024935 A1 (1994-11-10)
WO 2009/040711 A2 (2009-04-02)
Foreign References:
US 2007/0282227 A1 (2007-12-06)
US 2010/0152600 A1 (2010-06-17)
JP 2010-204984 A (2010-09-16)
JP 2010-155072 A (2010-07-15)
US 7,397,382 B2 (2008-07-08)
Attorney, Agent or Firm:
VAN EEUWIJK, Alexander, H., W. (AE Eindhoven, NL)
Claims:

1. A signal processor (10) for determining an alertness level of a user, the signal processor (10) adapted to:

- receive a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time,

- detect at least one yawn event (16) and/or speech event based on the respiration signal (12), and

- determine an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.

2. The signal processor of claim 1, adapted to detect the at least one yawn event by detecting an amplitude peak if the amplitude of the respiration signal exceeds a preset threshold (17).

3. The signal processor of claim 2, wherein the preset threshold (17) is selected to be less than an amplitude of a speech event and/or an amplitude of a normal breathing of the user (1).

4. The signal processor of claim 1, adapted to determine an amplitude frequency distribution of the amplitudes over time, or its histogram representation, of at least a part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution, or its histogram representation.

5. The signal processor of claim 4, wherein the part of the respiration signal comprises exactly one yawn event.

6. The signal processor of claim 1, adapted to detect the at least one yawn event (16) and/or speech event using a machine learning algorithm.

7. The signal processor of claim 6, adapted to receive at least one respiration training signal selected from the group comprising a respiration training signal indicative of normal breathing of the user (1), a respiration training signal indicative of yawning of the user (1), and a respiration training signal indicative of speech of the user (1).

8. The signal processor of claim 6, adapted to use a clustering technique to determine a yawn event cluster (32) and/or a speech event cluster (34) using the at least one respiration training signal.

9. The signal processor of claim 1, adapted to determine the alertness level (14) based on at least one criterion selected from the group comprising determination of a low alertness level if an amount or a frequency of detected yawn events (16) is above a preset threshold, determination of a medium alertness level if both at least one yawn event (16) and at least one speech event are detected, and determination of a high alertness level if only at least one speech event is detected and no yawn event is detected.

10. A system (100) for determining an alertness level of a user, the system (100) comprising:

- the signal processor (10) of claim 1, and

- a respiration sensor (20) providing the respiration signal of the user (1).

11. The system of claim 10, wherein the respiration sensor (20) is a radar based respiration sensor.

12. The system of claim 11, wherein the radar based respiration sensor (20) is disposed on or integrated into a seat belt (21) wearable by the user (1) or a steering wheel.

13. The system of claim 10, further comprising a feedback unit (30) adapted to provide feedback to the user (1) based on the determined alertness level (14).

14. A method for determining an alertness level of a user, the method comprising:

- receiving a respiration signal (12) of a user (1), the respiration signal (12) having an amplitude over time,

- detecting at least one yawn event (16) and/or speech event based on the respiration signal (12), and

- determining an alertness level (14) of the user based on the at least one detected yawn event (16) and/or detected speech event.

15. A computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 14 when said computer program is carried out on the computer.

Description:
Signal processor for determining an alertness level

FIELD OF THE INVENTION

The present invention relates to a signal processor and a method for determining an alertness level of a user, in particular to detect drowsiness of a user. The present invention further relates to a system comprising such signal processor and a computer program for implementing such method.

BACKGROUND OF THE INVENTION

There are two general ways of detecting drowsiness (or fatigue) of a user, in particular in the automotive context for detecting drowsiness of a driver. On the one hand, there are techniques that focus on the car behavior and/or context information to determine the state of the driver. These techniques can be inaccurate as they do not focus on the user (e.g. driver), but on the car and/or the context. Furthermore, these techniques provide relevant information only when the driver consistently does not have the car under full control for a certain time duration, meaning that traffic risk has already been high for some time.

On the other hand, there are techniques that focus on the user (e.g. driver) to determine the state of the user. For example, US 7,397,382 B2 discloses a drowsiness detecting apparatus having a pulse wave sensor and a determination circuit. The sensor is provided to a steering wheel to detect a pulse wave of a vehicle driver gripping the steering wheel. The determination circuit generates a thorax pressure signal indicative of the depth of breathing by envelope-detecting a pulse wave signal of the sensor and determines whether the driver is drowsy by comparing a pattern of the thorax pressure signal with a reference pattern. A depth of breathing of a person is detected, and drowsiness of the person is determined when the depth of breathing falls in a predetermined breathing condition including at least one of a sudden decrease in the depth of breathing and a periodic repetition of deep breathing and shallow breathing.

However, this drowsiness detection might not be reliable in all situations that can occur, for example in a situation with a high noise level.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a signal processor and a method for determining an alertness level of a user, in particular for detecting drowsiness of the user, that provide a more reliable, in particular more robust, determination or detection. It is a further object of the present invention to provide a system comprising such a signal processor and a computer program for implementing such a method.

In a first aspect of the present invention a signal processor for determining an alertness level of a user is presented, the signal processor being adapted to receive a respiration signal of a user, the respiration signal having an amplitude over time, to detect at least one yawn event and/or speech event based on the respiration signal, and to determine an alertness level of the user based on the at least one detected yawn event and/or detected speech event.

In a further aspect of the present invention a system for determining an alertness level of a user is presented, the system comprising the signal processor of the invention and a respiration sensor providing the respiration signal of the user.

In a further aspect of the present invention a method for determining an alertness level of a user is presented. The method comprises receiving a respiration signal of a user, the respiration signal having an amplitude over time, detecting at least one yawn event and/or speech event based on the respiration signal, and determining an alertness level of the user based on the at least one detected yawn event and/or detected speech event.

In yet a further aspect of the present invention a computer program is presented comprising program code means for causing a computer to carry out the steps of the method of the invention when said computer program is carried out on the computer.

The basic idea of the invention is to detect yawn event(s) and/or speech event(s) based on a respiration signal provided by a respiration sensor, and to determine the alertness level of the user based on the detected yawn event(s) and/or detected speech event(s). A yawn event means that the respiration signal indicates that the user is yawning; a speech event means that the respiration signal indicates that the user is speaking. A yawn event and a speech event are each a signal anomaly in the respiration signal. In particular, if a low alertness level is determined, drowsiness of the user can be detected. For example, if a yawn event is detected, it is determined that the alertness level of the user is low. For example, if a speech event is detected, it is determined that the alertness level of the user is high. In particular, both a yawn event and a speech event can be detected based on one single respiration signal or respiration sensor. Thus, only one respiration sensor is needed to reliably detect the alertness level of the user. Each of a yawn event and a speech event can be clearly distinguished from normal breathing of a user based on the respiration signal. In particular, a classification of the respiration signal or pattern can be performed, thereby classifying it into a yawn event, a speech event, or normal breathing.

For example, in the automotive context, the use of the respiration signal to determine the state of the user or the alertness level is particularly advantageous for detecting drowsiness of the user (e.g. driver) early in time, for example before the user has lost full control over the car he/she is driving due to fatigue. Compared to, for example, the use of a camera, using a respiration sensor for measuring or providing a respiration signal of the user, in order to detect yawn event(s) and/or speech event(s), has at least one of the following advantages: usability at night time, insensitivity to changes in illumination, insensitivity to special movements of the user (e.g. covering his/her mouth while yawning or speaking), and insensitivity to the clothing of the user (e.g. wearing thick winter clothes).

Preferred embodiments of the invention are defined in the dependent claims. It shall be understood that the claimed method and computer program have similar and/or identical preferred embodiments as the claimed signal processor or system and as defined in the dependent claims.

In one embodiment the signal processor is adapted to detect the at least one yawn event by detecting an amplitude peak if the amplitude of the respiration signal exceeds a preset threshold. This provides an easy and computationally inexpensive way of detecting the yawn event in the respiration signal.
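The threshold rule of this embodiment can be sketched in a few lines. This is a hypothetical illustration (function and parameter names are ours, not from the patent); since the text leaves the sign convention open, and the figures show yawns as deep excursions, the sketch tests the magnitude of the amplitude against the preset threshold and counts one event per contiguous run above it:

```python
def detect_yawn_events(signal, threshold):
    """Return start indices of runs where the amplitude excursion
    exceeds the preset threshold (one detected yawn event per run).

    `signal` is a list of amplitude samples; `threshold` is assumed to
    be chosen so that speech and normal breathing stay below it.
    """
    events = []
    above = False
    for i, amplitude in enumerate(signal):
        if abs(amplitude) > threshold:
            if not above:
                events.append(i)  # rising edge: a new event begins
            above = True
        else:
            above = False
    return events
```

A real implementation would add smoothing and a refractory period so that a single noisy yawn is not counted twice.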

In a variant of this embodiment the preset threshold is selected to be less than an amplitude of a speech event and/or an amplitude of a normal breathing of the user. In this way, a precise distinction in the respiration signal between yawning of the user and other activities of the user, such as speaking or normal breathing, can be made.

In a further embodiment the signal processor is adapted to determine an amplitude frequency distribution of the amplitudes over time, or its histogram representation, of at least a part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution, or its histogram representation. The histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time. In other words, the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges. A frequency distribution is thus a statistical measure to analyse the distribution of amplitudes of the respiration signal. In particular, the shape of a histogram representation of the frequency distribution can be determined, and the at least one yawn event and/or speech event can be detected based on the shape of the histogram representation. This provides for an easy and reliable detection.
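A minimal sketch of such an amplitude frequency distribution, with invented bin settings (the patent does not specify a bin count or amplitude range):

```python
def amplitude_histogram(window, num_bins=10, lo=-10.0, hi=10.0):
    """Amplitude frequency distribution of a respiration-signal window:
    count how often the amplitude falls into each of `num_bins` equal
    ranges between `lo` and `hi` (out-of-range values are clipped into
    the edge bins)."""
    width = (hi - lo) / num_bins
    counts = [0] * num_bins
    for amplitude in window:
        index = int((amplitude - lo) / width)
        index = max(0, min(num_bins - 1, index))  # clip to valid bins
        counts[index] += 1
    return counts
```

For a window containing a yawn, the deep inhalation populates the lowest bins; a window of normal breathing or speech leaves those bins empty, which is the signature visible in Figs. 12 and 14.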

In a variant of this embodiment the part of the respiration signal comprises exactly one yawn event. In this way a time window sized to only measure exactly one yawn event can be used. Exactly one yawn event can be reliably detected based on the amplitude frequency distribution, or its histogram representation, in particular the shape of its histogram representation. The part of the respiration signal (or time window) can be in the range of an average duration of a yawn event (or equal to or larger than an average duration of a yawn event), for example between 3 and 8 seconds, or about 5 seconds.

In another embodiment, the signal processor is adapted to determine at least one feature of at least part of the respiration signal, the at least one feature selected from the group comprising an amplitude frequency distribution (or its histogram representation), an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient (showing whether the respiration rate goes up or down), an average of amplitudes of respiration cycles, a median of amplitudes of respiration cycles, and an inclination coefficient (showing whether the amplitude goes up or down).
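A few of the listed features can be computed directly from per-cycle amplitudes. The sketch below reads "inclination coefficient" as a least-squares slope against sample index, which is one plausible interpretation; the helper names are ours:

```python
import statistics

def inclination_coefficient(values):
    """Least-squares slope of `values` against sample index: positive
    when the series trends up, negative when it trends down."""
    n = len(values)
    mean_x = (n - 1) / 2.0
    mean_y = sum(values) / n
    num = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(values))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return num / den

def epoch_features(cycle_amplitudes):
    """A few of the listed per-epoch features, computed from the
    amplitudes of the respiration cycles in one epoch."""
    return {
        "cycles_per_epoch": len(cycle_amplitudes),
        "mean_amplitude": statistics.mean(cycle_amplitudes),
        "median_amplitude": statistics.median(cycle_amplitudes),
        "amplitude_inclination": inclination_coefficient(cycle_amplitudes),
    }
```

The same slope helper applied to per-cycle durations would give the respiration-rate inclination mentioned in the list.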

In a further embodiment the signal processor is adapted to detect the at least one yawn event and/or speech event using a machine learning algorithm. This provides for a reliable detection. This embodiment can in particular be used in combination with the embodiment of determining an amplitude frequency distribution (or its histogram representation). The amplitude frequency distribution (or its histogram representation) can be used as an input for the machine learning algorithm. Also, at least one, in particular a number, of the features of the previous embodiment can be used as an input for the machine learning algorithm. In particular, the signal processor can be adapted to determine a multi-dimensional feature vector based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features.

In a variant of this embodiment the signal processor is adapted to receive at least one respiration training signal selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of speech of the user. In this way an adaptive system can be provided.

In a further variant of this embodiment the signal processor is adapted to use a clustering technique to determine a yawn event cluster and/or a speech event cluster using the at least one respiration training signal. This provides for an easy classification and/or visual representation of the classification.

In a further embodiment the signal processor is adapted to determine the alertness level based on at least one criterion selected from the group comprising determination of a low alertness level if an amount or a frequency of detected yawn events is above a preset threshold, determination of a medium alertness level if both at least one yawn event and at least one speech event are detected, and determination of a high alertness level if only at least one speech event is detected and no yawn event is detected. This provides for a reliable determination of the alertness level of the user, in particular for detecting drowsiness of the user.
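These criteria map onto a small decision rule. A hedged sketch (the yawn-count threshold is an invented example value, and the yawn-only fallback follows the statement elsewhere in the summary that a detected yawn event indicates a low alertness level):

```python
def alertness_level(yawn_events, speech_events, yawn_count_threshold=3):
    """Map detected events to an alertness level per the listed criteria:
    many yawns -> low; yawns and speech together -> medium; speech only
    -> high. With no events at all, no decision is made (None)."""
    if len(yawn_events) > yawn_count_threshold:
        return "low"
    if yawn_events and speech_events:
        return "medium"
    if speech_events and not yawn_events:
        return "high"
    if yawn_events:
        return "low"  # yawning without speech also suggests low alertness
    return None
```

Over a sliding time window, the event lists would come from the detectors described above, and the rule would be re-evaluated as new events arrive.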

In a further embodiment the respiration sensor is a radar-based respiration sensor. This provides an unobtrusive way of measuring the respiration signal of the user. Compared to a standard respiration detection, such as for example a body-worn respiration band or glued electrodes (e.g. used in a hospital), the radar-based respiration sensor provides increased usability and comfort. It is therefore particularly suitable for consumer applications (e.g. in a car). The radar-based respiration sensor is a contactless sensor. Thus, it is in particular suitable to be embedded in a small-sized system.

In a variant of this embodiment the radar based respiration sensor is disposed on or integrated into a seat belt wearable by the user or a steering wheel. This way, the respiration signal of the user can be unobtrusively measured, when the user is for example sitting in a car seat having a seat belt on.

In a further embodiment the system further comprises a feedback unit adapted to provide feedback to the user based on the determined alertness level. In a variant of this embodiment, the feedback unit is adapted to provide feedback if the determined alertness level is below a preset value. In this way, a warning can be provided to the user, in particular if it is determined that the alertness level is too low, for example when the user is drowsy and is about to fall asleep.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings:

Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment;

Fig. 2 shows a schematic representation of a user wearing a seat belt having a respiration sensor of a system according to an embodiment;

Fig. 3 shows a diagram of a respiration signal indicating normal breathing of a user;

Fig. 4 shows a diagram of a respiration signal indicating speech of a user;

Fig. 5 shows a diagram of a respiration signal indicating yawning of the user;

Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users;

Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users;

Fig. 8 shows a diagram of a respiration signal having yawn events detected by a signal processor, system or method according to a first embodiment;

Fig. 9 shows a diagram of a respiration signal, when the user speaks;

Fig. 10 shows an exemplary respiration signal;

Fig. 11 shows a first part of an exemplary respiration signal of Fig. 10, having exactly one yawn event;

Fig. 12 shows a histogram representation of the part of the respiration signal of Fig. 11, used by a signal processor, system or method according to a second embodiment;

Fig. 13 shows a second part of the respiration signal of Fig. 10, having a speech event;

Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13, used in a signal processor, system or method according to the second embodiment;

Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second embodiment;

Fig. 16 shows a respiration signal having yawn events and the mapping of the yawn events to points in the yawn event cluster of Fig. 15;

Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment; and

Fig. 18 shows a flow diagram of a method for determining an alertness level of the user according to another embodiment.

DETAILED DESCRIPTION OF THE INVENTION

Fig. 1 shows a schematic representation of a system for determining an alertness level of a user according to an embodiment of the present invention. The system 100 comprises a signal processor 10, and a respiration sensor 20 measuring or providing a respiration signal 12 of the user 1. The respiration signal 12 is transmitted from the respiration sensor 20 to the signal processor 10. The signal processor 10 receives the respiration signal 12 from the respiration sensor 20. The signal processor 10 detects at least one yawn event and/or speech event based on the respiration signal 12. The determination of the yawn event and/or speech event can in particular be performed in real-time. The signal processor 10 determines an alertness level 14 of the user 1 based on the at least one detected yawn event 16 and/or detected speech event.

In particular, the signal processor can perform or be adapted to perform a classification of the respiration signal or pattern into a yawn event, a speech event, or normal breathing. This can for example be performed by a respiration pattern classifier component. The classification can in particular be performed in real-time. The alertness level can then be determined based on the classification into yawn event, speech event or normal breathing. This can for example be performed by an alertness classifier component. The respiration pattern classifier component and/or the alertness classifier component can be part of or implemented in the signal processor.

The system further comprises a feedback unit 30 adapted to provide feedback to the user 1 based on the determined alertness level 14. The signal processor 10 transmits the alertness level 14 to the feedback unit 30. The feedback unit 30 receives the alertness level 14 from the signal processor 10. The feedback unit 30 is adapted to provide feedback to the user 1, if the determined alertness level 14 is below a preset value. In this way, a warning can be provided to the user 1, for example when the user 1 is drowsy and is about to fall asleep.

Fig. 2 shows a schematic representation of a user 1 wearing a seatbelt 21. The seatbelt 21 can in particular be a seatbelt 21 of a seat in a car. A respiration sensor 20 is integrated into the seatbelt 21, which is worn by the user 1. The seatbelt 21 here refers to a safety seatbelt designed to secure the user 1 against harmful movement that may result from a collision or a sudden stop. The seatbelt 21 is intended to reduce injuries by stopping the user 1 from hitting hard interior elements of the vehicle or other passengers and by preventing the user 1 from being thrown from the vehicle. In this embodiment, the respiration sensor 20 is a radar-based respiration sensor, in particular a Doppler radar-based respiration sensor. Since the radar-based respiration sensor is a contactless sensor, it is in particular suitable to be embedded in a small-sized system like the seatbelt 21. It will be understood that the radar-based respiration sensor can also be disposed on or integrated in any other suitable object, such as for example a steering wheel, portable device or a mattress.

The radar-based respiration sensor 20 is used to measure and provide the respiration signal of the user 1. This approach makes it possible to monitor breathing-related thorax motion, and thus the breathing of the user, as well as context information, such as the activity of the user. The radar-based respiration sensor 20 is adapted to transmit electromagnetic waves which are reflected at the chest wall of the user and undergo a Doppler frequency shift if the chest wall of the user 1 is moving due to respiration of the user 1. Therefore, the received signal measured by the radar-based respiration sensor 20 contains information about the thorax motion. The Doppler-radar signal for a single target, which is a good approximation of the thorax of the user, is given by:

x(t) = a(t) · cos(θ(t))   (1)

The amplitude a(t) can be assumed to be constant, thus a(t) = a0, since only small distance changes, for example in the centimeter range, are considered; these are due to breathing and the beating heart, disregarding large movements of the user. The phase term in equation (1) above can be expressed as:

θ(t) = (4π/λ) · (d0 + Σk xk(t))   (2)

where λ is the wavelength of the transmitted waves and d0 is the sensor-thorax distance for t = 0. In this example, the sum term in equation (2) consists of four terms due to four different motions that are considered: first, the breathing motion (amplitude A of 5 mm to 30 mm at 0.1 Hz to 0.8 Hz); second, the beating heart (typically less than 5 mm at 0.5 Hz to 3 Hz); third, the user's global motion; and fourth, if applicable, movement of the sensor itself. For an ideal measurement situation with breathing motion only and a perfect estimation of the phase term, equation (2) reduces to:

θ(t) = (4π/λ) · (d0 + x(t))   (3)

where x(t) here denotes the breathing motion alone.
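For a rough sense of scale, the breathing-only phase term θ(t) = (4π/λ) · (d0 + x(t)) can be evaluated numerically. This is a sketch only: the carrier frequency of the sensor is not given in the text, so a 24 GHz radar (λ ≈ 12.5 mm) and a 0.40 m rest distance are assumed here.

```python
import math

# Assumed sensor parameters (not specified in the text):
WAVELENGTH = 0.0125   # m, ~24 GHz carrier
D0 = 0.40             # m, sensor-thorax distance at t = 0

def phase(displacement):
    """Phase term for a given chest displacement x(t), per equation (3):
    theta(t) = (4 * pi / lambda) * (d0 + x(t))."""
    return 4.0 * math.pi / WAVELENGTH * (D0 + displacement)

# A 10 mm breathing excursion (within the 5-30 mm range quoted in the
# text) produces the phase swing:
swing = phase(0.010) - phase(0.0)
```

A 10 mm inhalation sweeps the phase through more than a full 2π cycle, which is why centimeter-scale chest motion is readily visible to such a sensor.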

Fig. 3 shows a diagram of a respiration signal 12 indicating normal breathing of a user. Fig. 4 shows a diagram of a respiration signal 12 indicating speech of a user. Fig. 5 shows a diagram of a respiration signal 12 indicating yawning of a user. Each diagram shows the amplitude of the respiration signal over time. These diagrams are results of an experiment, where the respiration signal 12 of a user was measured during non-eventful breathing and eventful breathing. Non-eventful breathing is meant to be normal breathing, also called a baseline session. Eventful breathing is meant to be a yawning session, during which the user is quiet but yawns, and/or a speech session, during which the user speaks (e.g. reading a passage from a book to simulate a discussion in a car). As can be seen in Fig. 5, yawn events 16 can clearly be distinguished in the respiration signal 12. For illustration and simplification purposes only four of the plurality of yawn events 16 are marked by a circle in Fig. 5.

Fig. 6 shows a diagram of an amount of time variance between local minima in a respiration signal for four different users. Fig. 7 shows a diagram of an amount of amplitude variance in a respiration signal for four different users. For each user, normal breathing (baseline session), yawning (yawning session) and speech (speech session) were investigated. Thus, Fig. 6 and Fig. 7 are based on an evaluation of the three types of respiration signals shown in Figs. 3 to 5. Fig. 6 and Fig. 7 are used here for mere illustration purposes, to show that a differentiation between non-eventful and eventful breathing is possible. As can be seen in Fig. 6, for all users the time variance between local minima in the respiration signal during non-eventful breathing (normal breathing or baseline session) is significantly lower than the same time variance during the eventful breathing sessions (yawning session and speech session). As can be seen in Fig. 7, for all users the amplitude variance of the respiration signal during non-eventful breathing is significantly lower than the amplitude variance during eventful breathing (yawning session and speech session).
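The two quantities plotted in Figs. 6 and 7 can be computed from a sampled signal as follows (a naive sketch: real minima detection would need smoothing, and the function names are ours):

```python
import statistics

def local_minima_indices(signal):
    """Indices of strict local minima of a sampled respiration signal."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i] < signal[i - 1] and signal[i] < signal[i + 1]]

def breathing_variances(signal, sample_period=1.0):
    """The two discriminative quantities of Figs. 6 and 7: variance of
    the time between consecutive local minima, and variance of the
    amplitude samples."""
    minima = local_minima_indices(signal)
    intervals = [(b - a) * sample_period for a, b in zip(minima, minima[1:])]
    time_variance = statistics.pvariance(intervals) if len(intervals) > 1 else 0.0
    return time_variance, statistics.pvariance(signal)
```

Regular breathing gives near-zero time variance between minima; yawns and speech disturb the cycle spacing and amplitude, raising both quantities.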

Fig. 7 also shows that for all participants the amplitude variance of the respiration signal during the yawning session is significantly higher than during the speech session. This is because yawns involve deep inhalations, much deeper than when the user speaks, so that the minima of the respiration signal reach much lower values in the case of a yawn event than in the case of a speech event.

Fig. 8 shows a diagram of a respiration signal 12 having yawn events 16 (yawning session) detected by a signal processor, system or method according to a first embodiment of the present invention. In this embodiment, the signal processor is adapted to detect the at least one yawn event 16 by detecting an amplitude peak if the amplitude of the respiration signal 12 exceeds a preset threshold 17. The preset threshold 17 is selected to be less than an amplitude of a speech event and an amplitude of normal breathing of the user. Fig. 9 shows a diagram of a respiration signal, when the user speaks (speech session). In Fig. 9 the same preset threshold 17 is indicated as in Fig. 8. As can be seen in Fig. 9, the respiration signal 12 contains only speech, and no yawn events are detected. This is because the preset threshold 17 is selected to be less than the amplitude of a speech event.

Now, a second embodiment of the present invention will be explained with reference to Fig. 10 to Fig. 16. This embodiment can be used as an alternative or in addition to the first embodiment described above in connection with Figs. 8 and 9.

Fig. 10 shows an exemplary respiration signal 12. In this embodiment, the signal processor is adapted to determine at least one feature of the respiration signal. In this example, one of the at least one feature is an amplitude frequency distribution of the amplitudes over time (or its histogram representation) of at least a part of the respiration signal 12. The histogram representation is a representation of the amplitude frequency distribution of the amplitudes over time. In other words, the amplitude frequency distribution can be the distribution of the frequency of occurrence of different amplitudes (amplitude values) in different ranges. The signal processor is adapted to determine the amplitude frequency distribution (or its histogram representation) of at least the part of the respiration signal, and to detect the at least one yawn event and/or speech event based on the determined amplitude frequency distribution (or its histogram representation).

Fig. 11 shows a first part of the respiration signal of Fig. 10. Fig. 13 shows a second part of the respiration signal of Fig. 10. In the respiration signal part of Fig. 11 a yawn event 16 is present, whereas in the respiration signal part of Fig. 13 no yawn event is present. The part of the respiration signal in Fig. 11 comprises exactly one yawn event 16. In this way, a time window sized to cover only exactly one yawn event can be applied to the respiration signal 12.

Fig. 12 shows a histogram representation of the part of the respiration signal of Fig. 11. Fig. 14 shows a histogram representation of the part of the respiration signal of Fig. 13. When comparing Fig. 12 to Fig. 14, it can clearly be seen that the amplitude frequency distribution (or its histogram representation) of the respiration signal part having the yawn event (Fig. 11) can clearly be distinguished from that of the respiration signal part having no yawn event (Fig. 13). In Fig. 12, which shows the histogram representation of the respiration signal part having the yawn event 16, there is activity in the lower bins of the histogram representation. This indicates lower amplitudes in the respiration signal part, as can be seen in Fig. 11. In the histogram representation of Fig. 14, no activity in the lower bins is present. Thus, there are no lower amplitudes in the respiration signal part, as shown in Fig. 13. In this way, the yawn event 16 can be detected based on the amplitude frequency distribution (or its histogram representation), in particular the shape of the histogram representation. Therefore, the difference between the histogram representations, as shown in Fig. 12 and Fig. 14, can be used for an automated classification of the respiration signal into yawn event(s). In the same way as explained with reference to Fig. 11 to 14, speech event(s) can be detected in the respiration signal. Furthermore, other events can also manifest themselves in the respiration signal in this way.

An example of a classification into a yawn event, speech event and normal breathing will now be explained with reference to Fig. 15. Fig. 15 shows a diagram of clusters obtained by a signal processor, system or method according to the second embodiment of the present invention. The signal processor is adapted to detect the at least one yawn event 16 and/or speech event using a machine learning algorithm. The signal processor is adapted to receive at least one respiration training signal (or reference signal) selected from the group comprising a respiration training signal indicative of normal breathing of the user, a respiration training signal indicative of yawning of the user, and a respiration training signal indicative of speech of the user. A clustering technique can then be used to determine a yawn event cluster 32 and a speech event cluster 34 using the at least one respiration training signal, as shown in Fig. 15. Further, additional clusters can be determined, such as the normal breathing cluster 36 shown in Fig. 15. In this experiment, a user was asked to breathe normally (baseline session), yawn (yawning session), and speak (speech session) for a few seconds each in order to determine the respiration training signal indicative of normal breathing, of yawning, and of speech of the user, respectively. These respiration training signals are then used at run-time to distinguish between normal breathing, yawning and speech.
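One simple way to realize the described training and run-time classification is a nearest-centroid scheme over per-session feature vectors. This stands in for the clustering technique, which the text does not specify, and is purely an illustrative choice:

```python
import numpy as np

def train_centroids(session_features):
    """Compute one cluster centroid per labelled training session.

    session_features maps a label ('normal', 'yawn', 'speech') to an
    array of feature vectors extracted from that session's respiration
    training signal.
    """
    return {label: np.asarray(feats).mean(axis=0)
            for label, feats in session_features.items()}

def classify(feature_vector, centroids):
    """Assign a run-time feature vector to the label of the nearest centroid."""
    fv = np.asarray(feature_vector)
    return min(centroids, key=lambda lbl: np.linalg.norm(fv - centroids[lbl]))
```

An unsupervised technique such as k-means would determine the clusters without session labels; the labelled-session variant above keeps the sketch short.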

In particular, in this embodiment the signal processor is adapted to perform the machine learning algorithm based on at least one feature of the respiration signal. In this case, one such feature is the amplitude frequency distribution (or its histogram representation) as explained in connection with Figs. 10 to 14; it is used as an input for the machine learning algorithm. The machine learning algorithm can be based on a number of features. In the experiment in connection with Fig. 15, ten features were used. In this way a multidimensional feature vector can be determined based on (at least part of) the respiration signal, wherein the dimension corresponds to the number of features, thus in this example a 10-dimensional feature vector. Fig. 15 shows a two-dimensional projection of this 10-dimensional feature vector. It will be understood that any other number of features can be used. Examples of such features are an average number of respiration cycles, a median number of respiration cycles per epoch, an inclination coefficient showing whether the respiration rate goes up or down, an average of the amplitudes of respiration cycles, a median of the amplitudes of respiration cycles, and an inclination coefficient showing whether the amplitude goes up or down. However, it will be understood that any other suitable feature can be used. In particular, the features can be combined such that both yawn events and speech events can be reliably detected.
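A few of the listed features can be sketched as follows. The zero-crossing cycle count and the sampling rate are assumptions made for illustration; the text does not state how the features are computed:

```python
import numpy as np

def respiration_features(window, fs=50.0):
    """Extract an illustrative feature vector from one respiration window.

    Covers some of the example features: number of respiration cycles,
    average and median amplitude, and an inclination coefficient (the
    slope of a linear fit, showing whether the amplitude goes up or
    down). Counting cycles via zero crossings of the mean-removed
    signal is an assumption, not a method stated in the text.
    """
    x = np.asarray(window, dtype=float)
    centered = x - x.mean()
    # two sign changes of the centered signal ~ one respiration cycle
    n_crossings = np.count_nonzero(np.diff(np.signbit(centered)))
    n_cycles = n_crossings / 2.0
    t = np.arange(len(x)) / fs
    inclination = np.polyfit(t, x, 1)[0]  # amplitude trend over time
    return np.array([n_cycles, x.mean(), np.median(x), inclination])
```

Stacking such vectors per window, possibly alongside the histogram bins, yields the multidimensional feature vectors that are clustered as in Fig. 15.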

Fig. 16 shows a respiration signal 12 having yawn events 16 and the mapping of the yawn events 16 to points in the yawn event cluster 32 of Fig. 15. As can be seen in Fig. 16, each point of the yawn event cluster 32 corresponds to one of the yawn events 16 in the respiration signal 12. Thus, the unsupervised clustering shown in Fig. 15, obtained by applying a machine learning algorithm, is successful. In this way, a fully adaptive system for the detection of yawn event(s) and/or speech event(s) based on the respiration signal can be provided.

Fig. 17 shows a flow diagram of a method for determining an alertness level of a user according to an embodiment, and Fig. 18 shows a flow diagram of a method for determining an alertness level of a user according to another embodiment. By detecting yawn event(s), speech event(s) or classifying into normal breathing, yawn event or speech event, the alertness level of the user can be determined.

In the embodiment of Fig. 17, in an initial step 101 a respiration signal is received. Then, in step 102, it is determined whether at least one yawn event is detected. In particular, it can be determined whether a specific amount or a specific frequency of yawn events has been detected. If yawn event(s) have been detected, the method proceeds to step 104 of determining whether at least one speech event is detected. If at least one speech event has been detected, the alertness level is determined to be a medium alertness level 108. Thus, a medium alertness level is detected if both at least one yawn event and at least one speech event are detected. If at least one yawn event has been detected but no speech event is detected, the alertness level is determined to be a low alertness level 107. In particular, a low alertness level 107 can be detected if the amount or frequency of detected yawn events is above a preset threshold; an increasing frequency of yawn events means that the user is yawning more and more often. Returning to step 102, if the result of the determination is that no yawn event is detected, in step 103 it is then determined whether at least one speech event is detected. If at least one speech event is detected (and no yawn event is detected), the alertness level is determined to be a high alertness level 106. If no speech event is detected (and no yawn event is detected), the alertness level is determined to be neutral, e.g. indicating normal breathing 105.
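The decision logic of Fig. 17 condenses into a small function; the string labels are illustrative stand-ins for the levels 105 to 108:

```python
def alertness_level(yawn_detected, speech_detected):
    """Map detected events to an alertness level per the Fig. 17 flow.

    yawn + speech -> medium (108), yawn only -> low (107),
    speech only -> high (106), neither -> neutral / normal breathing (105).
    """
    if yawn_detected:
        return "medium" if speech_detected else "low"
    return "high" if speech_detected else "neutral"
```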

In the embodiment of Fig. 18, in an initial step 111 at least one respiration training signal is received, in particular the respiration training signals previously described. In another step 112 the current respiration signal of the user is received. Then, in step 113, a yawn event, speech event and/or normal breathing is detected or classified based on the respiration signal using a machine learning algorithm. In particular, one or more features can be used as an input for the machine learning algorithm; a multidimensional feature vector based on (at least part of) the respiration signal can be determined, wherein the dimension corresponds to the number of features. If at least one speech event and no yawn event is detected, indicated by step 114, the alertness level is determined to be a high alertness level 106. If both at least one yawn event and at least one speech event are detected, the alertness level is determined to be a medium alertness level 108, indicated by step 115. If at least one yawn event and no speech event is detected, the alertness level is determined to be a low alertness level 107, indicated by step 116. In particular, in step 116 it can be determined whether the amount or frequency of detected yawn events is above a preset threshold, as previously explained.
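The preset threshold on the amount or frequency of yawn events can be sketched as a count over a recent time window; the window length and maximum count below are assumptions, since the text leaves the threshold unspecified:

```python
def low_alertness_by_yawn_rate(yawn_times_s, now_s, window_s=60.0, max_yawns=2):
    """Return True if the yawn frequency exceeds a preset threshold.

    Counts yawn events detected within the last window_s seconds; more
    than max_yawns events in that window indicates a low alertness
    level. Both parameter values are illustrative assumptions.
    """
    recent = [t for t in yawn_times_s if now_s - window_s <= t <= now_s]
    return len(recent) > max_yawns
```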

The present invention can in particular be used in the automotive context for detecting drowsiness of a driver: drowsiness of a driver can be detected when the alertness level of the user is determined to be low. However, it will be understood that the present invention can be applied not only in an automotive context, but in any other suitable context that requires high alertness of the user, for example in a plane, in a hospital or in industrial shift work. Another example is the consumer lifestyle domain, for example for relaxation or sleep applications.

While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.

In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.

A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.

Any reference signs in the claims should not be construed as limiting the scope.